Large language models have a well-earned reputation for making things up. But for AI cybersecurity architect Erica Burgess, GPT hallucinations aren't a bug; they can be a threat-modeling feature. "I like to think of the hallucinations as just ideas that haven't been tested yet," she said.