What Are AI Hallucinations, Anyway?

And how to harness them to your advantage

In partnership with

Let’s get serious

What’s the mark of the world’s best, most growth-minded newsletter creators? They’re all on beehiiv.

Why? Our entire platform exists to help serious content creators scale faster. We’re built for those who are ready to take their content and build it into a behemoth. 

It’s why we offer a no-code website builder. It’s why our ad network matches you with global brands like Nike and Netflix. It’s why we never take a dime of your subscription revenue. And it’s why Arnold Schwarzenegger and Ashley Graham trust us to connect with their huge fan bases. 

It’s all to put your hard work in front of more people. So if you’re ready to build, ready to grow, and ready to make the world take notice, beehiiv is ready for you.

We’ve all heard that AI models hallucinate. But what does that actually mean?

In my most recent video, I break down exactly what AI hallucinations are, why they happen, how to avoid them, and most interestingly, how to harness them to your advantage.

What are Hallucinations and Why Do They Happen?

First principles: What’s a hallucination?

AI hallucinations happen when Large Language Models and other AI systems imagine information that’s not accurate, but that’s consistent with patterns in their training data.

For example, imagine that I asked an LLM to come up with a list of 10 barbecue restaurants in Lafayette, California, where I live.

There are really only 3 I could name. But since I’ve asked the model for 10, it’s very likely that it would imagine at least a few non-existent barbecue restaurants in an attempt to honor the intent of my query.

Crucially, it would likely write compelling, realistic-sounding descriptions for the imagined restaurants. Maybe it would say they were located on Mount Diablo Blvd (a real road) or include a realistic-sounding, made-up quote from the local chamber of commerce about the restaurant’s service to the community.

Those kinds of imagined pieces of information are hallucinations.

Again—and this is important—they’re often consistent with the patterns in the model’s training data.

That can make hallucinations very hard to spot. They often sound so plausible that they’re difficult to distinguish from reality. Just ask the lawyer who included nonexistent court cases dreamed up by ChatGPT in a brief he filed in federal court…

How to Avoid Them

Because they’re so realistic—yet still totally wrong—AI hallucinations can potentially be extremely harmful.

How do you avoid them?

First, make sure your prompts don’t ask an LLM like ChatGPT to answer questions where it’s likely to have incomplete data, or where no real answer exists.

For example, I know there’s only a handful of BBQ places in Lafayette. Using that knowledge, I could ask the LLM a more open-ended question, like “Make a list of BBQ places in Lafayette.”

With a query like that, I’m not specifying that I need 10. I’m therefore not asking the LLM to answer a question it can’t answer. As a result, it’s much more likely to name the three that actually do exist and stop there, rather than hallucinating more in order to honor my query.
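If you’re scripting this rather than typing into a chat window, the same idea applies at the API level. Here’s a minimal sketch using OpenAI’s Python SDK; the model name, prompt wording, and setup are illustrative assumptions, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A constrained prompt demands a fixed count the model may not be able to
# honor, nudging it toward inventing restaurants just to fill the list.
risky_prompt = "List 10 barbecue restaurants in Lafayette, California."

# An open-ended prompt lets the model stop when it runs out of real answers.
safer_prompt = "Make a list of BBQ places in Lafayette, California."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": safer_prompt}],
)
print(response.choices[0].message.content)
```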

You can also avoid hallucinations by handing actual data to your LLM, rather than asking it to rely solely on its training.

If I gave ChatGPT a full list of every restaurant in Lafayette and then asked it for BBQ places, it would be much less likely to imagine nonexistent ones.

Whenever possible, give your LLM actual, accurate data to work from. With today’s massive context windows, this is easier and cheaper than ever.
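As a rough illustration, here’s one way that might look in code, again with OpenAI’s Python SDK. The restaurant list, model name, and prompt are all hypothetical placeholders; the point is simply to pass real data in the prompt rather than relying on the model’s memory:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical ground-truth data. In practice this might come from a
# database, a scraped directory, or a local file.
lafayette_restaurants = [
    "Example Smokehouse (barbecue)",
    "Example Thai Kitchen (Thai)",
    "Example Pizza Co. (pizza)",
    # ...and so on for every restaurant in town
]

prompt = (
    "Here is a list of every restaurant in Lafayette, California:\n"
    + "\n".join(lafayette_restaurants)
    + "\n\nUsing only this list, which of them are barbecue restaurants?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```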

How to Use Hallucinations to Your Advantage

AI hallucinations aren’t always bad. In fact, hallucinations are often the closest that an LLM can get to anything like real creativity.

In some applications—like imagining new business ideas, or creating AI art—hallucinations are actually desirable. By following patterns in their massive training data sets—and then extrapolating new information—LLMs can essentially come up with new ideas.

Many AI models allow you to specify a “temperature” or other measure of randomness or extrapolation. By increasing it, you can increase the chance that the model hallucinates. This increases the weirdness and inaccuracy of many of the results.

But it also means you’re more likely to get something really out there and new.
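If you want to lean into that, many text APIs expose temperature directly. Here’s a hedged sketch with OpenAI’s Python SDK; the model name and prompt are illustrative, and the accepted range varies by provider (OpenAI’s chat API takes roughly 0 to 2):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = "Brainstorm five unusual business ideas for a barbecue restaurant."

# Low temperature: conservative, predictable completions.
# Higher temperature: more randomness and more creative leaps, which is
# exactly what you want when brainstorming.
for temp in (0.2, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        temperature=temp,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- temperature={temp} ---")
    print(response.choices[0].message.content)
```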

As an example, here’s Midjourney’s response to the query “A really delicious meal”

It’s a cooked steak with potatoes and root veggies. Sure, that’s delicious. A little mid, though.

Now, here’s the same prompt, but with the temperature (Midjourney calls it “weirdness”) set to the max.

That looks like seared ahi tuna with sliced cherry tomatoes served over a bed of linguine with green beans.

Also delicious—but a lot more creative than a simple steak with potatoes!

Again, hallucinations aren’t always bad. As a responsible user of AI, your goal is to decide when you want to avoid hallucinations, and when you want to embrace them.

AI hallucinations could land you in hot water—but they could also help you come up with something truly novel and interesting.

Used well, they’re a superpower.

ICYMI

Want to learn how to use OpenAI’s new Sora video generator? I’ve got a tutorial on that!