AI hallucinations: what we can learn from them
Have you ever wondered about the quality of answers from AI tools? I guess we can tell when they're not up to the mark. In the same vein, have you wondered about the quality of your own answers - in life, work or elsewhere?
There are moments when we're not providing a quality answer. There's a common link between us and AI tools. Let's explore that today.
If you prompt any AI tool, it will give you an answer. It has to, because it's programmed to do so. There are times when its answer may not make sense. These states are called hallucinations, i.e. instances when it generates an answer that sounds plausible but is made up.
It's kind of like an engineering student at vivas (Q&A examinations with professors). If you've done engineering, you know what I mean. We had to give some answer because an "I don't know" would count heavily against us.
But those answers worked at times. I can attest to that ;)
It all depends on the one evaluating the output. If they're smart and knowledgeable, the bs is caught immediately. If not, we roll with it, not realising that we're in a trap of our own ignorance.
We can use these hallucinations to our advantage. Here's how:
Know that an answer will always be produced
Even if it isn't correct, an answer comes out as long as there's some training material to draw on. We don't have complete knowledge either. That's an excellent gap to be aware of. Now, work on it :-)
Insert a pause
We are conditioned to provide rapid answers. AI tools are no different. A friend got a better answer when he prompted the tool to "take your time, I am in no rush." Aren't we the same? Take your time in the search for a better answer.
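If you want to try that nudge yourself, here's a minimal sketch assuming the OpenAI Python client; the model name and the exact wording of the instruction are just illustrative, and any chat-style API would work the same way.

```python
# Minimal sketch: nudging a model to slow down before answering.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Model name is an assumption.
from openai import OpenAI

client = OpenAI()

question = "Summarise the trade-offs of microservices versus a monolith."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do here
    messages=[
        # The "take your time" nudge goes up front, along with permission
        # to admit uncertainty instead of making something up.
        {
            "role": "system",
            "content": (
                "Take your time, I am in no rush. Think it through before "
                "answering, and say 'I don't know' if you are not sure."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```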
Leverage a team
We have our blind spots. Even if we all learn the same thing, each one of us tells a different story when asked to repeat it. Use that variety of perspectives to get a better insight.
AI tools will keep optimising. In the near future, they may have far fewer of these hallucinated states.
We humans, on the other hand, will have many more of them. But there's no reason to be a know-it-all. For the areas we care about, let's get better.
That's it for today.
Happy Hallucinating!
Hemang.
I love this analogy. Drawing parallels between AI hallucinations and human learning really makes the concept tangible. Dr. Hemang Shah 💡
The parallel hits deep - after 2.5 years of catching AI systems confidently fabricating data, I started questioning my own confident assumptions in business decisions. Key point here - Turns out humans and AI fail the same way: we both optimize for sounding right instead of being right. The difference is humans usually know when we're guessing, but AI delivers fabrications with maximum confidence. That's why I built consensus validation into my systems - multiple models arguing with each other catches the "confident lies" that single models (and single humans) miss. The real lesson from fixing AI hallucinations? Always show your work, whether you're human or machine. My best business decisions come from forcing myself to provide evidence for claims, just like I force my AI systems to do. Curious, what's been your experience with overconfident human decisions vs overconfident AI outputs?
There is ongoing work to teach AI to say “I don’t know”. Like most aspects of life, we are remembered for our errors and not our successes. Six Sigma remembers the 3.4 defects per million, not the 999,996 times someone was right. The “know-it-all” eventually loses credibility and would be perceived as untrustworthy during critical times.