AI hallucinations: what we can learn from them

Have you wondered about the quality of answers from AI tools? I'd guess we can tell when they're not up to the mark. In the same vein, have you wondered about the quality of your own answers - in life, work, or elsewhere?


There are moments when we don't provide a quality answer. There's a common link between us and AI tools. Let's explore that today.


If you prompt any AI tool, it will give you an answer. It has to, because it's programmed to do so. At times, its answer may not make sense. These instances are called hallucinations, i.e., cases where the tool generates an answer that sounds plausible but is essentially made up.


It's kind of like an engineering student at vivas (oral Q&A examinations with professors). If you've done engineering, you know what I mean. We had to give some answer, because an "I don't know" would count heavily against us.


But those answers worked at times. I can attest to that ;)


It all depends on who is evaluating the output. If they're smart and knowledgeable, the BS is caught immediately. If not, we roll with it, not realising that we're in a trap. Of our own ignorance.


We can use these hallucinations to our advantage. Here's how:


Know that the capability to provide an answer is always there.

Even though it may not be correct, an answer is produced as long as there's some training material to draw on. We don't have complete knowledge either. That's an excellent gap to be aware of. Now, work on it :-)


Insert a pause

We are conditioned to provide rapid answers. AI tools are no different. A friend got a better answer when he prompted the tool with "take your time, I am in no rush." Aren't we the same? Take your time in the search for a better answer.
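
If you want to try the "take your time" idea yourself, here's a minimal sketch. It assumes nothing about which model or library you use: `call_model` is a placeholder for your own API client, and the exact wording of the unhurried instruction is only an illustration, not a prescription.

```python
def call_model(messages):
    """Placeholder: swap in your actual LLM API call here."""
    raise NotImplementedError("plug in your own model client")


def ask(question, unhurried=False):
    """Ask the same question with or without a 'take your time' instruction."""
    system = "Answer the user's question."
    if unhurried:
        # Wording is illustrative; the point is to remove the time pressure
        # and invite the model to reason before answering.
        system += " Take your time, I am in no rush. Think it through before answering."
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
    return call_model(messages)


# Usage (once call_model is wired up): compare the two answers yourself.
# fast = ask("Why did our Q3 churn go up?")
# slow = ask("Why did our Q3 churn go up?", unhurried=True)
```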


Leverage a team

We have our blind spots. Even if we all learn the same thing, each of us tells a different story when asked to recount it. Use that variety of perspectives to get better insight.


AI tools will keep optimising. In the near future, they will hallucinate far less often.


We humans, on the other hand, will still have plenty of them. But there's no reason to be a know-it-all. For the areas we care about, let's get better.


That's it for today.

Happy Hallucinating!

Hemang.

I love this analogy. Drawing parallels between AI hallucinations and human learning really makes the concept tangible. Dr. Hemang Shah 💡


The parallel hits deep - after 2.5 years of catching AI systems confidently fabricating data, I started questioning my own confident assumptions in business decisions. The key point: it turns out humans and AI fail the same way - we both optimize for sounding right instead of being right. The difference is humans usually know when we're guessing, but AI delivers fabrications with maximum confidence. That's why I built consensus validation into my systems - multiple models arguing with each other catches the "confident lies" that single models (and single humans) miss. The real lesson from fixing AI hallucinations? Always show your work, whether you're human or machine. My best business decisions come from forcing myself to provide evidence for claims, just like I force my AI systems to do. Curious, what's been your experience with overconfident human decisions vs overconfident AI outputs?
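
A minimal sketch of the consensus idea this comment describes, purely as an illustration and not the commenter's actual system: `query_model` is a placeholder for whatever model clients you use, and the agreement check is deliberately naive (exact string match), whereas a real setup would compare meaning and have the models critique each other's answers.

```python
from collections import Counter


def query_model(name, question):
    """Placeholder: route the question to the named model via your own client."""
    raise NotImplementedError(f"plug in a real call for {name}")


def consensus_answer(question, models=("model_a", "model_b", "model_c")):
    """Ask several models the same question and flag disagreement.

    Deliberately naive: agreement is checked by exact string match; a real
    setup would compare meaning, not wording.
    """
    answers = {m: query_model(m, question) for m in models}
    counts = Counter(answers.values())
    best, votes = counts.most_common(1)[0]
    if votes <= len(models) // 2:
        # No clear majority: treat the output as unreliable and verify manually.
        return {"answer": None, "flag": "models disagree", "raw": answers}
    return {"answer": best, "flag": None, "raw": answers}
```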


There is ongoing work to teach AI to say "I don't know". Like most aspects of life, we are remembered for errors and not successes. Six Sigma remembers the 3.4 defects, not the 999,996 times someone was right. The "know-it-all" eventually loses credibility and will be perceived as untrustworthy in critical times.


