
AI Hallucinations Are Inevitable: The Ongoing Need for Human Expertise in the Age of AI

  • Writer: caitlinraymondmdphd
  • 1 day ago
  • 2 min read

The other day, I asked an AI model about the Diego blood group system. It gave me a slick, confident answer — beautifully formatted, authoritative in tone — and completely wrong.


If I were a patient, or even a busy clinician, I might not have caught it. But as a transfusion medicine physician, I knew immediately: this was a hallucination.


And here’s the kicker — hallucinations like this aren’t just occasional glitches. They’re mathematically inevitable.


The Myth of the Perfect AI

Recent studies, including some from OpenAI itself, show that no matter how advanced large language models become, they will sometimes generate false information. That’s not because they’re “bad” or because engineers haven’t worked hard enough. It’s built into how they function.


AI predicts the most likely next word, not the absolute truth. For common facts, that works well. But for rare details, like unusual antigens, edge-case transfusion reactions, or cell therapy nuances, the statistical floor drops out: the training data is too sparse for the model to be confident about what comes next. In those situations, the model is more likely to “make something up” than to leave a blank or express uncertainty.
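One way to see this intuition is with a toy sketch (not a real language model; the probabilities below are made up for illustration). When a fact is common, the next-word distribution is peaked and its entropy is low; when the context is rare, the distribution flattens out, entropy rises, and whatever word gets sampled is closer to a guess:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: how 'unsure' a next-word distribution is."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-word distributions over four candidate words.
# Common fact: the model is confident, so probability piles onto one word.
common_fact = [0.90, 0.05, 0.03, 0.02]

# Rare detail (say, an unusual antigen): probability spreads thin.
rare_detail = [0.30, 0.25, 0.25, 0.20]

print(f"common fact entropy: {entropy(common_fact):.2f} bits")
print(f"rare detail entropy: {entropy(rare_detail):.2f} bits")
```

The model still picks the highest-probability word in both cases; the difference is that in the rare case that “best” word barely beats the alternatives, which is exactly where confident-sounding fabrication creeps in.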


And in medicine, “making something up” isn’t just embarrassing. It’s dangerous.


Why This Is Good News for Workers

For those of us working in medicine, this is actually good news. Hallucinations prove that AI isn’t an autonomous replacement — it’s a sophisticated tool that still needs us.


AI will be able to:

  • Spot patterns in antibody panels faster than a human

  • Suggest the most efficient inventory allocation

  • Draft a transfusion reaction note in seconds


But it will never guarantee accuracy in the cases that matter most. Rare antigens. Borderline transfusion decisions. Patients whose context changes the entire equation. Those are exactly the situations where hallucinations spike — and where human oversight is non-negotiable.


As long as hallucinations exist, so does the need for human experts.


For laboratorians and transfusion specialists, that’s not just job security — it’s reassurance that expertise remains indispensable.


The Hybrid Future

So what does the future of transfusion medicine look like in the AI era?


I think of it as a “calculator moment.” AI will make our work faster, broader, and more efficient. It will handle the rote paperwork, surface the guidelines, and flag patterns across massive datasets that no human could scan in real time.


But it won’t replace the expert at the bench or the physician making the final call. Instead, our role becomes even more vital: verifying, contextualizing, and deciding when the model is wrong.


That’s not a bug in the system. That’s the point.


Closing Thoughts

When I asked the AI about Diego, it hallucinated. A transfusion medicine physician wouldn’t.


That difference — the human check, the lived expertise — is what keeps medicine safe.


AI hallucinations may be inevitable. But so is the ongoing need for human expertise in medicine. If anything, the rise of AI doesn’t erase our roles. It makes them stronger.

 
 

Caitlin Raymond MD/PhD

I'm a hybrid of Family Medicine and Pathology training. I write about the intersection of blood banking and informatics, medical education, and more!
