How I Actually Use AI: A Case for Augmented Intelligence




The discourse has two settings, and both are wrong

 

Pick up any piece about artificial intelligence in medicine and you will find one of two arguments. Either AI is going to revolutionize clinical practice by automating diagnosis and replacing physician judgment, or AI is a dangerous, hallucinating black box that no responsible clinician should touch. Both camps are loud. Both camps are largely arguing past the actual experience of physicians who use these tools.

 

The thing both arguments have in common is that they imagine AI as an autonomous agent — something that acts independently, makes decisions, and produces outputs you simply accept or reject wholesale. That framing drives the fear and the hype equally. And it doesn't describe how I use AI, or how I think physicians should use it.

 

There's a better frame. It's called augmented intelligence, and the distinction matters.

 

What augmented intelligence actually means

 

Augmented intelligence is not a euphemism for AI with better PR. It describes a specific relationship between the human and the tool: the AI amplifies your thinking, your drafting, your analysis — and you retain intellectual ownership of the output. You are the decision-maker. You direct the work. You evaluate what comes back and correct it when it's wrong. The AI doesn't publish anything. You do.

 

This is meaningfully different from autonomous AI, which operates independently and generates outputs without ongoing human oversight. The distinction isn't just philosophical — it has real implications for how you build your workflow, how you evaluate output quality, and where accountability sits.

 

In augmented intelligence, accountability never leaves the physician. That's not a limitation. It's the point.

 

What this looks like in practice

 

I use AI tools daily. I use Claude for writing and coding: editing blog posts, structuring arguments, generating diagrams, iterating on prose. I use Gemini for personal assistant tasks — scheduling, reminders, quick lookups. Different tools, different jobs, same underlying principle.

 

When I'm drafting a post, I bring the idea, the clinical knowledge, the interpretive framework, and the editorial judgment. Claude proposes structure, generates prose, and produces things like SVG diagrams that I couldn't efficiently produce by hand. I read everything. I correct errors — and there are always errors, some subtle. I rewrite passages that don't sound like me or aren't correct. I verify factual claims against primary sources.

 

The post that goes up is mine. The thinking is mine. The AI accelerated the production of a written artifact that represents my analysis. It did not perform the analysis.

 

This workflow is only valuable if I maintain that discipline. The moment I start publishing AI output I haven't critically evaluated, I've stopped practicing augmented intelligence and started practicing something more like delegation to a very fluent but unreliable assistant. Those are not the same thing.

 

The oversight imperative

 

Anyone who works in laboratory medicine already understands this intuitively, even if they haven't applied the framework to AI.

 

We do not report analyzer results without understanding what the analyzer did. We run QC. We investigate flags. We understand the assay's limitations, its interference profile, the conditions under which it fails. When a result looks wrong, we don't shrug and report it — we investigate. The instrument is a tool. We are responsible for the result.

 

AI output requires exactly the same critical scrutiny. The distinctive failure mode of large language models is not that they produce obviously garbled output — it's that they produce fluent, confident, plausible-sounding output that is wrong. A traditional analyzer error usually looks like an error. An AI hallucination often doesn't. It reads like a normal sentence. It cites a study that doesn't exist in exactly the same register as one that does.

 

This is why oversight isn't optional. It's not a hedge for cautious people. It's the minimum standard for using the tool responsibly. If you're accepting AI output without evaluating it, you're not practicing augmented intelligence. You're practicing something with no quality control, and in medicine, we know exactly how that ends.

 

The case for engaging now

 

I understand the instinct to wait. The tools are changing fast. The evidence base for clinical AI is immature. The regulatory landscape is unclear. Sitting it out feels like the prudent move.

 

But physicians who opt out aren't avoiding risk — they're just outsourcing the learning curve. Someone is going to set the norms for how AI gets used in your institution, your specialty, your practice environment. It will either be clinicians who have hands-on experience with the tools and understand their limitations, or it will be administrators, vendors, and policy-makers who don't see patients.

 

The physicians who engage critically now — who build workflows with real oversight, who learn where the tools fail, who can articulate what responsible use actually looks like — are the ones who will be positioned to shape those norms. The ones who wait will have AI handed to them later, implemented by people who weren't asking the right questions.

 

I'd rather be in the first group. I'd rather have colleagues in medicine who are in the first group.

 

Augmented intelligence, done right, is not about ceding judgment to a machine. It's about using a powerful tool with the same rigor we bring to every other tool in medicine. We validate. We monitor. We maintain accountability. That's not fear-mongering and it's not hype. It's just good practice.

 

Caitlin Raymond MD/PhD

I'm a hybrid of Family Medicine and Pathology training. I write about the intersection of blood banking and informatics, medical education, and more!


©2023 by Caitlin Raymond
