
AI as a Second Reader, Not a Second Brain: What We’re Getting Wrong in Pathology AI Adoption

  • Writer: caitlinraymondmdphd
  • 3 minutes ago
  • 5 min read

Introduction: The Problem With the "Second Brain" Metaphor

Artificial intelligence in pathology and laboratory medicine is often marketed with an irresistible promise: a second brain that will spot what humans miss, automate the tedious parts of practice, and bring order to the overwhelming volume of data moving through modern health systems. It’s a compelling metaphor—but also a deeply misleading one.

The truth is simpler and far more useful: most AI tools in lab medicine today are not second brains. They are second readers. They assist. They triage. They flag patterns. They highlight outliers. They nudge clinicians toward questions worth asking.


This is not a limitation—it is the sweet spot of responsible AI. The problem is that our metaphors, expectations, and sometimes our implementation strategies haven’t caught up with this reality. When we treat assistive AI as if it were autonomous, we misjudge both its power and its risks.


This piece reframes AI in pathology and transfusion medicine through a more grounded, clinically realistic lens: AI as a second reader—never the primary decision-maker.


Assistive vs Autonomous AI: Why the Distinction Matters

In public conversations, "AI" tends to be treated as a single monolithic category. But in clinical practice, the distinction between assistive and autonomous systems is foundational.


Assistive AI

Assistive AI tools support human decision-making without replacing it. They:

  • flag abnormal cells or slide regions for review,

  • surface unusual utilization patterns,

  • predict inventory needs,

  • identify potential bleeding risks or outlier transfusion practices,

  • augment quality control workflows.


The human remains the final decision-maker. The AI's role is advisory.
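
For informatics-minded readers, the "advisory only" contract can be made concrete in software. The sketch below is a minimal illustration, not any vendor's API: the names SlideFlag and route_flag are hypothetical, and the only action available to the model is adding an item to a human review queue.

```python
from dataclasses import dataclass

@dataclass
class SlideFlag:
    """Advisory output only: a region the model thinks deserves a pathologist's attention."""
    slide_id: str
    region: tuple            # (x, y, width, height) in slide coordinates
    finding: str             # e.g. "possible atypical cells"
    model_confidence: float

def route_flag(flag: SlideFlag, review_queue: list) -> None:
    """The model never signs out a case; it can only add an item to a human work queue."""
    review_queue.append({
        "slide_id": flag.slide_id,
        "region": flag.region,
        "finding": flag.finding,
        "confidence": flag.model_confidence,
        "status": "pending_human_review",   # only a human reviewer changes this status
    })

# The flag sits in the queue until a pathologist reviews and signs out the case.
queue = []
route_flag(SlideFlag("S-2024-0042", (1024, 2048, 256, 256), "possible atypical cells", 0.83), queue)
```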


Autonomous AI

Autonomous AI, by contrast, can issue a clinical interpretation without human confirmation. The classic example is FDA-cleared autonomous diabetic retinopathy screening, where the system renders a result independently.


Pathology is not there—and ethically, operationally, and scientifically, it shouldn’t aspire to be. Tissue interpretation, pre-analytic variability, complex clinical context, and downstream consequences place pathology squarely in the domain of human-in-the-loop practice.


Moreover, the limitations of autonomous AI make full automation particularly risky in this field. Even state‑of‑the‑art large models exhibit irreducible error rates, including hallucinations that arise not from software bugs but from the fundamental way probabilistic systems generate outputs. OpenAI and other major developers have acknowledged that hallucinations are inevitable in current-generation AI—an acceptable risk for drafting emails, but not for diagnosing malignancy.


In pathology, an autonomous error is not a benign failure mode; it is a misdiagnosis. Slides vary between institutions, stains differ, scanners introduce artifacts, and rare entities can be misclassified with absolute confidence. The model does not know when it does not know. Human-in-the-loop practice is therefore not a philosophical preference but a safety requirement.


Current professional sentiment reflects this: most pathologists are cautiously optimistic about assistive AI but deeply wary of autonomous systems. The field understands that algorithms can elevate quality and efficiency, but they cannot—and should not—bear sole responsibility for interpreting tissue, integrating clinical nuance, or adjudicating uncertainty.


Why the distinction matters

Marketing narratives blur the line between assistive and autonomous. Operationally, this creates two dangerous extremes:

  • over-trust: assuming the model "knows" more than it does,

  • under-trust: dismissing or ignoring helpful signals because expectations were unreasonable.


Treating AI as a second reader helps calibrate our expectations and clarifies the respective responsibilities of humans and machines.


Workflow, Not Math: The Hidden Barriers to Clinical Integration

Technical performance is rarely the limiting factor for AI deployment in lab medicine. More often, the barriers are operational and workflow-driven.


Pre-analytic variability

No algorithm, however elegant, can overcome poor input. Hemolysis, mislabeled samples, incomplete clinical information, and inconsistent sample handling all degrade model performance. "Garbage in, garbage out" is not cynicism; it is clinical reality.


LIS/EMR integration

An AI flag that never reaches the transfusion physician or technologist in a usable format is functionally irrelevant. Many promising tools fail not because they are inaccurate, but because they exist outside the everyday workflow.


Alert fatigue

If an AI model surfaces insights the same way EMR pop-ups surface medication alerts, clinicians will click through them reflexively. Effective AI must blend into the workflow — not interrupt it.


Staff training

Disagreement between the model and the human is a liminal space. When a model flags an unexpected pattern, what is the technologist supposed to do? Without clear protocols, the burden on staff increases rather than decreases.


Model stewardship

Who revalidates the model yearly? Who monitors drift? Who owns threshold adjustments? Governance is critical and cannot be an afterthought.


These challenges are not exciting, but they determine whether an AI tool genuinely helps clinicians — or ends up abandoned.


The Hype Cycle Problem

AI in medicine moves in predictable hype cycles. When expectations are unrealistic, three harms follow:


1. Overpromising leads to disillusionment

When leadership expects instant automation, disappointment is inevitable. This can poison the well for future tools that are more modest but more practical.


2. Steps get skipped

Proper change management, validation, and staff training take time. Under the pressure of hype, institutions try to "roll out" tools before anyone understands how to use them.


3. Trust becomes polarized

Some clinicians embrace AI uncritically. Others reject it entirely. Neither posture produces safe patient care.


Reframing AI as a second reader helps temper the hype and brings expectations back into alignment with clinical workflow and real-world constraints.


What Safe, Responsible AI Actually Looks Like

Clear intended use

Every AI tool must answer one question precisely: What is the intended use? Ambiguous purpose leads to ambiguous outcomes.


Human-in-the-loop structure

High-impact clinical decisions — transfusion thresholds, rejection of critical values, or product allocation — should never be automated fully. AI highlights patterns; humans interpret them.


Local validation

Models must be calibrated to local population characteristics, including major demographic differences, high-obesity populations, rare disease prevalence, and unique practice patterns.
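
As one deliberately simplified illustration of what local validation can look like in code, the sketch below assumes scikit-learn, a locally adjudicated label set, and scores produced by the candidate tool on local samples. It is a sanity check, not a full validation protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

# Ground truth adjudicated by our own pathologists; scores produced by the
# candidate tool on our own samples. Arrays here are tiny placeholders.
local_labels = np.array([0, 1, 0, 0, 1, 1, 0, 1])
local_scores = np.array([0.10, 0.80, 0.30, 0.20, 0.60, 0.90, 0.40, 0.70])

local_auc = roc_auc_score(local_labels, local_scores)        # discrimination
local_brier = brier_score_loss(local_labels, local_scores)   # rough calibration check

VENDOR_REPORTED_AUC = 0.95  # whatever the package insert or publication claims

print(f"Local AUC {local_auc:.2f} vs reported {VENDOR_REPORTED_AUC:.2f}; Brier {local_brier:.3f}")
# A meaningful gap is a governance decision point (re-tune thresholds, retrain, or no-go),
# owned by the laboratory, not by the vendor.
```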


Ongoing monitoring

Performance changes over time. Drift is real. Monitoring is not optional.
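
One common way to operationalize drift monitoring, sketched here purely as an illustration, is a population stability index (PSI) comparing recent model scores against the distribution seen at validation.

```python
import numpy as np

def population_stability_index(baseline_scores, current_scores, bins=10):
    """PSI between the validation-era score distribution and recent scores.
    Rough convention: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 investigate/revalidate."""
    edges = np.linspace(0.0, 1.0, bins + 1)              # assumes model outputs in [0, 1]
    base = np.histogram(baseline_scores, bins=edges)[0] / len(baseline_scores)
    curr = np.histogram(current_scores, bins=edges)[0] / len(current_scores)
    base = np.clip(base, 1e-6, None)                     # avoid log(0) and division by zero
    curr = np.clip(curr, 1e-6, None)
    return float(np.sum((curr - base) * np.log(curr / base)))

# Run on a schedule (e.g., monthly); a sustained rise is a cue to revalidate, not to ignore.
```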


Defined failure modes

Clinicians need clarity: When should I ignore this model? Understanding limits is as important as understanding utility.


Explainability (pragmatic, not academic)

Technologists and clinicians need broad insight into why a model fires — high-level logic is sufficient. Full algorithmic transparency is not required.


Together, these guardrails ensure that AI functions as a clinically meaningful assistant, not an unpredictable black box.


A Transfusion Medicine Lens: Where AI Actually Delivers Value

Transfusion medicine offers a prime example of how AI should function in practice: as a second reader that enhances safety and efficiency.


Utilization and stewardship

AI can identify patterns of overuse or underuse, highlight outlier ordering habits, or flag cases where restrictive thresholds are inconsistently applied. But humans — transfusion physicians, technologists, patient blood management (PBM) programs — interpret and respond to these patterns.
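
To make the idea tangible for informatics colleagues, here is a hypothetical sketch of outlier detection on ordering rates. The function name, the metric (red cell orders per 100 admissions), and the cutoff are illustrative choices, not a standard.

```python
import numpy as np

def flag_outlier_orderers(orders_per_100_admissions: dict, z_cutoff: float = 2.0):
    """Return clinicians whose RBC ordering rate sits unusually far from their peers,
    using a robust (median/MAD) z-score. The output is a discussion list for the
    transfusion committee, not a verdict on anyone's practice."""
    names = list(orders_per_100_admissions)
    rates = np.array([orders_per_100_admissions[n] for n in names], dtype=float)
    median = np.median(rates)
    mad = np.median(np.abs(rates - median)) or 1.0       # guard against zero spread
    robust_z = 0.6745 * (rates - median) / mad
    return [(name, round(float(z), 2)) for name, z in zip(names, robust_z) if abs(z) >= z_cutoff]
```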


Inventory and product management

Platelet forecasting, rare phenotype prediction, and resource allocation are well-suited to assistive AI. The model surfaces the signal; the human makes the plan.
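
A forecasting example can be almost embarrassingly simple and still work as a second reader. The sketch below applies single exponential smoothing to hypothetical daily platelet usage; real systems would be richer, but the division of labor is the same.

```python
def smoothed_demand_forecast(daily_platelet_usage, alpha=0.3):
    """Single exponential smoothing over historical daily platelet usage.
    Returns a naive next-day estimate; a production model would also account for
    weekday effects, the OR schedule, and outdating."""
    level = float(daily_platelet_usage[0])
    for observed in daily_platelet_usage[1:]:
        level = alpha * observed + (1 - alpha) * level
    return level

# The estimate informs the order the blood bank supervisor places; it does not place the order.
next_day_estimate = smoothed_demand_forecast([14, 9, 11, 16, 12, 10, 13])
```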


Risk prediction

Predictive models for bleeding, delayed hemolytic transfusion reaction (DHTR) risk, transfusion-related acute lung injury (TRALI) likelihood, or massive transfusion activation can bring subtle risk factors to the surface. They augment human judgment but do not replace it.
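
To show what "augment, not replace" looks like mechanically, here is a toy risk-advisory sketch with placeholder weights and hypothetical features. Any real bleeding-risk model would be locally derived and validated, and its output would still land in front of a human.

```python
import math

# Purely illustrative weights: a real model would be derived and validated on
# institutional data, not hard-coded. Feature names here are hypothetical.
PLACEHOLDER_WEIGHTS = {"intercept": -3.0, "platelets_lt_50": 1.2,
                       "inr_gt_1p5": 0.9, "on_anticoagulant": 0.8}

def bleeding_risk_advisory(platelets_lt_50: bool, inr_gt_1p5: bool,
                           on_anticoagulant: bool, threshold: float = 0.30):
    """Logistic-style risk estimate surfaced as an advisory note.
    The output prompts human review; it never triggers an order on its own."""
    logit = (PLACEHOLDER_WEIGHTS["intercept"]
             + PLACEHOLDER_WEIGHTS["platelets_lt_50"] * platelets_lt_50
             + PLACEHOLDER_WEIGHTS["inr_gt_1p5"] * inr_gt_1p5
             + PLACEHOLDER_WEIGHTS["on_anticoagulant"] * on_anticoagulant)
    risk = 1.0 / (1.0 + math.exp(-logit))
    return {"estimated_risk": round(risk, 2),
            "advisory": risk >= threshold,
            "suggested_action": "review by transfusion service" if risk >= threshold else "none"}
```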


These examples demonstrate the core argument of this piece: AI helps most when it supports human cognition without competing with it.


Conclusion: Getting the Metaphor Right

AI in pathology and laboratory medicine is not a second brain—and expecting it to be one sets everyone up for failure.


It is a second reader.

A pattern spotter.

A triage assistant.

A flagger of outliers.

A partner in safety and quality.


When we ground AI in its true purpose, we can finally deploy it in ways that are meaningful, safe, and sustainable. The challenge is not to automate pathology or transfusion medicine, but to integrate AI into workflows as a thoughtful collaborator.


The future of AI in the laboratory will belong to the institutions and clinicians that understand this distinction:


Useful AI is not autonomous. It is assistive — and that is exactly where it belongs.

 
 