
  • Where Autonomy Ends: Directed Donation, COVID Myths, and the Ethics of Saying No

    Today we had a case that many transfusion services will recognize. A patient scheduled for surgery requested a directed blood donation. The reason given was concern about receiving blood from donors who had received a COVID-19 vaccine. The answer was no. She returned with a revised request, this time citing religious preference and psychological comfort. Again, the answer was no. Afterward, I had a long discussion with a resident — thoughtful, patient-centered, and clearly uncomfortable with refusing a request framed in ethical language. I don't think I convinced them. And that matters, because this is exactly the kind of scenario where kindness and ethics feel deceptively close, and where "just accommodating" can feel easier than holding the line. So let's be explicit about why the answer was no — and why it needed to be.

    What the Evidence Actually Says About COVID Vaccination and Blood Safety

    The fear driving these requests is understandable — but it is not evidence-based. There is no evidence that blood from donors who were vaccinated against COVID-19 — or previously infected with SARS-CoV-2 — poses increased risk to transfusion recipients. The strongest data come from a large recipient-linked study published in Transfusion in 2025 (Roubinian et al.). Investigators examined 7,773 transfusion recipients across 8,715 hospitalizations, directly linking over 34,000 plasma and platelet units to donor vaccination and infection status. They assessed the outcomes people worry about most: thrombosis, increased respiratory support, and hospital mortality. They found no association — not with vaccinated donors, not with previously infected donors, not with recent vaccination, recent infection, or high antibody titers (Roubinian et al., Transfusion, 2025). Concerns about transfusion-transmitted SARS-CoV-2 have likewise failed to materialize. While viral RNA can be transiently detected in blood during infection, infectious virus has not been recovered, and no cases of transfusion-transmitted COVID-19 have been documented. This is why donor vaccination status is not tracked or used in blood allocation. So when patients request "non-vaccinated blood," they are not asking for something safer. They are asking for something different, based on a belief that the data do not support.

    What Directed Donation Is Actually For

    Directed donation exists — but for narrow medical reasons, not reassurance. Historically, it was used before modern infectious disease testing. Today, it is reserved for specific clinical indications, such as:

    - Patients with rare blood types or antigen profiles
    - Situations where compatible community donors are unavailable
    - Selected pediatric or immunologic scenarios where compatibility constraints are real

    Outside of these circumstances, directed donation does not improve safety. In fact, it often makes things worse. A 2025 multidisciplinary consensus analysis in Annals of Internal Medicine (Jacobs et al.) concluded that directed donation for nonmedical reasons — such as donor vaccination status or personal belief — introduces patient safety risks, operational burden, and societal harm without evidence of benefit.

    Why Directed Donation Increases Risk and Cost (Even When Everyone Means Well)

    The most persistent misconception about directed donation is that it is, at worst, harmless. It is not. Directed donation systematically increases risk, cost, and error — and it does so in predictable ways.

    First, donor risk. Directed donations disproportionately rely on first-time donors, who have consistently higher rates of infectious disease marker positivity than repeat community donors (Dorsey et al., Transfusion, 2013). In addition, directed donors are often under emotional or social pressure, which reduces the accuracy of donor health-history reporting — critical because all testing has a window period (Jacobs et al., Ann Intern Med, 2025).

    Second, immunologic risk. When directed donors are family members, additional hazards appear: HLA alloimmunization, transfusion-associated graft-versus-host disease (necessitating irradiation), TRALI risk, and complications relevant to future transplantation or pregnancy (Jacobs et al., 2025; Weaver et al., Pediatrics, 2023). Community blood is deliberately immunologically "boring." Directed blood is not.

    Third, error and logistics. Modern transfusion safety depends on standardization. Directed units require special scheduling, labeling, tracking, storage, and coordination across multiple systems. Each deviation from routine workflow increases the risk of mislabeling, misidentification, expiration, delay, or waste. This is a human-factors problem, not a personnel problem (Jacobs et al., 2025).

    Fourth, reliability. Directed donation assumes ideal timing: donors qualify, donate on schedule, units clear testing, surgeries proceed as planned, and blood needs match exactly. In reality, donors are deferred, units expire, surgeries change, and emergencies don't wait. When directed units fail, patients still receive community blood — often under more urgent conditions.

    Fifth, cost. Directed donation is substantially more expensive: additional recruitment, separate processing and inventory, irradiation, staff time, and higher wastage rates. Who pays is often unclear — the patient, the hospital, the blood center, or all three. There is no evidence these costs improve outcomes (Jacobs et al., 2025).

    Finally, system-level harm. Blood is a shared resource. Normalizing directed donation diverts donors from the community supply, worsens shortages, delays care, and privileges patients with social capital and access. It also implicitly validates misinformation — suggesting that some donors' blood is inherently safer without evidence.

    Where Autonomy Applies — and Where It Does Not

    This is where the ethical line must be drawn clearly. Religious objection to blood transfusion itself is ethically valid. Competent adults may refuse blood products entirely, even if refusal carries serious risk. That is autonomy. But autonomy does not extend to requesting blood from donors with preferred personal characteristics absent medical necessity. Religion and moral frameworks may motivate people to donate blood altruistically to the community supply (Maghsudlu & Nasizadeh, 2011; Gillum & Masters, 2010). They do not create a right to receive blood from a chosen category of donors. Once belief-based donor preferences are accommodated, medicine implicitly endorses them. That opens the door to discriminatory requests — vaccination status today, race or gender tomorrow — and undermines decades of ethical progress in transfusion medicine (Jacobs et al., 2025). Respecting patients does not require validating unfounded fears or restructuring safety systems around them.

    The Uncomfortable Truth

    What made this case difficult wasn't the policy — it was the discomfort. Saying no feels unkind. Especially when requests are reframed in ethical language. Especially when anxiety is real. Especially when the temptation is to say, "Why not just this once?" But "just this once" is never neutral. Every exception teaches something: about evidence, about safety, about whose fears medicine will legitimize. Transfusion medicine exists precisely because we learned — often painfully — that systems protect patients better than intentions. So yes, we said no. Twice. Not because we dismiss religion. Not because we don't care about comfort. But because our ethical obligation is to protect patients, preserve trust in the blood supply, and practice medicine grounded in evidence — not fear. And sometimes, that means holding the line clearly, calmly, and without apology.

    References

    Roubinian NH, Greene J, Spencer BR, et al. Blood donor SARS-CoV-2 infection or vaccination and adverse outcomes in plasma and platelet transfusion recipients. Transfusion. 2025;65(3):485–495. doi:10.1111/trf.18159

    Jacobs JW, Booth GS, Lewis-Newby M, et al. Medical, societal, and ethical considerations for directed blood donation in 2025. Annals of Internal Medicine. 2025;178:1021–1026. doi:10.7326/ANNALS-25-00815

    Dorsey KA, Moritz ED, Steele WR, et al. A comparison of HIV, HCV, HBV, and HTLV marker rates for directed versus volunteer blood donations to the American Red Cross, 2005–2010. Transfusion. 2013;53:1250–1256. doi:10.1111/j.1537-2995.2012.03904.x

    Weaver MS, Yee MEM, Lawrence CE, Matheny Antommaria AH, Fasano RM. Requests for directed blood donations. Pediatrics. 2023;151(3):e2022058183. doi:10.1542/peds.2022-058183

    Maghsudlu M, Nasizadeh S. Iranian blood donors' motivations and their influencing factors. Transfusion Medicine. 2011;21(4):247–255. doi:10.1111/j.1365-3148.2011.01077.x

    Gillum RF, Masters KS. Religiousness and blood donation: findings from a national survey. Journal of Health Psychology. 2010;15(2):163–172. doi:10.1177/1359105309345171

  • Extracorporeal Photopheresis Schedules: A Practical Guide for Trainees

    Schedules, Evidence, and Real-World Alternatives

    One of the most common questions I get from residents rotating through apheresis or transplant is deceptively simple: "How often do we do extracorporeal photopheresis?" The honest answer is: it depends — and not in a hand-wavy way. ECP schedules vary by disease, acuity, and goals of therapy, and the evidence actually supports very different approaches for acute GVHD, chronic GVHD, and cutaneous T-cell lymphoma. Add in newer targeted agents like ruxolitinib and belumosudil, and the question becomes not just how often, but why ECP at all. Let's walk through what we know, what we don't, and how to explain this clearly to trainees.

    First: What an "ECP cycle" actually means

    Before getting into frequency, it helps to define the unit of treatment. Traditionally, one ECP cycle = treatment on two consecutive days. This convention dates back to the original FDA-approved protocols for cutaneous T-cell lymphoma and has persisted across indications. UK consensus statements and most international guidelines still define ECP this way — whether the cycles are weekly, every two weeks, or monthly. Importantly, this two-day structure is not based on randomized comparisons showing superiority over alternate-day or single-day schedules. It's a mix of historical precedent, logistics, and immunologic plausibility: delivering two closely spaced infusions of apoptotic, photoactivated leukocytes may amplify the tolerogenic signal that drives regulatory T-cell expansion. There are data supporting single-day, higher-volume ECP protocols — especially when access, staffing, or infection risk is a concern — but we do not have evidence that every-other-day (QOD) schedules improve outcomes. In practice, QOD would increase patient burden without a demonstrated benefit. So when residents ask, "Why two days in a row?" the most accurate answer is: because that's how ECP has been studied, standardized, and operationalized — not because it's the only biologically plausible option.

    Acute GVHD: Intensive up front, then stop

    For acute GVHD, the signal is fairly consistent across studies: front-load the intensity. Most consensus guidelines support:

    - Weekly ECP, usually as two consecutive days per week
    - For about 8 weeks
    - With no routine maintenance once a response is achieved

    Real-world and pediatric studies vary in how aggressively they start — some using twice-weekly or even three-times-weekly treatments early on — but the theme is the same: hit hard early, then taper or discontinue. Response rates across these studies fall in the 55–65% range early, with higher cumulative response by 8–12 weeks. The key teaching point for trainees is this: acute GVHD behaves like an inflammatory emergency. ECP works best when used intensively and early — not as a slow burn.

    Chronic GVHD: Lower intensity, much longer runway

    Chronic GVHD is a different disease biologically and clinically, and ECP schedules reflect that. Typical regimens include:

    - Two consecutive days every 2 weeks
    - With tapering to monthly treatments based on response
    - Over 12–18 months, sometimes longer

    Large series using every-2-week schedules report response rates approaching 80–90%, especially for skin and mucocutaneous disease. Importantly, longer duration of therapy appears to correlate with better outcomes, even when early responses are modest. This is a critical mindset shift for residents: chronic GVHD is not about rapid control — it's about sustained immune retraining. Stopping ECP too early is one of the most common reasons for perceived "failure."

    CTCL / Sézary syndrome: Slow and steady

    For cutaneous T-cell lymphoma, ECP remains a preferred therapy in major guidelines, either alone or in combination. The classic approach is:

    - Two consecutive days every 2–4 weeks
    - With the expectation that responses take months, not weeks

    This is often frustrating for trainees (and patients), but it mirrors the biology of the disease. CTCL responds to cumulative immunomodulation, not rapid cytoreduction.

    "If ruxolitinib works so well… why ECP?"

    This is the question residents are really asking now. Ruxolitinib is FDA-approved and guideline-endorsed as first-line therapy for steroid-refractory acute and chronic GVHD. Belumosudil has strong data in later-line chronic GVHD. So where does ECP fit? The short answer: toxicity, durability, and complementarity. Ruxolitinib (JAK1/2 inhibition) is highly effective but commonly causes cytopenias and increases infection risk. Belumosudil (ROCK2 inhibition) targets fibrosis and immune imbalance, and is particularly useful in sclerotic chronic GVHD. ECP, by contrast, is remarkably safe — minimal cytopenias, low infection risk, and steroid-sparing over time. That safety profile matters. ECP is often favored:

    - When cytopenias limit ruxolitinib
    - When infections are active or recurrent
    - As combination therapy, where emerging data suggest better long-term control than ruxolitinib alone

    In other words, ECP isn't obsolete — it's strategic.

    What I tell residents to remember

    If I had to distill this into a few teaching pearls:

    - ECP is not one schedule — it's a framework.
    - Acute GVHD → intensive, short-term. Chronic GVHD → prolonged, maintenance-oriented.
    - Two consecutive days is convention, not dogma.
    - ECP's value is safety, durability, and synergy — not speed.

    And perhaps most importantly: if you're asking how often to do ECP, you're already asking the right question. The answer lives at the intersection of disease biology, patient tolerance, and what you're trying to achieve.
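    For trainees who like a single reference object, the regimens above can be collected into a small lookup table. This is an illustrative teaching aid, not a protocol; the dictionary keys, field names, and the `describe` helper are my own, and the values simply restate the schedules summarized in this post:

```python
# Typical ECP scheduling conventions by indication, as summarized above.
# One "cycle" = treatment on two consecutive days (historical convention).
# Teaching aid only; not a clinical protocol.
ECP_SCHEDULES = {
    "acute_gvhd": {
        "cycle": "2 consecutive days",
        "frequency": "weekly",
        "typical_duration": "~8 weeks",
        "maintenance": False,  # stop once a response is achieved
    },
    "chronic_gvhd": {
        "cycle": "2 consecutive days",
        "frequency": "every 2 weeks, tapering to monthly",
        "typical_duration": "12-18 months or longer",
        "maintenance": True,  # sustained immune retraining
    },
    "ctcl_sezary": {
        "cycle": "2 consecutive days",
        "frequency": "every 2-4 weeks",
        "typical_duration": "responses take months, not weeks",
        "maintenance": True,
    },
}

def describe(indication: str) -> str:
    """One-line answer for a trainee asking 'how often?'"""
    s = ECP_SCHEDULES[indication]
    return f"{s['cycle']}, {s['frequency']} ({s['typical_duration']})"
```

    The table makes the core teaching point concrete: the unit of treatment (the two-day cycle) is constant, while frequency and duration are what actually change across indications.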

  • Thrombosis and Extracorporeal Photopheresis: What the Risk Actually Looks Like

    Extracorporeal photopheresis (ECP) has one of the best safety reputations in procedural medicine. It's been used for decades. Hundreds of thousands of treatments. Indications ranging from cutaneous T-cell lymphoma to chronic graft-versus-host disease. And yet, every so often, the same question resurfaces: does ECP increase the risk of thrombosis? The short answer is: there is a signal, but it's small, context-dependent, and often misunderstood. The longer answer is more interesting — and more useful.

    Where the concern comes from

    In 2018, the FDA issued a letter to healthcare providers warning of reported cases of venous thromboembolism (VTE), including pulmonary embolism, in patients undergoing ECP with the THERAKOS CELLEX system. That sentence alone has done a lot of quiet work over the years. What often gets lost is why the FDA issued the letter and what it actually said. The warning was based on post-marketing reports, not on prospective trials or large cohort studies. The FDA described seven pulmonary emboli and two deep vein thromboses, all occurring in patients treated for chronic GVHD. Two of the pulmonary emboli were fatal. The mean time to event was about 1.7 days, leading to the phrasing that events occurred "during or shortly after" treatment sessions. Importantly, the FDA did not conclude that ECP causes thrombosis. The language was careful: ECP may increase risk, based on timing and clustering in a vulnerable population. That distinction matters.

    What the published literature shows (and doesn't)

    If you go looking for thrombosis in the ECP literature, you'll find… very little. Across more than 30 years of published experience:

    - Thrombotic events are rare
    - Most reported cases are catheter-associated, not systemic
    - Large case series and reviews consistently emphasize ECP's excellent safety profile
    - Coagulation parameters remain stable during treatment, even with long-term therapy
    - Laboratory studies show platelet activation after UVA/8-MOP exposure — but without aggregation or downstream thrombotic effects

    In pediatric cohorts, multicenter studies, and long-term follow-up reports, thrombosis appears as an isolated complication, not a recurring pattern. That doesn't mean the FDA signal was wrong. It means the signal exists in a space the literature hasn't fully interrogated.

    The missing denominator problem

    One of the hardest things about post-marketing safety signals is that they arrive without context. We don't know:

    - How many total ECP treatments occurred during the reporting window
    - Whether events clustered around central venous access
    - How immobility, inflammation, infection, or baseline hypercoagulability contributed
    - Whether similar patients not receiving ECP had comparable short-term VTE rates

    And chronic GVHD patients — who made up all reported cases — already carry a high baseline risk of thrombosis. When a population is fragile enough, even a neutral intervention can appear suspicious if you look only at timing.

    So where does that leave us?

    A reasonable, evidence-based position looks something like this:

    - ECP is not a high-thrombosis procedure
    - There is a small regulatory safety signal, concentrated in a very high-risk population
    - Timing alone does not establish causality
    - Access-related thrombosis likely explains a meaningful fraction of reported events
    - Clinicians should remain alert — but not alarmist

    This is not a story of a dangerous therapy being uncovered. It's a story of how safety signals emerge, how they should be interpreted, and how nuance gets flattened over time.

    Why this matters

    ECP is often used when options are limited. Overstating risk can quietly narrow access to a therapy that is otherwise well-tolerated and effective. At the same time, ignoring regulatory signals entirely isn't good medicine either. The work, as always, is in the middle: understanding who might be at risk, when vigilance matters most, and how to contextualize rare events without letting fear do the thinking.

    Bottom line: if thrombosis were a common or intrinsic complication of ECP, we would know by now. What we have instead is a small, signal-level warning that deserves clarity — not amplification. And clarity is something we can still build.

  • When to Culture a Product: AABB vs BEST Guidelines

    How the BEST Criteria Updated a Decade-Old AABB Approach to Septic Transfusion Reactions

    One of the most uncomfortable questions in transfusion medicine is deceptively simple: when should we culture the patient and the blood product after a transfusion reaction? Culture too often, and you trigger false positives, unnecessary lookbacks, and wasted resources. Culture too conservatively, and you risk missing a true septic transfusion reaction — one of the most dangerous complications we manage. For years, many institutions have relied on guidance from an AABB Association Bulletin published in 2014. But in 2019, a large multicenter study fundamentally challenged whether those criteria are sensitive enough for real-world practice. This post walks through what changed, why it matters, and what the tradeoff actually is.

    The AABB 2014 Bulletin: Safety Through Clinical Vigilance

    The 2014 AABB Association Bulletin on suspected bacterial contamination of platelets was written with a clear goal: don't miss sepsis. Its framework is intentionally broad and clinically driven. In short, it recommends investigation when:

    - A patient develops fever ≥38°C with a ≥1°C rise, plus at least one associated symptom (rigors, hypotension, tachycardia, dyspnea, etc.), or
    - There is any clinical change that raises concern for sepsis — even without fever

    Importantly, the bulletin acknowledges:

    - Fever may be absent in neutropenic or immunosuppressed patients
    - Antipyretics may blunt temperature rise
    - Symptoms may be delayed

    This guidance reflects its era. In 2014, the dominant concern was under-recognition of septic transfusion reactions, especially with gram-positive organisms. The solution was education, vigilance, and a low threshold to act. What the bulletin did not do was define:

    - Objective thresholds for hypotension or tachycardia
    - How to systematically account for antipyretic use
    - How well these criteria actually perform in practice

    That gap mattered more than we realized.

    The Problem: How Well Do the AABB Criteria Actually Work?

    In 2019, investigators from the BEST (Biomedical Excellence for Safer Transfusion) Collaborative asked a hard question: if we apply the AABB criteria to real-world transfusion reactions, how many culture-positive cases do we actually detect? Using data from nearly 800,000 transfusions across 20 centers, they found that the answer was… not many. When evaluated empirically:

    - The AABB criteria detected only ~40% of culture-positive reactions
    - The majority of reactions that ultimately yielded positive cultures never met AABB triggers
    - Reliance on fever and subjective symptom reporting was a major limitation

    In other words, the system was doing exactly what it was designed to do — but that design was missing cases.

    The BEST Criteria: Trading Specificity for Sensitivity (On Purpose)

    Rather than discarding the AABB framework, the BEST investigators asked: what small, evidence-based changes would catch more cases? They tested three modifications, all of which improved detection:

    1. Isolated high fever counts. A temperature ≥39°C with a ≥1°C rise triggered culture even without other symptoms. Why? Because multiple international criteria already recommended this — and AABB did not.

    2. Objective vital sign definitions. Instead of relying on checkbox reporting, hypotension required both an absolute BP threshold and a percentage drop, and tachycardia required ≥100 bpm and a significant increase from baseline. This mattered because provider-reported vital sign abnormalities were frequently inaccurate.

    3. Antipyretics matter. If a patient received antipyretics before transfusion, absence of fever could not be used to rule out sepsis when other concerning signs were present. This was not a philosophical change — it reflected basic physiology.

    Did It Work?

    Yes — and predictably. When all three modifications were combined into the BEST criteria:

    - Sensitivity improved to ~70–75%
    - Specificity decreased to ~45%
    - Crucially, there were no cases detected by AABB that BEST missed

    In other words, BEST caught substantially more potential septic reactions — at the cost of more cultures and more false positives. This was not an accident. It was a conscious tradeoff.

    The Real Debate: False Positives vs Missed Sepsis

    Critics of broader culturing thresholds often raise legitimate concerns:

    - Positive product cultures trigger supplier notification
    - Co-components may be quarantined or destroyed
    - Many positive cultures do not correlate with patient infection

    All of that is true. But the BEST authors make a different argument: in a passive surveillance system, missing cases is the greater danger. Septic transfusion reactions are rare, difficult to adjudicate, and often masked by critical illness. Fever is unreliable. Cultures are imperfect. But hypotension requiring pressors, shock, or unexplained deterioration are not benign signals, even when temperature is normal. The BEST criteria reflect a shift from "culture when sepsis is obvious" to "culture when sepsis is plausible and high-risk."

    Where This Leaves Us

    The AABB 2014 bulletin is not wrong. It is incomplete by modern standards. The BEST criteria don't replace clinical judgment — they formalize what experienced clinicians already know:

    - Fever is not required for sepsis
    - Antipyretics obscure key signals
    - Objective thresholds matter
    - Sensitivity matters more than comfort when stakes are high

    Institutions now face a choice: accept fewer cultures and higher miss rates, or accept more cultures to reduce the risk of missing a true septic transfusion reaction. That choice is about risk tolerance, not right vs wrong. But it should be made using current evidence, not decade-old assumptions.

    Bottom line

    If you are still relying solely on fever-centric AABB criteria from 2014, you are almost certainly missing cases. The BEST criteria offer a data-driven update that reflects how septic transfusion reactions actually present — messy, masked, and dangerous. In transfusion medicine, that tradeoff is worth naming out loud.
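    For readers who think in code, the three BEST modifications can be sketched as a culture-trigger check. The fever thresholds (≥39°C with a ≥1°C rise for the isolated-fever trigger, ≥38°C for ordinary fever) and the ≥100 bpm floor come from the criteria as summarized above; the systolic pressure floor, the 30% pressure drop, and the 40% heart-rate rise are placeholder values of my own, since this post does not quote the exact published cutoffs:

```python
# Sketch of the three BEST modifications as a culture-trigger check.
# CAUTION: the systolic floor (90 mmHg), the 30% pressure drop, and the
# 40% heart-rate rise are PLACEHOLDER values for illustration only.

def best_culture_trigger(
    temp_c: float,
    baseline_temp_c: float,
    systolic_bp: float,
    baseline_systolic_bp: float,
    heart_rate: float,
    baseline_heart_rate: float,
    antipyretics_given: bool,
) -> bool:
    """Return True if this sketch of the BEST criteria would culture."""
    rise = temp_c - baseline_temp_c

    # Modification 1: isolated high fever (>=39 C with a >=1 C rise)
    # triggers culture even without any other symptom.
    high_fever = temp_c >= 39.0 and rise >= 1.0

    # Modification 2: objective vital-sign definitions. Hypotension needs
    # an absolute floor AND a percentage drop; tachycardia needs >=100 bpm
    # AND a meaningful rise from baseline.
    hypotension = systolic_bp < 90.0 and systolic_bp <= 0.70 * baseline_systolic_bp
    tachycardia = heart_rate >= 100.0 and heart_rate >= 1.4 * baseline_heart_rate

    # Modification 3: if antipyretics were given before transfusion, the
    # absence of fever cannot be used to rule out sepsis when objective
    # signs are present.
    fever = temp_c >= 38.0 and rise >= 1.0
    fever_or_masked = fever or antipyretics_given

    return high_fever or (fever_or_masked and (hypotension or tachycardia))
```

    Even as a toy, the sketch makes the tradeoff visible: every loosened condition (isolated fever counting alone, antipyretics substituting for fever) raises sensitivity at the cost of more cultures, which is exactly the shift the study reported.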

  • TTP’s Little-Known Cousin: TAMOF as a TTP-Like Process

    It was a routine service morning — until it wasn't. The patient wasn't crashing in a cinematic way. No massive bleeding. No dramatic hypotension. But the labs were drifting in a direction that felt wrong: platelets falling, creatinine creeping, LDH elevated, hemoglobin sliding just enough to notice. Organ dysfunction without a single unifying explanation. Somewhere between the pattern and the unease it produced, the diagnosis surfaced quietly: TAMOF.

    What TAMOF Is — and Why It's Easy to Miss

    Thrombocytopenia-associated multiple organ failure (TAMOF) occupies an uncomfortable space between entities we think we understand well: DIC, TTP, and sepsis-associated coagulopathy. Because it doesn't fit cleanly into any of them, it is often mislabeled — or not labeled at all. At its core, TAMOF is a secondary thrombotic microangiopathy driven by systemic inflammation. Systemic inflammation leads to a relative decrease in ADAMTS13, which results in the accumulation of ultra-large von Willebrand factor (vWF) multimers. When regulatory capacity is insufficient for that inflammatory burden, platelet-rich microthrombi form in the microvasculature, impairing organ perfusion. This is why TAMOF behaves like TTP downstream — even though the trigger is infection or inflammation rather than autoimmunity. It is not primarily a bleeding disorder. It is not primarily a consumptive coagulopathy. It is a microvascular platelet process with systemic consequences.

    How the Labs Tell the Story

    TAMOF rarely declares itself with a single decisive test. Instead, it reveals itself through converging laboratory signals, each nudging the differential toward microangiopathy.

    LDH is often elevated, reflecting both microangiopathic hemolysis and tissue ischemia from small-vessel thrombosis. In isolation, this finding is nonspecific. In context — alongside falling platelets and worsening organ function — it becomes meaningful.

    Haptoglobin may be low, but normal values do not exclude TAMOF. Inflammatory states raise baseline haptoglobin levels, which can mask hemolysis. Trends and correlations matter more than absolutes.

    Peripheral smear findings can support the diagnosis, but they are often subtle. Schistocytes may be present — sometimes sparsely, sometimes clearly — depending on the degree of microangiopathic hemolysis.

    Coagulation studies are especially useful for what they don't show. In TAMOF, PT and aPTT are often normal or only mildly prolonged, fibrinogen is typically preserved, and bleeding is uncommon. This profile argues against overt DIC and redirects attention away from primary consumptive coagulopathy.

    ADAMTS13: Severity Without a Shortcut

    ADAMTS13 activity in TAMOF spans a wide range. Levels may be modestly reduced, markedly decreased, or — in some cases — severely deficient, with accompanying schistocytosis and overt microangiopathic hemolysis. What distinguishes TAMOF from classic immune-mediated TTP is not the absolute ADAMTS13 level, but the mechanism of deficiency. In TAMOF, reduced ADAMTS13 reflects:

    - Consumption during systemic endothelial activation
    - Inflammatory inhibition of enzyme activity
    - Reduced hepatic synthesis of ADAMTS13

    Even when ADAMTS13 activity is severely reduced, the process is typically secondary to inflammation or sepsis, rather than autoantibody-mediated. Rigid thresholds can mislead. An ADAMTS13 level interpreted in isolation may obscure the diagnosis rather than clarify it. Context matters.

    Narrowing the Differential

    Taken together, this laboratory pattern helps distinguish TAMOF from its closest mimics:

    - Versus DIC: preserved coagulation parameters and a platelet-driven microangiopathic picture argue against primary consumption.
    - Versus classic TTP: an inflammatory trigger and a secondary mechanism of ADAMTS13 deficiency point away from immune-mediated disease, even when the downstream effects overlap.
    - Versus "just sepsis": progressive thrombocytopenia, rising LDH, and organ dysfunction out of proportion to hemodynamics suggest something more than cytokines alone.

    No single lab makes the diagnosis. But the pattern does.

    Why Plasma Exchange Makes Sense

    Once TAMOF is recognized as a thrombotic microangiopathy, the rationale for therapeutic plasma exchange becomes clear. Plasma exchange functions in TAMOF much as it does in TTP:

    - Replenishing ADAMTS13
    - Removing ultra-large and high-molecular-weight von Willebrand factor multimers
    - Reducing circulating inflammatory mediators that perpetuate endothelial injury

    The trigger differs. The downstream pathophysiology does not. When TAMOF is dismissed as "just sepsis" or mislabeled as DIC, the opportunity for targeted intervention narrows. Early recognition turns an otherwise nebulous complication into a treatable process.

    A Final Lab-Centered Takeaway

    TAMOF is not rare because it is uncommon. It is rare because we don't look for it. For those of us in laboratory medicine, this is where our value is clearest — not in reporting isolated numbers, but in helping clinicians see how those numbers fit together. Sometimes the diagnosis isn't hidden. It's just fragmented — waiting for someone to assemble the story.
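    As a memory aid, the differential above can be laid out side by side. This is a study table expressed as a dictionary, not a diagnostic algorithm; the field names and the small helper are my own, and the entries restate the patterns described in this post:

```python
# Side-by-side lab patterns from the differential above. Memory aid only;
# "variable" reflects the wide ADAMTS13 range described for TAMOF.
TMA_PATTERNS = {
    "TAMOF": {
        "pt_aptt": "normal or mildly prolonged",
        "fibrinogen": "preserved",
        "adamts13": "variable (consumption, inhibition, reduced synthesis)",
        "mechanism": "secondary to inflammation or sepsis",
    },
    "DIC": {
        "pt_aptt": "prolonged",
        "fibrinogen": "low (consumed)",
        "adamts13": "variable",
        "mechanism": "primary consumptive coagulopathy",
    },
    "classic TTP": {
        "pt_aptt": "normal",
        "fibrinogen": "preserved",
        "adamts13": "severely deficient",
        "mechanism": "autoantibody-mediated",
    },
}

def argues_against_dic(entity: str) -> bool:
    """Preserved fibrinogen plus near-normal coags point away from overt DIC."""
    p = TMA_PATTERNS[entity]
    return p["fibrinogen"] == "preserved" and p["pt_aptt"] != "prolonged"
```

    The point of the table is the post's own thesis: no single row makes the diagnosis, but the pattern across rows does.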

  • When At-Home ABO Typing Creates a Family Crisis

    I learned something new this week: you can buy an at-home ABO blood typing kit on Amazon. I didn’t know that. And I suspect many transfusion medicine physicians don’t either. I found out when a pediatrician called with a worried question. A newborn’s blood type had been determined appropriately in the hospital: A negative . The mother’s type was known: O negative . The father reported he was O negative , based on an at-home blood typing kit. The parents were now concerned about non-paternity. At first glance, this looks like a classic ABO inheritance problem. Two O parents should not have an A child. But the problem wasn’t genetics — it was data quality. The father’s blood type was not actually known. What at-home ABO typing really tells you Consumer ABO kits perform forward typing only, using fingerstick blood applied to anti-A and anti-B reagents, with visual interpretation by the user. They do not  include: Reverse typing Internal concordance checks Trained interpretation Safeguards against weak reactions, drying artifact, or clotting These kits are widely available online and are not FDA-cleared diagnostic tests. They do not  reliably determine a person’s blood type. The most likely explanation is also the least dramatic The simplest explanation was that the father is not type O. One particularly plausible possibility is blood group A2. About 20% of people with blood group A are A2, translating to roughly 4–8% of the general population, depending on ancestry. A2 red cells express fewer A antigens and may show weak or absent agglutination with some anti-A reagents, especially outside a controlled laboratory setting. Critically: A2 individuals are identified on reverse typing, by the presence of anti-A1 At-home kits do not include reverse typing Newborn hospital testing does  include appropriate confirmatory methods So an A2 father could easily misinterpret a forward-only home test as “O,” while the newborn’s A type is correctly identified. 
    No exotic genetics required.

    Other mundane failure modes

    Even without A2:
    - Weak agglutination may be misread as negative
    - Drying artifact can obscure reactions
    - Fingerstick clotting or poor mixing can alter appearance
    - User interpretation error is common, even among trained staff

    This is precisely why laboratory ABO determination relies on redundancy and safeguards, not a single visual read.

    Why this matters clinically

    ABO typing feels deceptively simple. Most people learn their blood type early and treat it as a personal identifier. That familiarity makes it especially vulnerable to misunderstanding. When an at-home test says “O,” people don’t hear: this is a forward-type screen without confirmation. They hear: I know my blood type. In this case, a testing limitation nearly became a family crisis.

    The ethical risk

    Non-paternity should never be raised on the basis of an unvalidated consumer test. The risk here isn’t the existence of these kits — it’s clinicians being unaware of them and their failure modes.

    A simple rule

    If a patient says: “I know my blood type - I tested it at home.”

    The response should be calm and direct: “At-home blood typing kits are not reliable. If needed, we can determine your blood type properly through a laboratory.”

    No speculation. No escalation.

    Why transfusion medicine should know this exists

    This issue won’t appear in hemovigilance reports or quality dashboards. It will surface quietly as:
    - Pediatric questions
    - Awkward counseling conversations
    - Family anxiety

    Recognizing at-home ABO typing for what it is allows us to de-escalate quickly and prevent harm that has nothing to do with biology. I didn’t know these kits were being marketed. Now I do. And next time, I’ll recognize the problem immediately — not as a mystery of inheritance, but as a reminder that laboratory safeguards are part of the test.
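    The frequency arithmetic above is easy to sanity-check. A minimal sketch, assuming illustrative group A prevalences of roughly 20–40% across populations (the exact figures vary by ancestry):

```python
# Back-of-the-envelope check of the A2 population frequency cited above.
# Assumptions (illustrative, not authoritative): group A prevalence ranges
# roughly 0.20-0.40 across populations, and ~20% of group A is A2.
A2_FRACTION_OF_GROUP_A = 0.20

def a2_population_frequency(group_a_prevalence: float) -> float:
    """Fraction of the whole population expected to be A2."""
    return group_a_prevalence * A2_FRACTION_OF_GROUP_A

low = a2_population_frequency(0.20)   # population where group A is rarer
high = a2_population_frequency(0.40)  # population where group A is common

print(f"A2 frequency: {low:.0%} to {high:.0%}")  # -> 4% to 8%
```

    Multiplying the two fractions reproduces the 4–8% range quoted above.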

  • Board Prep: Overview of Donor Infectious Disease Eligibility

    Donor infectious disease eligibility is one of those topics that feels straightforward until you’re asked to explain why a donor with negative testing still isn’t eligible — or why some pathogens get NAT, others don’t, and some only get tested once. This post walks through donor eligibility the way the boards expect you to understand it: as a risk-assessment framework, not just a checklist of tests.

    What Is Donor Infectious Disease Eligibility?

    Donor eligibility is the assessment of a donor’s risk of transmitting infectious disease to a recipient. Its purpose is recipient protection, and it is based on two pillars:
    - Donor history
    - Laboratory testing

    This is distinct from donor suitability, which focuses on donor safety (for example, hemoglobin thresholds or procedural tolerance). A donor can be suitable but not eligible — and vice versa.

    How Donor Eligibility Is Determined

    The Donor History Questionnaire (DHQ)

    The DHQ evaluates risks that laboratory testing alone cannot fully capture, including:
    - Symptoms of infection
    - Behavioral risk factors
    - Travel and residence history
    - Exposure history (blood, needles, sexual contact)

    Testing does not eliminate window-period risk, and emerging pathogens may not yet have validated screening assays. As a result, negative testing does not equal eligibility when exposure risk is recent.

    The Window Period (Why History Still Matters)

    The window period is the time between infection and when that infection becomes detectable by testing. Even with modern NAT-based screening, window periods still exist. This is why donor history remains a critical component of eligibility determination.

    Infectious Disease Screening: The Tests (What We Use and Why)

    All allogeneic donors undergo infectious disease screening using serologic testing, nucleic acid testing (NAT), or both.
    The strategy used for each pathogen reflects its biology: duration of viremia, durability of antibody response, prevalence, and the clinical consequences of transmission.

    HIV-1/2
    - Serology: HIV-1/2 antibody; HIV-1 p24 antigen
    - NAT: HIV-1 RNA
    - Window period: NAT 9–11 days; serology 15–20 days
    - Notes: Layered NAT and Ag/Ab screening minimizes window-period transmission, making residual transfusion-transmitted HIV risk extremely low. However, there is no licensed HIV-2 NAT test in the U.S., so HIV-2 detection relies entirely on serology.

    Hepatitis B Virus (HBV)
    - Serology: HBsAg; anti-HBc
    - NAT: HBV DNA
    - Window period: NAT 20–22 days; serology 30–38 days
    - Notes: HBV has the longest residual transfusion risk among routinely screened viral infections due to low-level, intermittent viremia. Triple-layer testing mitigates occult and low-level infection but does not eliminate risk entirely.

    Hepatitis C Virus (HCV)
    - Serology: anti-HCV
    - NAT: HCV RNA
    - Window period: NAT 3–5 days; serology 50–70 days
    - Notes: Universal NAT has nearly eliminated window-period HCV transmission. Anti-HCV testing is notorious for false positives, which is why donor re-entry policies matter.

    HTLV-I/II
    - Serology: anti-HTLV-1/2
    - NAT: not performed
    - Window period: serology 45–60 days
    - Notes: HTLV infection is chronic with a durable antibody response, enabling serology-only screening. HTLV-1 is associated with adult T-cell leukemia/lymphoma. The screening strategy reflects low prevalence, not a short window period.

    West Nile Virus (WNV)
    - Serology: not performed
    - NAT: WNV RNA
    - Window period: NAT 6–10 days
    - Notes: Short viremia necessitates NAT-only, seasonally adaptive screening. Serology is not useful for donor screening in acute infection.

    Syphilis (Treponema pallidum)
    - Serology: treponemal antibody test; non-treponemal test
    - NAT: not performed
    - Window period: serology 10–30 days
    - Notes: T. pallidum survives poorly in refrigerated blood, making transfusion transmission rare but documented.
    Treponemal antibodies persist long after infection and treatment, which is why syphilis is a key context for donor re-entry.

    Trypanosoma cruzi (Chagas Disease)
    - Serology: antibody testing only
    - NAT: not performed
    - Window period: serology 3–8 weeks
    - Notes: Chronic infection with durable antibodies enables one-time serologic screening. Transmission is rare but serious and linked to donors with residence in endemic areas.

    Babesia
    - Serology: not performed
    - NAT: Babesia DNA
    - Window period: NAT 7–14 days
    - Notes: Persistent asymptomatic parasitemia necessitates regional NAT screening. Babesia is a leading cause of fatal transfusion-transmitted infection in the U.S., with required testing in endemic regions including the Northeast and Upper Midwest.

    Deferrals: Temporary vs Indefinite

    - Temporary deferrals apply when risk decreases with time
    - Indefinite deferrals apply when risk does not meaningfully decrease

    Examples include:
    - Temporary: recent tattoo, recent exposure, acute illness
    - Indefinite: HIV, chronic HBV, HCV, HTLV, vCJD risk

    High-Yield Deferral Periods (Boards Love These)

    Some deferrals are particularly high yield because they test whether you understand current, risk-based policy rather than outdated rules.

    Malaria
    - Travel to an endemic area (no illness): 3-month deferral
    - Residence in an endemic area or prior malaria: 3-year deferral

    High-Risk Sexual Behavior or Injection Drug Use
    - Universal 3-month deferral
    - Applies regardless of gender or sexual orientation

    Incarceration
    - >72 consecutive hours: 3-month deferral
    - <72 hours: no deferral

    Tattoos
    - State-licensed facility: no deferral
    - Non-state-licensed facility: 3-month deferral

    vCJD-Related Risks
    - Residence in Great Britain or Europe: no deferral under current FDA guidance
    - Receipt of cadaveric pituitary-derived human growth hormone: indefinite deferral
    - Cadaveric dura mater transplant: indefinite deferral

    Donor Eligibility Potpourri (The Real-World Stuff)

    Donor Re-Entry

    Donor re-entry allows individuals with false-positive screening tests to become eligible to donate again.
    This process:
    - Is pathogen-specific
    - Is regulated by the FDA
    - Requires repeat testing on a subsequent donation

    Confirmed infections generally preclude re-entry, with syphilis (after full treatment) being the main exception.

    Product Look-Backs

    Product look-backs occur when a donor is later found to have a positive infectious disease test. All donations during a defined prior period must be investigated to determine:
    - Whether products were transfused
    - Whether recipient notification or testing is required

    For boards:
    - Look-backs are mandated for HIV-1/2 and HCV
    - The required look-back period is 12 months prior to the positive test

    Special Donor Populations

    Directed Donors
    - Must meet the same infectious disease eligibility criteria
    - No relaxation of standards
    - If unused, products may be returned to general inventory

    Autologous Donors
    - Minimum hemoglobin: >11 g/dL
    - Collection must occur >72 hours before surgery
    - Requires a physician order
    - Infectious disease testing may vary by institutional policy
    - If unused, products are discarded

    Regulatory Oversight: Who Sets the Rules?

    FDA
    - Establishes and enforces the governing regulations
    - 21 CFR 630 governs donor eligibility requirements

    AABB
    - Interprets FDA regulations into operational standards
    - Maintains the Donor History Questionnaire

    Understanding who regulates what matters — especially when policies change.

    Consolidated Board Pearls

    The Basics
    - Define the window period → Time between infection and detection
    - What does DHQ stand for? → Donor History Questionnaire

    Infectious Disease Screening
    - Name 3 pathogen classes not directly tested → Most bacteria, most parasites, prion diseases

    Screening Tests
    - Most common fatal transfusion-associated infection? → Babesia
    - HIV-1/2 NAT window period? → 9–11 days
    - Virus with longest residual transfusion risk? → HBV

    Deferrals
    - Universal deferral period for high-risk behavior? → 3 months
    - Deferral for incarceration <72 hours? → None
    - Deferral for cadaveric dura mater transplant?
    → Indefinite

    Eligibility Potpourri
    - Mandated look-back period for HIV? → 12 months
    - What is donor re-entry? → Process allowing donors with false-positive tests to donate again
    - If a directed unit is unused, must it be discarded? → No, it may enter general inventory
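    The window-period concept above also underlies the classic incidence/window-period model for estimating residual risk: risk per donation ≈ donor incidence rate × window period. A minimal sketch, where the incidence figure is a made-up placeholder rather than real surveillance data:

```python
# Incidence/window-period model for residual transfusion risk:
#   residual risk per donation ~ donor incidence rate x window period.
# The incidence value below is an illustrative placeholder, not real data.

def residual_risk(incidence_per_100k_py: float, window_days: float) -> float:
    """Approximate per-donation residual risk of an infectious window-period unit."""
    incidence_per_py = incidence_per_100k_py / 100_000   # per person-year
    return incidence_per_py * (window_days / 365.25)

# Same hypothetical donor incidence, two window lengths (NAT vs serology)
nat_risk = residual_risk(incidence_per_100k_py=2.0, window_days=10)
sero_risk = residual_risk(incidence_per_100k_py=2.0, window_days=20)

print(f"NAT-length window: ~1 in {1 / nat_risk:,.0f} donations")
print(f"Serology-length window: ~1 in {1 / sero_risk:,.0f} donations")
```

    The point the model makes explicit: halving the window period halves the residual risk, which is why shortening detection windows with NAT matters even when serology is already in place.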

  • Board Prep: Introduction to Stem Cell Collection and Transplant

    Stem cell collection sits at the intersection of hematology, immunology, and procedural medicine. It’s conceptually simple — collect enough hematopoietic stem cells to reconstitute marrow — but operationally complex, with decisions at every step that affect engraftment, toxicity, and long-term outcomes. This post walks through stem cell collection from a practical, systems-level perspective: what we collect, where it comes from, how we mobilize it, and what determines whether a transplant succeeds.

    The Big Picture: What Are We Collecting?

    At the center of stem cell transplantation are hematopoietic stem cells (HSCs) — most commonly identified clinically as CD34-positive cells. These cells are capable of:
    - Self-renewal
    - Differentiation into all mature blood lineages

    Clinically, we collect them for three main purposes:
    - Autologous transplant, where patients receive their own cells back after myeloablative therapy
    - Allogeneic transplant, where donor cells replace a recipient’s marrow
    - Marrow rescue following intensive chemotherapy

    While multiple sources exist, modern practice overwhelmingly favors peripheral blood collection.

    Where Stem Cells Come From

    Peripheral Blood

    Peripheral blood stem cells are now the dominant source for both autologous and allogeneic transplants. They:
    - Yield higher CD34+ cell counts
    - Engraft faster than bone marrow
    - Are collected via apheresis rather than surgery

    The tradeoff, particularly in the allogeneic setting, is a higher risk of graft-versus-host disease (GVHD).

    Bone Marrow

    Bone marrow harvests are obtained directly from the iliac crests under anesthesia. Compared with peripheral collections, they:
    - Require invasive access
    - Contain more red blood cell contamination
    - Carry higher risk of contamination with skin flora

    They are used less frequently but remain relevant in specific clinical contexts.

    Cord Blood

    Cord blood is largely peripheral to apheresis practice but remains board-relevant.
    It is:
    - Cryopreserved and banked long-term
    - More tolerant of HLA mismatch
    - Limited by lower total cell dose, sometimes requiring multiple units or ex vivo expansion

    Mobilization: Getting Stem Cells Into the Blood

    Under normal conditions, hematopoietic progenitor cells reside in the bone marrow niche, where adhesion molecules and chemokine gradients keep them anchored and quiescent. Mobilization disrupts that relationship. Key mechanisms include:
    - CXCR4–CXCL12 (SDF-1α) signaling, which tethers stem cells to marrow stroma
    - Soluble factors such as stem cell factor
    - Proteases and neurotransmitter-mediated signals

    The most commonly used mobilizing agent is G-CSF, which indirectly alters the marrow microenvironment and increases circulating CD34+ cells. Plerixafor (AMD3100, Mozobil) works differently: it directly inhibits CXCR4, rapidly releasing stem cells into the peripheral circulation. This is particularly useful in poor mobilizers.

    How We Collect Stem Cells

    Apheresis

    Peripheral blood stem cells are collected via leukapheresis, using continuous-flow cell separators. The procedure:
    - Processes large blood volumes
    - Uses ACD-A as the anticoagulant
    - Selectively collects mononuclear cells enriched for CD34+ cells

    This is the most common and operationally efficient collection method.

    Bone Marrow Harvest

    Bone marrow collection involves multiple passes through skin and cortical bone. Compared with apheresis, it:
    - Has higher contamination risk
    - Produces products with more RBCs
    - Carries procedural risks such as bleeding and post-procedure anemia

    How Much Is Enough? Target Cell Dose

    Cell dose matters — both for engraftment speed and downstream complications.

    Autologous transplant
    - Minimum effective dose: ~2 × 10⁶ CD34+ cells/kg
    - Optimal dose: 4–6 × 10⁶ CD34+ cells/kg

    Allogeneic transplant
    - Similar target range
    - Higher doses improve engraftment but increase GVHD risk

    Collection strategies often balance donor safety, collection efficiency, and the marginal benefit of additional cells.
    Complications of Stem Cell Collection

    Citrate Toxicity (Most Common)

    ACD-A chelates calcium, leading to hypocalcemia. Symptoms range from:
    - Perioral tingling and paresthesias
    - Tetany
    - Cardiac arrhythmias in severe cases

    Management includes oral or IV calcium supplementation and slowing the collection rate.

    Vascular Access Issues

    Central venous catheters carry risks of:
    - Infection
    - Thrombosis
    - Bleeding

    Donor-Specific Issues

    Allogeneic donors may experience G-CSF-related side effects, most commonly bone pain and headache. Donor safety always takes precedence over collection yield.

    Bone Marrow Harvest Complications

    These include local site pain, bruising, hematoma formation, and anemia.

    Autologous vs Allogeneic Collection: Why the Difference Matters

    Autologous transplants avoid GVHD but lack graft-versus-tumor effects. Allogeneic transplants introduce immunologic risk — but also therapeutic benefit. This balance drives donor selection, conditioning regimens, and post-transplant monitoring.

    Infectious Disease Testing and Product Handling

    All stem cell products require infectious disease screening, including:
    - HIV
    - HBV
    - HCV
    - HTLV
    - Syphilis

    Product handling differs by transplant type:
    - Autologous products are typically cryopreserved
    - Allogeneic products may be infused fresh or frozen

    Cryopreservation Basics
    - DMSO is the most common cryoprotectant
    - Controlled-rate freezing precisely regulates temperature to prevent intracellular ice crystal formation
    - Passive freezing uses insulated containers and −80 °C storage but offers less control

    Engraftment: The Endpoints Everyone Cares About

    Boards — and clinicians — care deeply about engraftment definitions:
    - Neutrophil engraftment: ANC > 500/µL for 3 consecutive days
    - Platelet engraftment: platelets > 20,000/µL without transfusion support for 7 days

    These metrics anchor post-transplant monitoring and outcome reporting.

    Consolidated Board Pearls

    Stem Cell Sources
    - Which source has the most CD34+ cells? → Peripheral blood
    - Highest GVHD risk?
    → Peripheral blood
    - Faster engraftment than marrow? → Yes

    Mobilization
    - Mechanism of plerixafor? → CXCR4 inhibition
    - Most commonly used mobilizing agent? → G-CSF

    Collection
    - Highest contamination risk with skin commensals? → Bone marrow harvest
    - Most common collection method? → Apheresis

    Target Dose
    - Minimum effective dose? → 2 × 10⁶ CD34+ cells/kg
    - Benefit of higher dose? → Faster engraftment
    - Risk of higher dose? → GVHD

    Apheresis Complications
    - Most common anticoagulant? → ACD-A
    - Mechanism? → Calcium chelation
    - Most common side effect? → Hypocalcemia
    - Treatment? → Calcium supplementation
    - Most common G-CSF side effect? → Bone pain

    Autologous vs Allogeneic
    - Risk of allogeneic transplant? → GVHD
    - Benefit? → Graft-versus-tumor effect

    Product Handling
    - Most common cryoprotectant? → DMSO
    - Why controlled-rate freezing? → Prevents intracellular ice crystals

    Engraftment
    - Neutrophils: ANC > 500 for 3 days
    - Platelets: >20k without transfusion for 7 days
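    Because the dose targets above are expressed per kilogram, the absolute collection goal scales with patient weight. A minimal sketch of that arithmetic, in which the collected-product figures are made-up illustrative values:

```python
# Translate the per-kg CD34+ targets above into an absolute collection goal.
# The "collected product" numbers are illustrative assumptions, not real data.

MIN_DOSE = 2e6      # CD34+ cells/kg, minimum effective autologous dose
OPTIMAL_LOW = 4e6   # lower bound of the optimal dose range

def collection_goal(weight_kg: float, dose_per_kg: float) -> float:
    """Total CD34+ cells needed to hit a given per-kg target dose."""
    return weight_kg * dose_per_kg

weight = 70.0
minimum = collection_goal(weight, MIN_DOSE)     # 1.4e8 total CD34+ cells
optimal = collection_goal(weight, OPTIMAL_LOW)  # 2.8e8 total CD34+ cells

# Hypothetical collected product: 250 mL at 9e5 CD34+ cells/mL
collected = 250 * 9e5  # 2.25e8 cells

print(f"Meets minimum: {collected >= minimum}; meets optimal: {collected >= optimal}")
```

    In this made-up example the product clears the minimum dose but not the optimal range, which is the situation that typically prompts a second day of collection.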

  • Why Low Haptoglobin Isn’t the Smoking Gun We Think It Is

    Most of us were taught to think of a low haptoglobin as a red flag for hemolysis. The logic seems airtight: free hemoglobin spills into the plasma, haptoglobin binds it, and the levels drop. End of story… right?

    Except it’s not. Clinically, low haptoglobin is one of the least specific markers we use — and in some patients, it tells you absolutely nothing about hemolysis at all. This post is about those patients. I’m talking about the ones with clear plasma, normal LDH, normal indirect bilirubin, and maybe even a reticulocyte count that couldn’t be less interested in hemolysis. And yet: haptoglobin is low or undetectable. So what else can explain it? Let’s walk through the major non-hemolytic causes — the ones that quietly trip up learners and seasoned clinicians alike.

    1. Liver Disease: When the Factory Shuts Down

    The liver makes haptoglobin. So when the liver is struggling, haptoglobin drops — sometimes dramatically. Patients with cirrhosis often have chronically low haptoglobin levels that normalize after liver transplantation. That’s a pretty clean demonstration that the issue isn’t destruction of haptoglobin, but underproduction. This is why haptoglobin becomes nearly useless for diagnosing hemolysis in anyone with:
    - Cirrhosis
    - Advanced fatty liver disease
    - Hepatitis
    - Impaired synthetic function of any cause

    If the liver can’t make enough haptoglobin in the first place, it can’t drop in response to hemolysis.

    2. Genetic Variants: The “Constitutionally Low” Haptoglobin Patient

    This is the category that surprises people the most. It turns out that baseline haptoglobin varies widely between individuals, and genetics alone account for nearly half of that variability. A genome-wide association study identified rs2000999 as a major determinant of circulating haptoglobin, explaining 45% of the genetic influence on baseline levels.
    Another variant, rs12162087, has been linked specifically to constitutionally low haptoglobin — especially in individuals with the homozygous reference genotype (GG). These people may always have low haptoglobin, even in the complete absence of hemolysis. You could check their plasma a hundred times and misdiagnose them every time unless you recognize this pattern.

    3. Pregnancy: Physiology Masquerading as Pathology

    Pregnancy reshapes the proteomic landscape in ways we don’t always appreciate. Haptoglobin levels drop significantly in pregnancy, especially during the second trimester, and may even become undetectable. By the third trimester, levels often drift back toward normal — another reminder that trimester-specific reference intervals actually matter. A low haptoglobin in a pregnant patient means essentially nothing without clinical and laboratory context. And yes — you can truly see an undetectable value with no hemolysis at all.

    4. Recent Blood Transfusion: A Quiet, Temporary Dip

    There are documented cases of undetectable haptoglobin within 12 hours of transfusion even when all other hemolysis markers are completely normal and the plasma is visually clear. This isn’t hemolysis. It’s simply a redistribution phenomenon combined with assay dynamics. The takeaway: a low haptoglobin immediately after transfusion should not be over-interpreted.

    5. Malnutrition, Allergic Reactions, and Seizure Disorders

    These conditions appear less frequently in textbooks but are well-described contributors to low haptoglobin. Mechanisms differ:
    - Malnutrition → impaired hepatic protein synthesis
    - Allergic reactions → acute consumption or immune-modulated shifts
    - Seizure disorders → transient metabolic changes lowering haptoglobin

    They’re not common causes, but they’re real — and they matter when your labs don’t fit the hemolysis story.

    6. A Note on Inflammation (The Curveball)

    Haptoglobin is an acute-phase reactant, which means inflammation, infection, or malignancy usually increase its levels. But here’s the critical nuance: being an acute-phase reactant doesn’t protect haptoglobin from being depleted in hemolysis. In other words, a patient can have:
    - Very high haptoglobin from inflammation, and
    - Still develop a low haptoglobin if hemolysis is severe enough

    Inflammation pushes the baseline up, hemolysis pulls it down — and the net result depends entirely on which force wins. This is why haptoglobin is a good marker in uncomplicated cases and a confusing one in complex ones.

    So What Do We Do With a Low Haptoglobin?

    We contextualize it. If the plasma is clear, the LDH and bilirubin are normal, and the reticulocyte count is unremarkable, you’re likely not dealing with hemolysis — regardless of what the haptoglobin is doing. Low haptoglobin is a supportive hemolysis marker, not a diagnostic one. And understanding these alternate causes protects us from over-calling hemolysis and chasing ghosts.

    Closing Thoughts

    Haptoglobin is often taught as a binary test, but its real-world behavior is anything but binary. A low value raises a question — it doesn’t deliver an answer. The more we understand its limitations, the better we become at interpreting the whole clinical picture instead of anchoring on a single number.
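    The contextual rule above can be sketched as a toy decision helper. This is an illustrative simplification only — the field names, corroboration count, and logic are mine, not a validated algorithm:

```python
# Toy sketch of the contextual rule above: a low haptoglobin only supports
# hemolysis when the rest of the panel agrees. The structure and thresholds
# are illustrative assumptions, not validated clinical cutoffs.
from dataclasses import dataclass

@dataclass
class HemolysisPanel:
    haptoglobin_low: bool
    plasma_clear: bool
    ldh_elevated: bool
    indirect_bili_elevated: bool
    retic_elevated: bool

def interpret(p: HemolysisPanel) -> str:
    # Count the corroborating hemolysis markers besides haptoglobin
    corroborating = sum([not p.plasma_clear, p.ldh_elevated,
                         p.indirect_bili_elevated, p.retic_elevated])
    if p.haptoglobin_low and corroborating >= 2:
        return "supports hemolysis"
    if p.haptoglobin_low and corroborating == 0:
        return "consider non-hemolytic causes (liver, genetics, pregnancy, transfusion)"
    return "indeterminate; correlate clinically"

# Isolated low haptoglobin with an otherwise clean panel
print(interpret(HemolysisPanel(True, True, False, False, False)))
```

    The point is not the code but the shape of the reasoning: the isolated low value routes you toward the non-hemolytic causes in this post, not toward a hemolysis workup.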

  • A Practical Guide to Using AI Tools for Literature Searches

    AI tools are showing up everywhere in medicine right now — in our inboxes, in meetings, and quietly in the background as we prepare talks or look up unfamiliar territory. Many of us are experimenting with them in real time, often between consults or after a busy clinic day, trying to figure out what they’re actually good at and how to use them without creating extra work.

    One place where AI can be genuinely helpful is in orienting yourself to a clinical question — especially when you need a quick overview before diving deeper. Over the past year, I’ve found that pairing AI tools with traditional verification steps has made my own literature searches faster and more organized, while still keeping the process grounded in real evidence. Since many colleagues are exploring these tools too, I thought I’d share the simple workflow I’ve settled into. Nothing here is prescriptive; it’s just what I’ve found useful as a clinician who wants speed and reliability.

    What AI Can Do Well (and Why It’s Helpful)

    AI can be a surprisingly helpful companion when you’re approaching a clinical topic. It can:
    - Summarize large volumes of text quickly
    - Highlight themes or connections across papers
    - Provide a starting point for a topic you haven’t revisited in a while
    - Help double-check that you’re not missing obvious papers
    - Turn unstructured information into something more organized

    AI isn’t a replacement for reading source papers, but it can make it easier to start with some structure already in place.

    My Three-Step Workflow

    1. Start With OpenEvidence

    OpenEvidence has become my go-to for initial orientation. It’s built specifically for medical literature and has content agreements with NEJM and JAMA, which helps anchor it in reputable sources. What I appreciate most is that every statement comes with a citation, and you can click directly into the underlying study. Two very practical notes: It’s free for medical professionals, which makes it easy to recommend.
    There’s also a mobile app, which is surprisingly handy when you’re on service and need to look something up between cases. For me, OpenEvidence gives a quick landscape of what has been studied, what hasn’t, and where the evidence feels solid versus sparse.

    Website: https://www.openevidence.com/

    2. Cross-Check and Structure With Elicit

    I don’t use Elicit for every question, but I often reach for it when I’m working on publications, talks, or anything where I need to be comprehensive. Elicit is trained on a broader scientific corpus, which means it sometimes pulls in studies that OpenEvidence misses or adds contextual pieces that help round out the picture. Its real strengths are:
    - Generating tables from search results
    - Extracting sample sizes and primary outcomes
    - Grouping related studies
    - Summarizing PDFs you upload

    If OpenEvidence helps me understand the landscape, Elicit helps me organize and structure that landscape — especially when multiple study designs or subtopics are in play.

    Website: https://elicit.com/

    3. Verify With a DOI Check (My Favorite 10-Second Step)

    Once I’ve identified the key papers, I take the DOI or PubMed ID and paste it directly into Mendeley, which automatically fetches the citation metadata and abstract. I rely on this step because it confirms that:
    - The paper exists
    - The metadata is correct
    - The journal, year, and authors match
    - The abstract aligns with the AI summary

    Not all reference managers can fetch metadata from just a DOI or PubMed ID — but Mendeley can, and Mendeley is free, which makes it a great option if you need an accessible verification tool. This small step has saved me more than once from citing a misattributed or nonexistent paper.

    A Gentle Note on Limitations

    AI tools are still evolving, and so are we. They can miss studies, overstate certainty, or conflate adjacent concepts. That’s not a failure — just a reminder that they’re best used alongside our clinical judgment and our usual habits of checking primary sources.
    For me, the workflow above keeps things balanced: AI helps with speed and structure, and the DOI check keeps everything grounded in reality.

    When This Workflow Helps Most

    I reach for this system when:
    - Preparing for a meeting or protocol discussion
    - Refreshing a topic I haven’t touched in a while
    - Getting oriented before reading more deeply
    - Drafting a talk, manuscript, or background section
    - Checking whether references actually exist before citing them

    This workflow is flexible: I use it for everything from quick orientation to deeper literature reviews. The steps stay the same; the depth just changes depending on the question. This has become my primary approach to reviewing the literature -- it fuses speed with reliability in a way that fits how we practice today. I still use PubMed when I need to dive deeper into a particular thread, but the core workflow starts here.

    Closing Thoughts

    AI is becoming part of everyday clinical practice, and most of us are learning as we go. My hope is that sharing this workflow helps demystify the process a bit and gives you a reliable and practical starting point if you’re exploring these tools yourself. If you’ve found other strategies or tools that work well for you, I’d genuinely love to hear them — we’re all figuring this out together.

  • When Dilution Becomes Dangerous: Why We Don’t Use Depletion Exchange in High-Risk Patients

    There are days in Transfusion Medicine when the most interesting teaching moments arrive quietly — between phone calls, in the apheresis unit hallway, or as someone leans back in a rolling chair and says, “Okay, but why can’t we just do a depletion exchange here?”

    Today it came up while troubleshooting an inpatient red cell exchange on a sickle cell patient who was a lot sicker than he’d been two weeks earlier. One person suggested adding a depletion phase to improve the efficiency of the run. And that’s when the conversation shifted — away from algorithms and toward physiology, which is where these decisions actually live. Because the truth is simple: depletion exchange works beautifully — until it doesn’t. And the people for whom it can go wrong are exactly the ones who can’t afford a period of reduced oxygen delivery.

    What Exactly Is a Depletion Exchange?

    Before diving into the “why not,” it’s worth being clear about what a depletion exchange actually is — because the term gets thrown around loosely, and not everyone pictures the same thing. A depletion exchange, also known as isovolemic hemodilution red cell exchange, is a specific variant of automated RCE in which the procedure begins with a hemodilution phase. The sequence looks like this:
    1. Patient RBCs are removed. The device takes off red cells from the circulating volume.
    2. Simultaneously, the machine replaces the removed volume with crystalloid or 5% albumin. This maintains volume (isovolemia) but not oxygen-carrying capacity.
    3. After the patient’s hematocrit is intentionally lowered, the machine proceeds with the regular red cell exchange phase — removing patient cells and replacing them with donor RBCs.

    The rationale is straightforward: by lowering the patient’s starting hematocrit, each donor unit becomes more “effective” at reducing HbS%, so fewer units are needed. It increases efficiency, reduces donor exposure, and improves the geometry of the exchange.
    But there is a catch — and it’s the one people forget: you are creating a temporary period of reduced oxygen delivery. Isovolemic ≠ iso-oxygenating. For most stable outpatients, that’s fine. For others, it’s the wrong physiologic bet.

    Once you see the mechanics laid out like that — the intentional dip in hematocrit, the temporary thinning of oxygen delivery — the real issue isn’t the technology at all. It’s the patient. And there are certain patients whose physiology simply can’t afford that moment of dilution.

    1. Acutely Ill Inpatients: No Physiologic Room to Fall

    We see this all the time: the patient with acute chest syndrome. The patient with sepsis layered on top of pain crisis. The patient who walked in hypoxic and is now teetering at 94% on oxygen. These patients are already running on borrowed reserve. Even a short-lived decrease in hematocrit can widen the gap between “holding steady” and “crashing.” Their tissues are extracting everything they can. Their compensatory mechanisms are maxed. A brief dilution phase risks exactly what we’re trying to prevent: worse perfusion, more ischemia, more instability. So we skip depletion. Not because we can’t do it, but because they can’t afford the physiologic tax.

    2. Pregnancy: Two Patients, One Oxygen Supply

    Pregnancy is its own cardiovascular universe — high output, reduced systemic vascular resistance, compressed venous return, and a placenta that is exquisitely sensitive to maternal perfusion changes. The math is simple: lower maternal Hct → lower uteroplacental oxygen delivery. Even transiently. Even “just during the depletion phase.” And when oxygen delivery falters, the fetus feels it first. In the interest of safety, we do not perform depletion exchanges in pregnant patients.

    3. Cardiac Patients: Tightly Balanced at Baseline

    Then there are the patients with cardiac histories — and the patients with cardiac histories they don’t know they have yet.
    In cardiology, the pendulum has swung back toward liberal transfusion strategies for acute coronary syndromes, with several recent studies showing improved outcomes when hemoglobin is kept closer to 10 g/dL rather than drifting down into restrictive ranges. The reason is simple and intuitive: the ischemic myocardium hates anemia. Coronary perfusion is already limited; oxygen extraction is already maxed. Any additional dip in oxygen-carrying capacity — even a brief one — can worsen supply–demand mismatch.

    And that’s the core problem with depletion exchange in this population. The machine keeps the volume steady, yes — but it cannot shield the myocardium from the temporary but real drop in oxygen delivery during the dilution phase. It’s a moment the heart has no margin to absorb. So for these patients, we choose safety. Exchange only. Slow and steady.

    So Why Do We Do Depletion at All?

    Because when it’s safe — in stable outpatients without physiologic red flags — it is useful:
    - It can make the exchange more efficient.
    - It can reduce donor exposure.
    - It can improve the final HbS% with fewer units.

    But the moment someone is acutely ill, pregnant, or carrying cardiac risk, those advantages don’t justify even a temporary hit to oxygen delivery.

    Apheresis Isn’t Just a Machine. It’s Physiology.

    That was really the take-home from today’s conversation. Our protocols can get so algorithmic that it’s easy to forget the body isn’t following the same neat logic tree. There’s a human being on the other side of the circuit — one who may be running out of compensatory room. So when we pick the exchange modality, we aren’t just choosing a setting on the instrument. We’re declaring what we think the patient can physiologically tolerate. For some, dilution is a gift. For others, it’s a risk not worth taking. And the art — the part that never shows up in the software — is knowing the difference.
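    The efficiency argument can be made concrete with a simplified continuous-exchange model, in which the fraction of original (HbS-containing) red cells decays exponentially with the exchanged red cell volume. A rough sketch — the blood volume per kilogram, RBC content per unit, and HbS targets are all illustrative assumptions, not a device algorithm:

```python
import math

# Simplified continuous-exchange model: the patient's HbS fraction decays
# exponentially as red cell volume is exchanged. All parameters are
# illustrative assumptions (70 mL/kg blood volume, ~180 mL RBC per unit).

def units_needed(weight_kg: float, hct: float,
                 hbs_start: float, hbs_target: float,
                 unit_rbc_ml: float = 180.0) -> float:
    rbc_volume_ml = 70.0 * weight_kg * hct  # patient red cell volume
    # Volume that must be exchanged to dilute HbS from start to target
    exchanged_ml = rbc_volume_ml * math.log(hbs_start / hbs_target)
    return exchanged_ml / unit_rbc_ml

# Same 70 kg patient, same HbS goal (90% -> 30%), two starting hematocrits
standard = units_needed(70, hct=0.30, hbs_start=0.90, hbs_target=0.30)
post_dilution = units_needed(70, hct=0.24, hbs_start=0.90, hbs_target=0.30)

print(f"standard: {standard:.1f} units; after hemodilution: {post_dilution:.1f} units")
```

    Lowering the starting hematocrit shrinks the red cell volume the machine has to turn over, which is exactly why the depletion phase saves donor units — and exactly why it transiently costs oxygen-carrying capacity.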

  • AI as a Second Reader, Not a Second Brain: What We’re Getting Wrong in Pathology AI Adoption

Introduction: The Problem With the "Second Brain" Metaphor

Artificial intelligence in pathology and laboratory medicine is often marketed with an irresistible promise: a second brain that will spot what humans miss, automate the tedious parts of practice, and bring order to the overwhelming volume of data moving through modern health systems. It’s a compelling metaphor—but also a deeply misleading one.

The truth is simpler and far more useful: most AI tools in lab medicine today are not second brains. They are second readers. They assist. They triage. They flag patterns. They highlight outliers. They nudge clinicians toward questions worth asking.

This is not a limitation—it is the sweet spot of responsible AI. The problem is that our metaphors, expectations, and sometimes our implementation strategies haven’t caught up with this reality. When we treat assistive AI as if it were autonomous, we misjudge both its power and its risks. This piece reframes AI in pathology and transfusion medicine through a more grounded, clinically realistic lens: AI as a second reader—never the primary decision-maker.

Assistive vs Autonomous AI: Why the Distinction Matters

In public conversations, "AI" tends to be treated as a single monolithic category. But in clinical practice, the distinction between assistive and autonomous systems is foundational.

Assistive AI

Assistive AI tools support human decision-making without replacing it. They flag abnormal cells or slide regions for review, surface unusual utilization patterns, predict inventory needs, identify potential bleeding risks or outlier transfusion practices, and augment quality control workflows. The human remains the final decision-maker. The AI’s role is advisory.

Autonomous AI

Autonomous AI, by contrast, can issue a clinical interpretation without human confirmation. The classic example is FDA-cleared autonomous diabetic retinopathy screening, where the system renders a result independently.
Pathology is not there—and ethically, operationally, and scientifically, it shouldn’t aspire to be. Tissue interpretation, pre-analytic variability, complex clinical context, and downstream consequences place pathology squarely in the domain of human-in-the-loop practice.

Moreover, the limitations of autonomous AI make full automation particularly risky in this field. Even state-of-the-art large models exhibit irreducible error rates, including hallucinations that arise not from software bugs but from the fundamental way probabilistic systems generate outputs. OpenAI and other major developers have acknowledged that hallucinations are inevitable in current-generation AI—an acceptable risk for drafting emails, but not for diagnosing malignancy.

In pathology, an autonomous error is not a benign failure mode; it is a misdiagnosis. Slides vary between institutions, stains differ, scanners introduce artifacts, and rare entities can be misclassified with absolute confidence. The model does not know when it does not know. Human-in-the-loop practice is therefore not a philosophical preference but a safety requirement.

Current professional sentiment reflects this: most pathologists are cautiously optimistic about assistive AI but deeply wary of autonomous systems. The field understands that algorithms can elevate quality and efficiency, but they cannot—and should not—bear sole responsibility for interpreting tissue, integrating clinical nuance, or adjudicating uncertainty.

Why the distinction matters

Marketing narratives blur the line between assistive and autonomous. Operationally, this creates two dangerous extremes: over-trust (assuming the model "knows" more than it does) and under-trust (dismissing or ignoring helpful signals because expectations were unreasonable). Treating AI as a second reader helps calibrate our expectations and clarifies the respective responsibilities of humans and machines.
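The difference is visible even at the level of an interface sketch. This is a deliberately simplified illustration with hypothetical names, not any vendor’s API: an assistive system returns a review worklist, where an autonomous system would return a signed-out result.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    region_id: str
    score: float        # model suspicion score, 0 to 1
    needs_review: bool  # always True: assistive output is never a diagnosis

def assistive_read(model_scores, threshold=0.5):
    """Assistive mode: surface slide regions worth a pathologist's attention.
    The return value is a worklist to review, not an interpretation."""
    return [
        Flag(region_id=region, score=score, needs_review=True)
        for region, score in model_scores.items()
        if score >= threshold
    ]

# The model triages; the human signs out.
worklist = assistive_read({"r1": 0.91, "r2": 0.12, "r3": 0.63})
# Two regions (r1, r3) flagged, each awaiting human confirmation.
```

The design choice that matters is in the return type: nothing the assistive function emits can reach a report without a human in between.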
Workflow, Not Math: The Hidden Barriers to Clinical Integration

Technical performance is rarely the limiting factor for AI deployment in lab medicine. More often, the barriers are operational and workflow-driven.

Pre-analytic variability. No algorithm, however elegant, can overcome poor input. Hemolysis, mislabeled samples, incomplete clinical information, and inconsistent sample handling all degrade model performance. "Garbage in, garbage out" is not cynicism; it is clinical reality.

LIS/EMR integration. An AI flag that never reaches the transfusion physician or technologist in a usable format is functionally irrelevant. Many promising tools fail not because they are inaccurate, but because they exist outside the everyday workflow.

Alert fatigue. If an AI model surfaces insights the same way EMR pop-ups surface medication alerts, clinicians will click through them reflexively. Effective AI must blend into the workflow — not interrupt it.

Staff training. AI disagreement is a liminal space. When a model flags an unexpected pattern, what is the technologist supposed to do? Without clear protocols, the burden on staff increases rather than decreases.

Model stewardship. Who revalidates the model yearly? Who monitors drift? Who owns threshold adjustments? Governance is critical and cannot be an afterthought.

These challenges are not exciting, but they determine whether an AI tool genuinely helps clinicians — or gets abandoned.

The Hype Cycle Problem

AI in medicine moves in predictable hype cycles. When expectations are unrealistic, three harms follow:

1. Overpromising leads to disillusionment. When leadership expects instant automation, disappointment is inevitable. This can poison the well for future tools that are more modest but more practical.

2. Steps get skipped. Proper change management, validation, and staff training take time. Under the pressure of hype, institutions try to "roll out" tools before anyone understands how to use them.

3. Trust becomes polarized. Some clinicians embrace AI uncritically. Others reject it entirely. Neither posture produces safe patient care.

Reframing AI as a second reader helps temper the hype and brings expectations back into alignment with clinical workflow and real-world constraints.

What Safe, Responsible AI Actually Looks Like

Clear intended use. Every AI tool must answer one question precisely: what is the intended use? Ambiguous purpose leads to ambiguous outcomes.

Human-in-the-loop structure. High-impact clinical decisions — transfusion thresholds, rejection of critical values, or product allocation — should never be fully automated. AI highlights patterns; humans interpret them.

Local validation. Models must be calibrated to local population characteristics, including major demographic differences, high-obesity populations, rare disease prevalence, and unique practice patterns.

Ongoing monitoring. Performance changes over time. Drift is real. Monitoring is not optional.

Defined failure modes. Clinicians need clarity: when should I ignore this model? Understanding limits is as important as understanding utility.

Explainability (pragmatic, not academic). Technologists and clinicians need broad insight into why a model fires — high-level logic is sufficient. Full algorithmic transparency is not required.

Together, these guardrails ensure that AI functions as a clinically meaningful assistant, not an unpredictable black box.

A Transfusion Medicine Lens: Where AI Actually Delivers Value

Transfusion medicine offers a prime example of how AI should function in practice: as a second reader that enhances safety and efficiency.

Utilization and stewardship. AI can identify patterns of overuse or underuse, highlight outlier ordering habits, or flag cases where restrictive thresholds are inconsistently applied. But humans — transfusion physicians, technologists, PBM programs — interpret and respond to these patterns.
Inventory and product management. Platelet forecasting, rare phenotype prediction, and resource allocation are well suited to assistive AI. The model surfaces the signal; the human makes the plan.

Risk prediction. Predictive models for bleeding, DHTR risk, TRALI likelihood, or massive transfusion activation can bring subtle risk factors to the surface. They augment human judgment but do not replace it.

These examples demonstrate the core argument of this piece: AI helps most when it supports human cognition without competing with it.

Conclusion: Getting the Metaphor Right

AI in pathology and laboratory medicine is not a second brain—and expecting it to be one sets everyone up for failure. It is a second reader. A pattern spotter. A triage assistant. A flagger of outliers. A partner in safety and quality.

When we ground AI in its true purpose, we can finally deploy it in ways that are meaningful, safe, and sustainable. The challenge is not to automate pathology or transfusion medicine, but to integrate AI into workflows as a thoughtful collaborator.

The future of AI in the laboratory will belong to the institutions and clinicians that understand this distinction: useful AI is not autonomous. It is assistive — and that is exactly where it belongs.

©2023 by Caitlin Raymond. Powered and secured by Wix
