- TTP’s Little-Known Cousin: TAMOF as a TTP-Like Process
It was a routine service morning — until it wasn’t. The patient wasn’t crashing in a cinematic way. No massive bleeding. No dramatic hypotension. But the labs were drifting in a direction that felt wrong: platelets falling, creatinine creeping, LDH elevated, hemoglobin sliding just enough to notice. Organ dysfunction without a single unifying explanation. Somewhere between the pattern and the unease it produced, the diagnosis surfaced quietly: TAMOF.

What TAMOF Is — and Why It’s Easy to Miss

Thrombocytopenia-associated multiple organ failure (TAMOF) occupies an uncomfortable space between entities we think we understand well: DIC, TTP, and sepsis-associated coagulopathy. Because it doesn’t fit cleanly into any of them, it is often mislabeled — or not labeled at all.

At its core, TAMOF is a secondary thrombotic microangiopathy driven by systemic inflammation. Systemic inflammation leads to a relative decrease in ADAMTS13, which results in the accumulation of ultra-large vWF multimers. When regulatory capacity is insufficient for that inflammatory burden, platelet-rich microthrombi form in the microvasculature, impairing organ perfusion. This is why TAMOF behaves like TTP downstream — even though the trigger is infection or inflammation rather than autoimmunity.

It is not primarily a bleeding disorder. It is not primarily a consumptive coagulopathy. It is a microvascular platelet process with systemic consequences.

How the Labs Tell the Story

TAMOF rarely declares itself with a single decisive test. Instead, it reveals itself through converging laboratory signals, each nudging the differential toward microangiopathy.

LDH is often elevated, reflecting both microangiopathic hemolysis and tissue ischemia from small-vessel thrombosis. In isolation, this finding is nonspecific. In context — alongside falling platelets and worsening organ function — it becomes meaningful.

Haptoglobin may be low, but normal values do not exclude TAMOF. Inflammatory states raise baseline haptoglobin levels, which can mask hemolysis. Trends and correlations matter more than absolutes.

Peripheral smear findings can support the diagnosis, but they are often subtle. Schistocytes may be present — sometimes sparsely, sometimes clearly — depending on the degree of microangiopathic hemolysis.

Coagulation studies are especially useful for what they don’t show. In TAMOF, PT and aPTT are often normal or only mildly prolonged, fibrinogen is typically preserved, and bleeding is uncommon. This profile argues against overt DIC and redirects attention away from primary consumptive coagulopathy.

ADAMTS13: Severity Without a Shortcut

ADAMTS13 activity in TAMOF spans a wide range. Levels may be modestly reduced, markedly decreased, or — in some cases — severely deficient, with accompanying schistocytosis and overt microangiopathic hemolysis. What distinguishes TAMOF from classic immune-mediated TTP is not the absolute ADAMTS13 level, but the mechanism of deficiency. In TAMOF, reduced ADAMTS13 reflects:

- Consumption during systemic endothelial activation
- Inflammatory inhibition of enzyme activity
- Reduced hepatic synthesis of ADAMTS13

Even when ADAMTS13 activity is severely reduced, the process is typically secondary to inflammation or sepsis, rather than autoantibody-mediated. Rigid thresholds can mislead. An ADAMTS13 level interpreted in isolation may obscure the diagnosis rather than clarify it. Context matters.

Narrowing the Differential

Taken together, this laboratory pattern helps distinguish TAMOF from its closest mimics:

- Versus DIC: preserved coagulation parameters and a platelet-driven microangiopathic picture argue against primary consumption.
- Versus classic TTP: an inflammatory trigger and secondary mechanism of ADAMTS13 deficiency point away from immune-mediated disease, even when the downstream effects overlap.
- Versus “just sepsis”: progressive thrombocytopenia, rising LDH, and organ dysfunction out of proportion to hemodynamics suggest something more than cytokines alone.

No single lab makes the diagnosis. But the pattern does.

Why Plasma Exchange Makes Sense

Once TAMOF is recognized as a thrombotic microangiopathy, the rationale for therapeutic plasma exchange becomes clear. Plasma exchange functions in TAMOF much as it does in TTP:

- Replenishing ADAMTS13
- Removing ultra-large and high-molecular-weight von Willebrand factor multimers
- Reducing circulating inflammatory mediators that perpetuate endothelial injury

The trigger differs. The downstream pathophysiology does not. When TAMOF is dismissed as “just sepsis” or mislabeled as DIC, the opportunity for targeted intervention narrows. Early recognition turns an otherwise nebulous complication into a treatable process.

A Final Lab-Centered Takeaway

TAMOF is not rare because it is uncommon. It is rare because we don’t look for it. For those of us in laboratory medicine, this is where our value is clearest — not in reporting isolated numbers, but in helping clinicians see how those numbers fit together. Sometimes the diagnosis isn’t hidden. It’s just fragmented — waiting for someone to assemble the story.
- When At-Home ABO Typing Creates a Family Crisis
I learned something new this week: you can buy an at-home ABO blood typing kit on Amazon. I didn’t know that. And I suspect many transfusion medicine physicians don’t either.

I found out when a pediatrician called with a worried question. A newborn’s blood type had been determined appropriately in the hospital: A negative. The mother’s type was known: O negative. The father reported he was O negative, based on an at-home blood typing kit. The parents were now concerned about non-paternity.

At first glance, this looks like a classic ABO inheritance problem. Two O parents should not have an A child. But the problem wasn’t genetics — it was data quality. The father’s blood type was not actually known.

What at-home ABO typing really tells you

Consumer ABO kits perform forward typing only, using fingerstick blood applied to anti-A and anti-B reagents, with visual interpretation by the user. They do not include:

- Reverse typing
- Internal concordance checks
- Trained interpretation
- Safeguards against weak reactions, drying artifact, or clotting

These kits are widely available online and are not FDA-cleared diagnostic tests. They do not reliably determine a person’s blood type.

The most likely explanation is also the least dramatic

The simplest explanation was that the father is not type O. One particularly plausible possibility is blood group A2. About 20% of people with blood group A are A2, translating to roughly 4–8% of the general population, depending on ancestry. A2 red cells express fewer A antigens and may show weak or absent agglutination with some anti-A reagents, especially outside a controlled laboratory setting.

Critically:

- A2 individuals are identified on reverse typing, by the presence of anti-A1
- At-home kits do not include reverse typing
- Newborn hospital testing does include appropriate confirmatory methods

So an A2 father could easily misinterpret a forward-only home test as “O,” while the newborn’s A type is correctly identified. No exotic genetics required.

Other mundane failure modes

Even without A2:

- Weak agglutination may be misread as negative
- Drying artifact can obscure reactions
- Fingerstick clotting or poor mixing can alter appearance
- User interpretation error is common, even among trained staff

This is precisely why laboratory ABO determination relies on redundancy and safeguards, not a single visual read.

Why this matters clinically

ABO typing feels deceptively simple. Most people learn their blood type early and treat it as a personal identifier. That familiarity makes it especially vulnerable to misunderstanding. When an at-home test says “O,” people don’t hear: this is a forward type screen without confirmation. They hear: I know my blood type. In this case, a testing limitation nearly became a family crisis.

The ethical risk

Non-paternity should never be raised on the basis of an unvalidated consumer test. The risk here isn’t the existence of these kits — it’s clinicians being unaware of them and their failure modes.

A simple rule

If a patient says: “I know my blood type - I tested it at home.” The response should be calm and direct: “At-home blood typing kits are not reliable. If needed, we can determine your blood type properly through a laboratory.” No speculation. No escalation.

Why transfusion medicine should know this exists

This issue won’t appear in hemovigilance reports or quality dashboards. It will surface quietly as:

- Pediatric questions
- Awkward counseling conversations
- Family anxiety

Recognizing at-home ABO typing for what it is allows us to de-escalate quickly and prevent harm that has nothing to do with biology. I didn’t know these kits were being marketed. Now I do. And next time, I’ll recognize the problem immediately — not as a mystery of inheritance, but as a reminder that laboratory safeguards are part of the test.
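The Mendelian reasoning at the heart of this case can be sketched in a few lines of Python. This is an illustrative toy, not a clinical tool: the genotype table is an assumption for the example (the father is modeled as heterozygous A2/O, and B alleles and other A genotypes are deliberately omitted).

```python
from itertools import product

# Hypothetical genotypes for illustration only; real individuals can
# carry other combinations (A1/A1, A1/A2, B alleles, etc.).
GENOTYPES = {
    "O":  ("O", "O"),
    "A2": ("A2", "O"),   # an A2 father mistyped as "O" by a forward-only kit
}

def possible_child_phenotypes(father, mother):
    """Enumerate ABO phenotypes a child could inherit (toy model)."""
    phenotypes = set()
    for pat, mat in product(GENOTYPES[father], GENOTYPES[mother]):
        # Any A allele (A1 or A2) is dominant over O
        phenotypes.add("O" if {pat, mat} == {"O"} else "A")
    return phenotypes

print(possible_child_phenotypes("O", "O"))   # two true-O parents: only type O children
print(possible_child_phenotypes("A2", "O"))  # an A2 father can have a type A child
```

The point of the sketch is the second line: once the father’s “O” is replaced by a plausible A2/O genotype, the type A newborn needs no exotic explanation at all.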
- Board Prep: Overview of Donor Infectious Disease Eligibility
Donor infectious disease eligibility is one of those topics that feels straightforward until you’re asked to explain why a donor with negative testing still isn’t eligible — or why some pathogens get NAT, others don’t, and some only get tested once. This post walks through donor eligibility the way the boards expect you to understand it: as a risk-assessment framework, not just a checklist of tests.

What Is Donor Infectious Disease Eligibility?

Donor eligibility is the assessment of a donor’s risk of transmitting infectious disease to a recipient. Its purpose is recipient protection and it is based on two pillars:

- Donor history
- Laboratory testing

This is distinct from donor suitability, which focuses on donor safety (for example, hemoglobin thresholds or procedural tolerance). A donor can be suitable but not eligible — and vice versa.

How Donor Eligibility Is Determined

The Donor History Questionnaire (DHQ)

The DHQ evaluates risks that laboratory testing alone cannot fully capture, including:

- Symptoms of infection
- Behavioral risk factors
- Travel and residence history
- Exposure history (blood, needles, sexual contact)

Testing does not eliminate window-period risk, and emerging pathogens may not yet have validated screening assays. As a result, negative testing does not equal eligibility when exposure risk is recent.

The Window Period (Why History Still Matters)

The window period is the time between infection and when that infection becomes detectable by testing. Even with modern NAT-based screening, window periods still exist. This is why donor history remains a critical component of eligibility determination.

Infectious Disease Screening: The Tests (What We Use and Why)

All allogeneic donors undergo infectious disease screening using serologic testing, nucleic acid testing (NAT), or both. The strategy used for each pathogen reflects its biology: duration of viremia, durability of antibody response, prevalence, and the clinical consequences of transmission.
HIV-1/2

- Serology: HIV-1/2 antibody; HIV-1 p24 antigen
- NAT: HIV-1 RNA
- Window period: NAT 9–11 days; serology 15–20 days
- Notes: Layered NAT and Ag/Ab screening minimizes window-period transmission, making residual transfusion-transmitted HIV risk extremely low. However, there is no licensed HIV-2 NAT test in the U.S., so HIV-2 detection relies entirely on serology.

Hepatitis B Virus (HBV)

- Serology: HBsAg; anti-HBc
- NAT: HBV DNA
- Window period: NAT 20–22 days; serology 30–38 days
- Notes: HBV has the longest residual transfusion risk among routinely screened viral infections due to low-level, intermittent viremia. Triple-layer testing mitigates occult and low-level infection but does not eliminate risk entirely.

Hepatitis C Virus (HCV)

- Serology: Anti-HCV
- NAT: HCV RNA
- Window period: NAT 3–5 days; serology 50–70 days
- Notes: Universal NAT has nearly eliminated window-period HCV transmission. Anti-HCV testing is notorious for false positives, which is why donor re-entry policies matter.

HTLV-I/II

- Serology: Anti-HTLV-1/2
- NAT: Not performed
- Window period: Serology 45–60 days
- Notes: HTLV infection is chronic with a durable antibody response, enabling serology-only screening. HTLV-1 is associated with adult T-cell leukemia/lymphoma. The screening strategy reflects low prevalence, not a short window period.

West Nile Virus (WNV)

- Serology: Not performed
- NAT: WNV RNA
- Window period: NAT 6–10 days
- Notes: Short viremia necessitates NAT-only, seasonally adaptive screening. Serology is not useful for donor screening in acute infection.

Syphilis (Treponema pallidum)

- Serology: Treponemal antibody test; non-treponemal test
- NAT: Not performed
- Window period: Serology 10–30 days
- Notes: T. pallidum survives poorly in refrigerated blood, making transfusion transmission rare but documented. Treponemal antibodies persist long after infection and treatment, which is why syphilis is a key context for donor re-entry.
Trypanosoma cruzi (Chagas Disease)

- Serology: Antibody testing only
- NAT: Not performed
- Window period: Serology 3–8 weeks
- Notes: Chronic infection with durable antibodies enables one-time serologic screening. Transmission is rare but serious and linked to donors with residence in endemic areas.

Babesia

- Serology: Not performed
- NAT: Babesia DNA
- Window period: NAT 7–14 days
- Notes: Persistent asymptomatic parasitemia necessitates regional NAT screening. Babesia is a leading cause of fatal transfusion-transmitted infection in the U.S., with required testing in endemic regions including the Northeast and Upper Midwest.

Deferrals: Temporary vs Indefinite

- Temporary deferrals apply when risk decreases with time
- Indefinite deferrals apply when risk does not meaningfully decrease

Examples include:

- Temporary: recent tattoo, recent exposure, acute illness
- Indefinite: HIV, chronic HBV, HCV, HTLV, vCJD risk

High-Yield Deferral Periods (Boards Love These)

Some deferrals are particularly high yield because they test whether you understand current, risk-based policy rather than outdated rules.

Malaria
- Travel to endemic area (no illness): 3-month deferral
- Residence in endemic area or prior malaria: 2-year deferral

High-Risk Sexual Behavior or Injection Drug Use
- Universal 3-month deferral
- Applies regardless of gender or sexual orientation

Incarceration
- >72 hours: 12-month deferral
- <72 hours: No deferral

Tattoos
- State-licensed facility: No deferral
- Non-state-licensed facility: 3-month deferral

vCJD-Related Risks
- Residence in Great Britain or Europe: No deferral
- Use of bovine growth hormone: Indefinite deferral
- Cadaveric dura mater transplant: Indefinite deferral

Donor Eligibility Potpourri (The Real-World Stuff)

Donor Re-Entry

Donor re-entry allows individuals with false-positive screening tests to become eligible to donate again.
This process:

- Is pathogen-specific
- Is regulated by the FDA
- Requires repeat testing on a subsequent donation

Confirmed infections generally preclude re-entry, with syphilis (after full treatment) being the main exception.

Product Look-Backs

Product look-backs occur when a donor is later found to have a positive infectious disease test. All donations during a defined prior period must be investigated to determine:

- Whether products were transfused
- Whether recipient notification or testing is required

For boards:

- Look-backs are mandated for HIV-1/2 and HCV
- The required look-back period is 12 months prior to the positive test

Special Donor Populations

Directed Donors
- Must meet the same infectious disease eligibility criteria
- No relaxation of standards
- If unused, products may be returned to general inventory

Autologous Donors
- Minimum hemoglobin: >11 g/dL
- Collection must occur >72 hours before surgery
- Requires physician order
- Infectious disease testing may vary by institutional policy
- If unused, products are discarded

Regulatory Oversight: Who Sets the Rules?

FDA
- Establishes and enforces the governing regulations
- 21 CFR 630 governs donor infectious disease testing

AABB
- Interprets FDA regulations into operational standards
- Maintains the Donor History Questionnaire

Understanding who regulates what matters — especially when policies change.

Consolidated Board Pearls

The Basics
- Define the window period → Time between infection and detection
- What does DHQ stand for? → Donor History Questionnaire

Infectious Disease Screening
- Name 3 pathogen classes not directly tested → Most bacteria, most parasites, prion diseases

Screening Tests
- Most common fatal transfusion-associated infection? → Babesia
- HIV-1/2 NAT window period? → 9–11 days
- Virus with longest residual transfusion risk? → HBV

Deferrals
- Universal deferral period for high-risk behavior? → 3 months
- Deferral for incarceration <72 hours? → None
- Deferral for cadaveric dura mater transplant? → Indefinite

Eligibility Potpourri
- Mandated look-back period for HIV? → 12 months
- What is donor re-entry? → Process allowing donors with false-positive tests to donate again
- If a directed unit is unused, must it be discarded? → No, it may enter general inventory
- Board Prep: Introduction to Stem Cell Collection and Transplant
Stem cell collection sits at the intersection of hematology, immunology, and procedural medicine. It’s conceptually simple — collect enough hematopoietic stem cells to reconstitute marrow — but operationally complex, with decisions at every step that affect engraftment, toxicity, and long-term outcomes. This post walks through stem cell collection from a practical, systems-level perspective: what we collect, where it comes from, how we mobilize it, and what determines whether a transplant succeeds.

The Big Picture: What Are We Collecting?

At the center of stem cell transplantation are hematopoietic stem cells (HSCs) — most commonly identified clinically as CD34-positive cells. These cells are capable of:

- Self-renewal
- Differentiation into all mature blood lineages

Clinically, we collect them for three main purposes:

- Autologous transplant, where patients receive their own cells back after myeloablative therapy
- Allogeneic transplant, where donor cells replace a recipient’s marrow
- Marrow rescue following intensive chemotherapy

While multiple sources exist, modern practice overwhelmingly favors peripheral blood collection.

Where Stem Cells Come From

Peripheral Blood

Peripheral blood stem cells are now the dominant source for both autologous and allogeneic transplants. They:

- Yield higher CD34+ cell counts
- Engraft faster than bone marrow
- Are collected via apheresis rather than surgery

The tradeoff, particularly in the allogeneic setting, is a higher risk of graft-versus-host disease (GVHD).

Bone Marrow

Bone marrow harvests are obtained directly from the iliac crests under anesthesia. Compared with peripheral collections, they:

- Require invasive access
- Contain more red blood cell contamination
- Carry higher risk of contamination with skin flora

They are used less frequently but remain relevant in specific clinical contexts.

Cord Blood

Cord blood is largely peripheral to apheresis practice but remains board-relevant. It is:

- Cryopreserved and banked long-term
- More tolerant of HLA mismatch
- Limited by lower total cell dose, sometimes requiring multiple units or ex vivo expansion

Mobilization: Getting Stem Cells Into the Blood

Under normal conditions, hematopoietic progenitor cells reside in the bone marrow niche, where adhesion molecules and chemokine gradients keep them anchored and quiescent. Mobilization disrupts that relationship. Key mechanisms include:

- CXCR4–CXCL12 (SDF-1α) signaling, which tethers stem cells to marrow stroma
- Soluble factors such as stem cell factor
- Proteases and neurotransmitter-mediated signals

The most commonly used mobilizing agent is G-CSF, which indirectly alters the marrow microenvironment and increases circulating CD34+ cells. Plerixafor (AMD3100, Mozobil) works differently: it directly inhibits CXCR4, rapidly releasing stem cells into the peripheral circulation. This is particularly useful in poor mobilizers.

How We Collect Stem Cells

Apheresis

Peripheral blood stem cells are collected via leukapheresis, using continuous-flow cell separators. The procedure:

- Processes large blood volumes
- Uses ACD-A as the anticoagulant
- Selectively collects mononuclear cells enriched for CD34+ cells

This is the most common and operationally efficient collection method.

Bone Marrow Harvest

Bone marrow collection involves multiple passes through skin and cortical bone. Compared with apheresis, it:

- Has higher contamination risk
- Produces products with more RBCs
- Carries procedural risks such as bleeding and post-procedure anemia

How Much Is Enough? Target Cell Dose

Cell dose matters — both for engraftment speed and downstream complications.

Autologous transplant
- Minimum effective dose: ~2 × 10⁶ CD34+ cells/kg
- Optimal dose: 4–6 × 10⁶ CD34+ cells/kg

Allogeneic transplant
- Similar target range
- Higher doses improve engraftment but increase GVHD risk

Collection strategies often balance donor safety, collection efficiency, and the marginal benefit of additional cells.
Complications of Stem Cell Collection

Citrate Toxicity (Most Common)

ACD-A chelates calcium, leading to hypocalcemia. Symptoms range from:

- Perioral tingling and paresthesias
- Tetany
- Cardiac arrhythmias in severe cases

Management includes oral or IV calcium supplementation and slowing the collection rate.

Vascular Access Issues

Central venous catheters carry risks of:

- Infection
- Thrombosis
- Bleeding

Donor-Specific Issues

Allogeneic donors may experience G-CSF-related side effects, most commonly bone pain and headache. Donor safety always takes precedence over collection yield.

Bone Marrow Harvest Complications

These include local site pain, bruising, hematoma formation, and anemia.

Autologous vs Allogeneic Collection: Why the Difference Matters

Autologous transplants avoid GVHD but lack graft-versus-tumor effects. Allogeneic transplants introduce immunologic risk — but also therapeutic benefit. This balance drives donor selection, conditioning regimens, and post-transplant monitoring.

Infectious Disease Testing and Product Handling

All stem cell products require infectious disease screening, including:

- HIV
- HBV
- HCV
- HTLV
- Syphilis

Product handling differs by transplant type:

- Autologous products are typically cryopreserved
- Allogeneic products may be infused fresh or frozen

Cryopreservation Basics

- DMSO is the most common cryoprotectant
- Controlled-rate freezing precisely regulates temperature to prevent intracellular ice crystal formation
- Passive freezing uses insulated containers and −80 °C storage but offers less control

Engraftment: The Endpoints Everyone Cares About

Boards — and clinicians — care deeply about engraftment definitions:

- Neutrophil engraftment: ANC > 500 for 3 consecutive days
- Platelet engraftment: Platelets > 20,000 without transfusion support for 7 days

These metrics anchor post-transplant monitoring and outcome reporting.

Consolidated Board Pearls

Stem Cell Sources
- Which source has the most CD34+ cells? → Peripheral blood
- Highest GVHD risk? → Peripheral blood
- Faster engraftment than marrow? → Yes

Mobilization
- Mechanism of plerixafor? → CXCR4 inhibition
- Most commonly used mobilizing agent? → G-CSF

Collection
- Highest contamination risk with skin commensals? → Bone marrow harvest
- Most common collection method? → Apheresis

Target Dose
- Minimum effective dose? → 2 × 10⁶ CD34+ cells/kg
- Benefit of higher dose? → Faster engraftment
- Risk of higher dose? → GVHD

Apheresis Complications
- Most common anticoagulant? → ACD-A
- Mechanism? → Calcium chelation
- Most common side effect? → Hypocalcemia
- Treatment? → Calcium supplementation
- Most common G-CSF side effect? → Bone pain

Autologous vs Allogeneic
- Risk of allogeneic transplant? → GVHD
- Benefit? → Graft-versus-tumor effect

Product Handling
- Most common cryoprotectant? → DMSO
- Why controlled-rate freezing? → Prevents intracellular ice crystals

Engraftment
- Neutrophils: ANC > 500 for 3 days
- Platelets: >20k without transfusion for 7 days
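The engraftment definitions above are, at bottom, streak-detection rules over daily counts. A minimal sketch, assuming a plain list of daily ANC values (the helper and the sample numbers are my own toy illustration, not a standards-defined algorithm):

```python
def engraftment_day(values, threshold, run_length):
    """Return the index of the first day of the first `run_length`
    consecutive days with a value above `threshold`, or None if the
    streak never occurs."""
    streak = 0
    for day, value in enumerate(values):
        streak = streak + 1 if value > threshold else 0
        if streak == run_length:
            return day - run_length + 1
    return None

# Daily ANC values (cells/uL) post-infusion -- illustrative numbers
anc = [100, 200, 600, 700, 800, 900]
print(engraftment_day(anc, threshold=500, run_length=3))  # -> 2 (day 2 starts the streak)
```

The same helper applies to the platelet criterion by swapping in a 20,000 threshold and a 7-day run, though the real-world definition also requires the absence of transfusion support, which a count list alone cannot capture.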
- Why Low Haptoglobin Isn’t the Smoking Gun We Think It Is
Most of us were taught to think of a low haptoglobin as a red flag for hemolysis. The logic seems airtight: free hemoglobin spills into the plasma, haptoglobin binds it, and the levels drop. End of story… right?

Except it’s not. Clinically, low haptoglobin is one of the least specific markers we use — and in some patients, it tells you absolutely nothing about hemolysis at all. This post is about those patients. I’m talking about the ones with clear plasma, normal LDH, normal indirect bilirubin, and maybe even a reticulocyte count that couldn’t be less interested in hemolysis. And yet: haptoglobin is low or undetectable. So what else can explain it? Let’s walk through the major non-hemolytic causes — the ones that quietly trip up learners and seasoned clinicians alike.

1. Liver Disease: When the Factory Shuts Down

The liver makes haptoglobin. So when the liver is struggling, haptoglobin drops — sometimes dramatically. Patients with cirrhosis often have chronically low haptoglobin levels that normalize after liver transplantation. That’s a pretty clean demonstration that the issue isn’t destruction of haptoglobin, but underproduction. This is why haptoglobin becomes nearly useless for diagnosing hemolysis in anyone with:

- cirrhosis
- advanced fatty liver disease
- hepatitis
- impaired synthetic function of any cause

If the liver can’t make enough haptoglobin in the first place, it can’t drop in response to hemolysis.

2. Genetic Variants: The “Constitutionally Low” Haptoglobin Patient

This is the category that surprises people the most. It turns out that baseline haptoglobin varies widely between individuals, and genetics alone account for nearly half of that variability. A genome-wide association study identified rs2000999 as a major determinant of circulating haptoglobin, explaining 45% of the genetic influence on baseline levels. Another variant, rs12162087, has been linked specifically to constitutionally low haptoglobin — especially in individuals with the homozygous reference genotype (GG). These people may always have low haptoglobin, even in the complete absence of hemolysis. You could check their plasma a hundred times and misdiagnose them every time unless you recognize this pattern.

3. Pregnancy: Physiology Masquerading as Pathology

Pregnancy reshapes the proteomic landscape in ways we don’t always appreciate. Haptoglobin levels drop significantly in pregnancy, especially during the second trimester, and may even become undetectable. By the third trimester, levels often drift back toward normal — another reminder that trimester-specific reference intervals actually matter. A low haptoglobin in a pregnant patient means essentially nothing without a clinical and laboratory context. And yes — you can truly see an undetectable value with no hemolysis at all.

4. Recent Blood Transfusion: A Quiet, Temporary Dip

There are documented cases of undetectable haptoglobin within 12 hours of transfusion even when all other hemolysis markers are completely normal and the plasma is visually clear. This isn’t hemolysis. It’s simply a redistribution phenomenon combined with assay dynamics. The takeaway: a low haptoglobin immediately after transfusion should not be over-interpreted.

5. Malnutrition, Allergic Reactions, and Seizure Disorders

These conditions appear less frequently in textbooks but are well-described contributors to low haptoglobin. Mechanisms differ:

- Malnutrition → impaired hepatic protein synthesis
- Allergic reactions → acute consumption or immune-modulated shifts
- Seizure disorders → transient metabolic changes lowering haptoglobin

They’re not common causes, but they’re real — and they matter when your labs don’t fit the hemolysis story.

6. A Note on Inflammation (The Curveball)

Haptoglobin is an acute-phase reactant, which means inflammation, infection, or malignancy usually increase its levels. But here’s the critical nuance: being an acute-phase reactant doesn’t protect haptoglobin from being depleted in hemolysis. In other words, a patient can have very high haptoglobin from inflammation, and still develop a low haptoglobin if hemolysis is severe enough. Inflammation pushes the baseline up, hemolysis pulls it down — and the net result depends entirely on which force wins. This is why haptoglobin is a good marker in uncomplicated cases and a confusing one in complex ones.

So What Do We Do With a Low Haptoglobin?

We contextualize it. If the plasma is clear, the LDH and bilirubin are normal, and the reticulocyte count is unremarkable, you’re likely not dealing with hemolysis — regardless of what the haptoglobin is doing. Low haptoglobin is a supportive hemolysis marker, not a diagnostic one. And understanding these alternate causes protects us from over-calling hemolysis and chasing ghosts.

Closing Thoughts

Haptoglobin is often taught as a binary test, but its real-world behavior is anything but binary. A low value raises a question — it doesn’t deliver an answer. The more we understand its limitations, the better we become at interpreting the whole clinical picture instead of anchoring on a single number.
- A Practical Guide to Using AI Tools for Literature Searches
AI tools are showing up everywhere in medicine right now — in our inboxes, in meetings, and quietly in the background as we prepare talks or look up unfamiliar territory. Many of us are experimenting with them in real time, often between consults or after a busy clinic day, trying to figure out what they’re actually good at and how to use them without creating extra work. One place where AI can be genuinely helpful is in orienting yourself to a clinical question — especially when you need a quick overview before diving deeper.

Over the past year, I’ve found that pairing AI tools with traditional verification steps has made my own literature searches faster and more organized, while still keeping the process grounded in real evidence. Since many colleagues are exploring these tools too, I thought I’d share the simple workflow I’ve settled into. Nothing here is prescriptive; it’s just what I’ve found useful as a clinician who wants speed and reliability.

What AI Can Do Well (and Why It’s Helpful)

AI can be a surprisingly helpful companion when you’re approaching a clinical topic. It can:

- Summarize large volumes of text quickly
- Highlight themes or connections across papers
- Provide a starting point when you’re approaching a topic you haven’t revisited in a while
- Help double-check that you’re not missing obvious papers
- Turn unstructured information into something more organized

AI isn’t a replacement for reading source papers, but it can make it easier to start with some structure already in place.

My Three-Step Workflow

1. Start With OpenEvidence

OpenEvidence has become my go-to for initial orientation. It’s built specifically for medical literature and has content agreements with NEJM and JAMA, which helps anchor it in reputable sources. What I appreciate most is that every statement comes with a citation, and you can click directly into the underlying study.

Two very practical notes:

- It’s free for medical professionals, which makes it easy to recommend.
- There’s also a mobile app, which is surprisingly handy when you’re on service and need to look something up between cases.

For me, OpenEvidence gives a quick landscape of what has been studied, what hasn’t, and where the evidence feels solid versus sparse.

Website: https://www.openevidence.com/

2. Cross-Check and Structure With Elicit

I don’t use Elicit for every question, but I often reach for it when I’m working on publications, talks, or anything where I need to be comprehensive. Elicit is trained on a broader scientific corpus, which means it sometimes pulls in studies that OpenEvidence misses or adds contextual pieces that help round out the picture. Its real strengths are:

- generating tables from search results
- extracting sample sizes and primary outcomes
- grouping related studies
- summarizing PDFs you upload

If OpenEvidence helps me understand the landscape, Elicit helps me organize and structure that landscape — especially when multiple study designs or subtopics are in play.

Website: https://elicit.com/

3. Verify With a DOI Check (My Favorite 10-Second Step)

Once I’ve identified the key papers, I take the DOI or PubMed ID and paste it directly into Mendeley, which will automatically fetch the citation metadata and abstract. I rely on this step because it confirms that:

- The paper exists
- The metadata is correct
- The journal, year, and authors match
- The abstract aligns with the AI summary

Not all reference managers can fetch metadata from just a DOI or PubMed ID — but Mendeley can, and Mendeley is free, which makes it a great option if you need an accessible verification tool. This small step has saved me more than once from citing a misattributed or nonexistent paper.

A Gentle Note on Limitations

AI tools are still evolving, and so are we. They can miss studies, overstate certainty, or conflate adjacent concepts. That’s not a failure — just a reminder that they’re best used alongside our clinical judgment and our usual habits of checking primary sources.
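For readers who prefer a scriptable version of that verification habit: the public Crossref REST API (api.crossref.org) returns citation metadata for a DOI. A hedged sketch — the field names follow Crossref’s published JSON schema, but the helper functions and the sample record are invented for illustration, and Mendeley does all of this for you without any code:

```python
import json
import urllib.request

def summarize_crossref_record(message):
    """Pull the fields worth eyeballing out of a Crossref 'message' dict."""
    return {
        "title": (message.get("title") or ["(no title)"])[0],
        "journal": (message.get("container-title") or ["(no journal)"])[0],
        "year": (message.get("issued", {}).get("date-parts") or [[None]])[0][0],
        "authors": [a.get("family", "?") for a in message.get("author", [])],
    }

def lookup_doi(doi):
    """Fetch and summarize a DOI from Crossref (requires network access)."""
    with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}") as resp:
        return summarize_crossref_record(json.load(resp)["message"])

# Offline demonstration with an invented record:
sample = {
    "title": ["A Made-Up Paper"],
    "container-title": ["Journal of Examples"],
    "issued": {"date-parts": [[2021, 5]]},
    "author": [{"family": "Doe"}, {"family": "Roe"}],
}
print(summarize_crossref_record(sample))
```

If the lookup fails or the returned title and journal don’t match what the AI tool claimed, that is exactly the misattributed-or-nonexistent-paper signal the manual check is designed to catch.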
For me, the workflow above keeps things balanced: AI helps with speed and structure, and the DOI check keeps everything grounded in reality.

When This Workflow Helps Most

I reach for this system when:

- Preparing for a meeting or protocol discussion
- Refreshing a topic I haven’t touched in a while
- Getting oriented before reading more deeply
- Drafting a talk, manuscript, or background section
- Checking whether references actually exist before citing them

This workflow is flexible: I use it for everything from quick orientation to deeper literature reviews. The steps stay the same; the depth just changes depending on the question. This has become my primary approach to reviewing the literature: it fuses speed with reliability in a way that fits how we practice today. I still use PubMed when I need to dive deeper into a particular thread, but the core workflow starts here.

Closing Thoughts

AI is becoming part of everyday clinical practice, and most of us are learning as we go. My hope is that sharing this workflow helps demystify the process a bit and gives you a reliable and practical starting point if you’re exploring these tools yourself. If you’ve found other strategies or tools that work well for you, I’d genuinely love to hear them — we’re all figuring this out together.
- When Dilution Becomes Dangerous: Why We Don’t Use Depletion Exchange in High-Risk Patients
There are days in Transfusion Medicine when the most interesting teaching moments arrive quietly — between phone calls, in the apheresis unit hallway, or as someone leans back in a rolling chair and says, “Okay, but why can’t we just do a depletion exchange here?”

Today it came up while troubleshooting an inpatient red cell exchange on a sickle cell patient who was a lot sicker than he’d been two weeks earlier. One person suggested adding a depletion phase to improve the efficiency of the run. And that’s when the conversation shifted — away from algorithms and toward physiology, which is where these decisions actually live. Because the truth is simple: depletion exchange works beautifully — until it doesn’t. And the people for whom it can go wrong are exactly the ones who can’t afford a period of reduced oxygen delivery.

What Exactly Is a Depletion Exchange?

Before diving into the “why not,” it’s worth being clear about what a depletion exchange actually is — because the term gets thrown around loosely, and not everyone pictures the same thing. A depletion exchange, also known as isovolemic hemodilution red cell exchange, is a specific variant of automated RCE in which the procedure begins with a hemodilution phase. The sequence looks like this:

1. First, patient RBCs are removed. The device takes off red cells from the circulating volume.
2. Simultaneously, the machine replaces the removed volume with crystalloid or 5% albumin. This maintains volume (isovolemia) but not oxygen-carrying capacity.
3. After the patient’s hematocrit is intentionally lowered, the machine proceeds with the regular red cell exchange phase — removing patient cells and replacing them with donor RBCs.

The rationale is straightforward: by lowering the patient’s starting hematocrit, each donor unit becomes more “effective” at reducing HbS%, so fewer units are needed. It increases efficiency, reduces donor exposure, and improves the geometry of the exchange.
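To make the efficiency argument concrete, here is a deliberately simplified back-of-the-envelope model (my own illustration, not a device algorithm). It treats the exchange as many small, well-mixed swap steps and counts how much donor red cell volume it takes to push the HbS-containing fraction of circulating red cells below a target. All parameter values are invented for illustration.

```python
# Toy model of automated red cell exchange. Blood volume stays constant; each
# small step removes an aliquot of well-mixed whole blood and returns donor
# red cells at the running hematocrit. We track patient (HbS) red cell volume.

def donor_volume_to_target(tbv_ml, hct, target_hbs_frac,
                           predilution_hct=None, step_ml=10.0):
    """Donor red cell volume (mL) needed to reach the HbS fraction target."""
    hbs_rbc = tbv_ml * hct            # patient (HbS) red cell volume, mL
    run_hct = hct

    # Optional depletion phase: remove patient red cells, replace with
    # albumin/crystalloid, lowering the hematocrit before the exchange.
    if predilution_hct is not None:
        hbs_rbc *= predilution_hct / hct
        run_hct = predilution_hct

    donor_rbc = 0.0                   # donor red cells in circulation, mL
    donor_used = 0.0
    while hbs_rbc / (hbs_rbc + donor_rbc) > target_hbs_frac:
        total_rbc = hbs_rbc + donor_rbc
        removed_rbc = step_ml * run_hct
        # Well-mixed aliquot: patient and donor cells leave in proportion.
        hbs_rbc -= removed_rbc * hbs_rbc / total_rbc
        donor_rbc -= removed_rbc * donor_rbc / total_rbc
        # Replace with donor red cells, keeping circulating RBC mass constant.
        donor_rbc += removed_rbc
        donor_used += removed_rbc
    return donor_used

# Invented example: ~5 L total blood volume, Hct 30%, target HbS < 30%.
plain = donor_volume_to_target(5000, 0.30, 0.30)
depleted = donor_volume_to_target(5000, 0.30, 0.30, predilution_hct=0.24)
print(f"donor RBC volume, straight exchange: {plain:.0f} mL")
print(f"donor RBC volume, with depletion:    {depleted:.0f} mL")
```

In this toy model, the depletion run reaches the same HbS target with noticeably less donor red cell volume, which is the geometric advantage described above. Note the built-in simplification: the sketch holds the running hematocrit at the diluted level for the whole run, and that interval of lowered hematocrit is exactly the physiologic cost at issue in the rest of this piece.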
But there is a catch — and it’s the one people forget: you are creating a temporary period of reduced oxygen delivery. Isovolemic ≠ iso-oxygenating. For most stable outpatients, that’s fine. For others, it’s the wrong physiologic bet.

Once you see the mechanics laid out like that — the intentional dip in hematocrit, the temporary thinning of oxygen delivery — the real issue isn’t the technology at all. It’s the patient. And there are certain patients whose physiology simply can’t afford that moment of dilution.

1. Acutely Ill Inpatients: No Physiologic Room to Fall

We see this all the time: the patient with acute chest syndrome. The patient with sepsis layered on top of a pain crisis. The patient who walked in hypoxic and is now teetering at 94% on oxygen. These patients are already running on borrowed reserve. Even a short-lived decrease in hematocrit can widen the gap between “holding steady” and “crashing.” Their tissues are extracting everything they can. Their compensatory mechanisms are maxed. A brief dilution phase risks exactly what we’re trying to prevent: worse perfusion, more ischemia, more instability. So we skip depletion — not because we can’t do it, but because these patients can’t afford the physiologic tax.

2. Pregnancy: Two Patients, One Oxygen Supply

Pregnancy is its own cardiovascular universe — high output, reduced systemic vascular resistance, compressed venous return, and a placenta that is exquisitely sensitive to maternal perfusion changes. The math is simple: lower maternal Hct → lower uteroplacental oxygen delivery. Even transiently. Even “just during the depletion phase.” And when oxygen delivery falters, the fetus feels it first. In the interest of safety, we do not perform depletion exchanges in pregnant patients.

3. Cardiac Patients: Tightly Balanced at Baseline

Then there are the patients with cardiac histories — and the patients with cardiac histories they don’t know they have yet.
In cardiology, the pendulum has swung back toward liberal transfusion strategies for acute coronary syndromes, with several recent studies showing improved outcomes when hemoglobin is kept closer to 10 g/dL rather than drifting down into the restrictive ranges. The reason is simple and intuitive: the ischemic myocardium hates anemia. Coronary perfusion is already limited; oxygen extraction is already maxed. Any additional dip in oxygen-carrying capacity — even brief — can worsen supply–demand mismatch.

And that’s the core problem with depletion exchange in this population. The machine keeps the volume steady, yes — but it cannot shield the myocardium from the temporary but real drop in perfusion during the dilution phase. It’s a moment the heart has no margin to absorb. So for these patients, we choose safety. Exchange only. Slow and steady.

So Why Do We Do Depletion at All?

Because when it’s safe — in stable outpatients without physiologic red flags — it is useful. It can make the exchange more efficient. It can reduce donor exposure. It can improve the final HbS% with fewer units. But the moment someone is acutely ill, pregnant, or carrying cardiac risk, those advantages don’t justify even a temporary hit to oxygen delivery.

Apheresis Isn’t Just a Machine. It’s Physiology.

That was really the take-home from today’s conversation. Our protocols can get so algorithmic that it’s easy to forget the body isn’t following the same neat logic tree. There’s a human being on the other side of the circuit — one who may be running out of compensatory room. So when we pick the exchange modality, we aren’t just choosing a setting on the instrument. We’re declaring what we think the patient can physiologically tolerate. For some, dilution is a gift. For others, it’s a risk not worth taking. And the art — the part that never shows up in the software — is knowing the difference.
- AI as a Second Reader, Not a Second Brain: What We’re Getting Wrong in Pathology AI Adoption
Introduction: The Problem With the “Second Brain” Metaphor

Artificial intelligence in pathology and laboratory medicine is often marketed with an irresistible promise: a second brain that will spot what humans miss, automate the tedious parts of practice, and bring order to the overwhelming volume of data moving through modern health systems. It’s a compelling metaphor — but also a deeply misleading one.

The truth is simpler and far more useful: most AI tools in lab medicine today are not second brains. They are second readers. They assist. They triage. They flag patterns. They highlight outliers. They nudge clinicians toward questions worth asking. This is not a limitation — it is the sweet spot of responsible AI.

The problem is that our metaphors, expectations, and sometimes our implementation strategies haven’t caught up with this reality. When we treat assistive AI as if it were autonomous, we misjudge both its power and its risks. This piece reframes AI in pathology and transfusion medicine through a more grounded, clinically realistic lens: AI as a second reader — never the primary decision-maker.

Assistive vs Autonomous AI: Why the Distinction Matters

In public conversations, “AI” tends to be treated as a single monolithic category. But in clinical practice, the distinction between assistive and autonomous systems is foundational.

Assistive AI

Assistive AI tools support human decision-making without replacing it. They:

- flag abnormal cells or slide regions for review,
- surface unusual utilization patterns,
- predict inventory needs,
- identify potential bleeding risks or outlier transfusion practices,
- augment quality control workflows.

The human remains the final decision-maker. The AI’s role is advisory.

Autonomous AI

Autonomous AI, by contrast, can issue a clinical interpretation without human confirmation. The classic example is FDA-cleared autonomous diabetic retinopathy screening, where the system renders a result independently.
Pathology is not there — and ethically, operationally, and scientifically, it shouldn’t aspire to be. Tissue interpretation, pre-analytic variability, complex clinical context, and downstream consequences place pathology squarely in the domain of human-in-the-loop practice.

Moreover, the limitations of autonomous AI make full automation particularly risky in this field. Even state-of-the-art large models exhibit irreducible error rates, including hallucinations that arise not from software bugs but from the fundamental way probabilistic systems generate outputs. OpenAI and other major developers have acknowledged that hallucinations are inevitable in current-generation AI — an acceptable risk for drafting emails, but not for diagnosing malignancy. In pathology, an autonomous error is not a benign failure mode; it is a misdiagnosis. Slides vary between institutions, stains differ, scanners introduce artifacts, and rare entities can be misclassified with absolute confidence. The model does not know when it does not know. Human-in-the-loop practice is therefore not a philosophical preference but a safety requirement.

Current professional sentiment reflects this: most pathologists are cautiously optimistic about assistive AI but deeply wary of autonomous systems. The field understands that algorithms can elevate quality and efficiency, but they cannot — and should not — bear sole responsibility for interpreting tissue, integrating clinical nuance, or adjudicating uncertainty.

Why the Distinction Matters

Marketing narratives blur the line between assistive and autonomous. Operationally, this creates two dangerous extremes:

- over-trust: assuming the model “knows” more than it does,
- under-trust: dismissing or ignoring helpful signals because expectations were unreasonable.

Treating AI as a second reader helps calibrate our expectations and clarifies the respective responsibilities of humans and machines.
Workflow, Not Math: The Hidden Barriers to Clinical Integration

Technical performance is rarely the limiting factor for AI deployment in lab medicine. More often, the barriers are operational and workflow-driven.

Pre-analytic variability

No algorithm, however elegant, can overcome poor input. Hemolysis, mislabeled samples, incomplete clinical information, and inconsistent sample handling all degrade model performance. “Garbage in, garbage out” is not cynicism; it is clinical reality.

LIS/EMR integration

An AI flag that never reaches the transfusion physician or technologist in a usable format is functionally irrelevant. Many promising tools fail not because they are inaccurate, but because they exist outside the everyday workflow.

Alert fatigue

If an AI model surfaces insights the same way EMR pop-ups surface medication alerts, clinicians will click through them reflexively. Effective AI must blend into the workflow — not interrupt it.

Staff training

AI disagreement is a liminal space. When a model flags an unexpected pattern, what is the technologist supposed to do? Without clear protocols, the burden on staff increases rather than decreases.

Model stewardship

Who revalidates the model yearly? Who monitors drift? Who owns threshold adjustments? Governance is critical and cannot be an afterthought.

These challenges are not exciting, but they are what determine whether an AI tool genuinely helps clinicians — or becomes abandoned.

The Hype Cycle Problem

AI in medicine moves in predictable hype cycles. When expectations are unrealistic, three harms follow:

1. Overpromising leads to disillusionment. When leadership expects instant automation, disappointment is inevitable. This can poison the well for future tools that are more modest but more practical.

2. Steps get skipped. Proper change management, validation, and staff training take time. Under the pressure of hype, institutions try to “roll out” tools before anyone understands how to use them.

3. Trust becomes polarized. Some clinicians embrace AI uncritically. Others reject it entirely. Neither posture produces safe patient care.

Reframing AI as a second reader helps temper the hype and brings expectations back into alignment with clinical workflow and real-world constraints.

What Safe, Responsible AI Actually Looks Like

Clear intended use

Every AI tool must answer one question precisely: What is the intended use? Ambiguous purpose leads to ambiguous outcomes.

Human-in-the-loop structure

High-impact clinical decisions — transfusion thresholds, rejection of critical values, or product allocation — should never be automated fully. AI highlights patterns; humans interpret them.

Local validation

Models must be calibrated to local population characteristics, including major demographic differences, high-obesity populations, rare disease prevalence, and unique practice patterns.

Ongoing monitoring

Performance changes over time. Drift is real. Monitoring is not optional.

Defined failure modes

Clinicians need clarity: When should I ignore this model? Understanding limits is as important as understanding utility.

Explainability (pragmatic, not academic)

Technologists and clinicians need broad insight into why a model fires — high-level logic is sufficient. Full algorithmic transparency is not required.

Together, these guardrails ensure that AI functions as a clinically meaningful assistant, not an unpredictable black box.

A Transfusion Medicine Lens: Where AI Actually Delivers Value

Transfusion medicine offers a prime example of how AI should function in practice: as a second reader that enhances safety and efficiency.

Utilization and stewardship

AI can identify patterns of overuse or underuse, highlight outlier ordering habits, or flag cases where restrictive thresholds are inconsistently applied. But humans — transfusion physicians, technologists, PBM programs — interpret and respond to these patterns.
Inventory and product management

Platelet forecasting, rare phenotype prediction, and resource allocation are well-suited to assistive AI. The model surfaces the signal; the human makes the plan.

Risk prediction

Predictive models for bleeding, DHTR risk, TRALI likelihood, or massive transfusion activation can bring subtle risk factors to the surface. They augment human judgment but do not replace it.

These examples demonstrate the core argument of this piece: AI helps most when it supports human cognition without competing with it.

Conclusion: Getting the Metaphor Right

AI in pathology and laboratory medicine is not a second brain — and expecting it to be one sets everyone up for failure. It is a second reader. A pattern spotter. A triage assistant. A flagger of outliers. A partner in safety and quality.

When we ground AI in its true purpose, we can finally deploy it in ways that are meaningful, safe, and sustainable. The challenge is not to automate pathology or transfusion medicine, but to integrate AI into workflows as a thoughtful collaborator. The future of AI in the laboratory will belong to the institutions and clinicians that understand this distinction: useful AI is not autonomous. It is assistive — and that is exactly where it belongs.
- Medicine’s Favorite Misdiagnosis: The Difficult Patient
I’ve been thinking a lot about attribution bias lately — the reflex to explain someone’s behavior by pointing to their character instead of their circumstances. In medicine, this isn’t just a cognitive shortcut. It’s one of our favorite misdiagnoses, and it often shows up in the form of a single, damning label: the difficult patient. Two encounters from my own practice keep coming back to me.

1. “The Meanest Person I’ve Ever Met.”

That was the handoff. The wife was “the meanest person I’ve ever met.” “Confrontational.” “Always angry.” “Impossible to deal with.” This is a common setup: a pre-labeled human wrapped in warning tape, delivered with the expectation that I will treat her like a hazard.

But her husband was lying in an ICU bed because of a botched procedure that left him paralyzed. She was navigating trauma, grief, and a system that — because of medicolegal anxieties — had decided to keep her at arm’s length and speak around her instead of to her. Every door she knocked on had a sign that said You may enter; we will not tell you anything of substance.

When I met her, I didn’t find the “meanest person.” I found a woman trying to save what was left of her life. She wasn’t hostile. She was frantic. She wasn’t aggressive. She was afraid. She wasn’t difficult. She was drowning. Nothing about her behavior was surprising once you considered the situation.

2. “Behavioral Issues” in an Incarcerated Patient

The second case came wrapped in a different set of labels: “behavioral issues,” “noncompliant,” “gets angry,” “hard to talk to.” An incarcerated Black man with sickle cell disease — a combination that, in the hospital, often guarantees dehumanization from the start. He’d been spoken to over the shoulder, not face-to-face. Guards in the doorway. Clinicians darting in and out, clipboards between them and him. A patient assessed through a frame of suspicion before a single word was exchanged. So I did something radical in its simplicity: I sat down.
I looked him in the eyes. I spent twenty minutes listening. He was delightful. Honest. Funny. Thoughtful. A person. The “behavioral issues” vanished the moment the assumptions did.

The Failure Isn’t the Patient. It’s the Attribution.

This is attribution bias at its most damaging: mistaking trauma for personality, mistaking fear for hostility, mistaking systemic failure for individual flaw. In medicine, we love tidy trait-based stories — she’s mean, he’s manipulative, they’re noncompliant — because traits feel permanent. Predictable. Containable. But traits are the least accurate predictors of behavior, especially in crisis. Most human behavior is not driven by character. It is driven by emotional state. It’s driven by:

- fear
- pain
- powerlessness
- being unheard
- being dismissed
- being rushed
- being judged
- being visibly feared
- feeling unsafe

All of these are situational. All of them are correctable. None of them are personality traits.

Fixed Mindset Medicine vs. Growth Mindset Humanity

Attribution bias is rooted in a fixed mindset: the belief that people behave the way they do because of unchangeable internal qualities. But humans are not static. Neuroscience makes this painfully clear. Our prefrontal cortex — the part that lets us reason, regulate, pause, and plan — is resource-hungry and fragile. When people are in crisis, their frontal lobes go offline and their limbic systems take the wheel. They become emotional, reactive, short-fused, protective. Not because they’re “bad” or “difficult,” but because they’re human.

In other words: behavior = situation × current state × available resources (not “behavior = personality”).

A growth mindset — the belief that behavior is modifiable and context-dependent — is not a soft, feel-good philosophy. It’s a neuroscientific reality. It also makes us better clinicians.

The Stories We Tell Shape the Medicine We Practice

Once we label someone as “difficult,” we stop asking the essential questions: What happened to them?
What are they afraid of? What do they need to feel safe? What system-level failures are shaping this interaction? And maybe the hardest one: Who would I like to be in their situation? That question alone could dismantle half of the attribution bias in our hospitals.

The Truth Behind Most “Difficult Patients”

If I’ve learned anything, it’s this: patients are almost never difficult because of who they are. They are difficult because of what they’re going through — and because of how the system is treating them.

Change the situation, and the behavior changes. Change the framing, and the person emerges. Change how we show up, and the whole encounter transforms. We don’t have “difficult patients.” We have difficult circumstances — and patients doing their best within them.
- Plasma Chasers and the Quiet Rituals of Apheresis
Two different patients. Two plasma exchange treatments. Two nurses asking me, gently and matter-of-factly, the same question: “Do you want to chase with some plasma?”

Before becoming an attending, I had never heard the phrase. It wasn’t part of residency, fellowship, ASFA courses, or any protocol I’d ever followed. It certainly isn’t in textbooks. The first time someone asked me, I wondered whether this was a regional term or a long-standing tradition I’d somehow missed. What I’ve realized is that “plasma chasers” aren’t a formal practice at all — they’re a local solution to a real physiologic concern, passed down through experience rather than evidence. And once I understood that, the whole thing made much more sense.

Two Encounters, Two Decisions

The first patient had an endomyocardial biopsy two days before their TPE. That felt like a clear “yes.” Even a small pericardial bleed can turn into tamponade quickly, and albumin-only exchanges temporarily lower fibrinogen in exactly the wrong moment. The second patient had a chest-tube exchange two days prior. Compressible, external, and not associated with catastrophic rebleeding after 48 hours. That one was a comfortable “no.” Both decisions felt reasonable. But afterward, I found myself thinking about why the question exists in the first place — and why different centers use different rituals to manage uncertainty.

Ambiguity Breeds Ritual

During my PhD years, I saw how easily small rituals form in the lab. The postdoc who said you had to swirl counter-clockwise for best DNA yields. The technician who swore PCR only worked if she spun down tubes twice. The graduate student who insisted cells behaved better if passaged on Tuesdays. None of these traditions were harmful. They were simply the human response to complex systems with hidden variables. When outcomes are unpredictable and stakes feel high, it’s natural to reach for anything that offers a sense of control. Clinical medicine is no different.
Apheresis has many moving parts, physiology we can’t always observe directly, and very little high-quality evidence for the fine details of practice. It’s not surprising that different institutions develop their own habits — some sound, some questionable, some simply inherited. “Plasma chasers” live right in that space.

What the Survey Data Actually Tell Us

Before I wrote this, I went looking for anything peer-reviewed about plasma chasers specifically. There isn’t anything — not a single survey or guideline entry. But there is a published ASFA-linked survey (Zantek et al., J Clin Apher 2018) about hemostasis management and replacement fluid decisions. And the results were eye-opening:

- When a patient had major surgery just one day earlier, 8.9% of respondents still used albumin-only replacement, a much higher percentage than I expected.
- For minor procedures one day prior, 49.5% used albumin-only, and 50.5% included some or all plasma. That’s about a 50/50 split.
- For a patient scenario with no bleeding risk, 94.7% used albumin-only, which is still short of the 100% I expected.

To me, that’s fascinating. It shows how inconsistent — and how intuitive — these decisions really are. Clinicians are already making judgment calls about post-procedure bleeding risk every day, even without formal algorithms. Plasma chasers are simply a more granular version of that same instinct: Does this patient need some factors right now? Could a small bleed matter?

The Framework That Actually Makes Sense

When I strip away the inherited rituals, peer pressure, institutional memory, and “this is how we do it here,” the physiologic picture becomes surprisingly straightforward.

Use a plasma chaser when:

- The patient had an endomyocardial biopsy < 72 hours ago.
- The patient had a renal biopsy < 72 hours ago.
- There is a fresh injury in a space where even a small bleed can be dangerous before it becomes obvious.

These are the scenarios where a little post-exchange factor support truly makes sense.
Consider partial FFP replacement when:

- A patient has severe allergic reactions to plasma but still needs some factor replacement.
- A patient is highly citrate-sensitive, and full FFP carries risks.

Use full FFP replacement when:

- The indication is TTP.
- There is active or recent major bleeding.
- There is a high-risk coagulopathy.

Use albumin-only when:

- A small bleed won’t be catastrophic, such as with chest tubes, lumbar punctures, and GI biopsies.
- There’s no compelling reason for factor support.

This framework isn’t mystical. It isn’t ritualistic. It’s just physiology, risk, and common sense.

Why I’m Writing About This

I’m not criticizing the practice of plasma chasers. In many ways, I admire the quiet wisdom embedded in these unofficial patterns of care. They represent clinicians trying to do the safest thing for their patients in a landscape where evidence is incomplete. But I also believe there’s value in naming the uncertainty, reflecting on it, and disentangling ritual from reasoning. I don’t think we talk enough about the gray zones in our specialty — the places where we make decisions based on physiology, pattern recognition, and a little bit of fear of the worst-case scenario. And I think there’s a kind of comfort in acknowledging that these instincts come from somewhere real. Because in the end, apheresis is full of places where the science is incomplete, and the art of medicine steps in — not as hoodoo, but as thoughtful, experience-guided care.
- When TACO Runs Hot: Rethinking Fever in Transfusion-Associated Circulatory Overload
For years, transfusion-associated circulatory overload (TACO) has been framed as a purely hemodynamic problem — a case of too much blood, too fast. But hemovigilance data are challenging that simplicity. A growing body of work suggests that in a significant subset of patients, TACO runs hot.

Yes, fever. Not chills from contamination, not cytokine-release fever from a leukocyte-rich product, but true fever within hours of transfusion — sometimes the only obvious clue that something is wrong. And it’s not rare: recent studies suggest that 30–40% of TACO cases involve fever, a rate higher than for allergic transfusion reactions with fever. [1–3]

Beyond Volume: A Hotter Kind of Overload

Classically, TACO is defined by acute respiratory distress and hydrostatic pulmonary edema within six to twelve hours after transfusion. But the presence of fever doesn’t fit that simple model of mechanical overload. Research by Parmar et al. (2017) and others shows that these fevers aren’t linked to patient age, product age, or reaction severity — and in most cases they’re new-onset, not continuations of pre-existing fever. [1] Together with bedside biovigilance data showing inflammatory features in some TACO cases, [2–4] this has led to a re-imagining of the syndrome: TACO may be part hemodynamic, part inflammatory.

The Two-Hit Hypothesis: Volume Meets Inflammation

The two-hit hypothesis of “inflammatory TACO” frames the reaction as a meeting of two vulnerabilities:

- First hit: a susceptible patient — one with heart failure, renal disease, positive fluid balance, or critical illness that limits their ability to tolerate volume.
- Second hit: the transfusion itself, delivering not only volume but also biologically active mediators — cytokines, storage-lesion byproducts, and shifts in colloid osmotic pressure.

This combination may tip the endothelium into dysfunction, increasing capillary permeability and producing pulmonary edema beyond what simple volume overload would explain.
It also helps account for “hot-TACO” cases after even a single unit of blood. [2–4]

Clinical Confusion: When “Hot-TACO” Mimics TRALI

Fever blurs the lines. In a febrile, hypoxic patient post-transfusion, most clinicians first suspect TRALI or sepsis. Yet as multiple studies and the revised international case definition emphasize, [1, 3, 5] the presence of fever doesn’t exclude TACO. If there are clear hydrostatic findings — positive fluid balance, elevated BNP or NT-proBNP, echocardiographic evidence of elevated filling pressures, or improvement with diuretics — TACO should remain high on the list even when fever is present.

Diagnostic Pearls: Sorting the Hot from the Heavy

When TACO and TRALI overlap, these clues help steer the differential:

🕒 Timing: TACO usually develops within 6 hours, but may be delayed up to 12. TRALI is classically within 6 hours and not relieved by diuretics.

💧 Volume response: Improvement with diuretics or fluid restriction supports TACO.

❤️ BNP / NT-proBNP: Ratios > 1.5–2× pre-transfusion favor hydrostatic overload.

🫁 Chest imaging: TACO shows cardiomegaly and vascular redistribution; TRALI typically presents with bilateral non-cardiogenic infiltrates.

🧪 Inflammatory markers: Fever alone doesn’t rule out TACO, but a marked cytokine surge (e.g., IL-8, IL-6) suggests TRALI or sepsis.

Ultimately, distinguishing hot-TACO from other febrile transfusion reactions depends on pattern recognition rather than a single test. The key is to remember that not all TACO is “cold.” Sometimes, the circuit overload burns a little.

References

1. Parmar N et al. Vox Sanguinis. 2017;112(1):70-78.
2. Andrzejewski C et al. Transfusion. 2012;52(11):2310-20.
3. Wiersum-Osselton JC et al. Lancet Haematology. 2019;6(7):e350-e358.
4. Bulle EB et al. Blood Reviews. 2022;52:100891.
5. Delaney M et al. Lancet. 2016;388(10061):2825-2836.
- When the Textbook Walks Through the Door: IgA Deficiency and Transfusion Practice
A patient was admitted with a congestive heart failure exacerbation. Their hemoglobin was drifting downward — nothing dramatic, but enough to warrant a type and screen. The result wasn’t surprising: a known warm autoantibody. What was surprising was the note that popped up beside it — “Requires washed RBCs.”

We looked into it. The patient’s IgA level was reported as < 5 mg/dL on two separate occasions — a true, complete selective IgA deficiency. No history of anaphylactic reactions, no documentation of transfusion reactions at all. Still, the washed requirement persisted, a permanent flag carried forward through admissions like a family heirloom no one quite questioned.

The Spectrum of IgA Deficiency

Selective IgA deficiency is the most common primary immunodeficiency, occurring in roughly 1 in 300 people, though the term encompasses a spectrum. Many individuals have low but detectable levels of IgA and remain entirely asymptomatic. A complete deficiency — defined by an undetectable IgA level on at least two separate occasions — is far less common. (This definition is used by the European Society for Immunodeficiencies and the Immune Deficiency Foundation.) Only a fraction of these individuals go on to form anti-IgA antibodies, which have been implicated in allergic or anaphylactic transfusion reactions.

The Rare Meets the Real

The classic teaching looms large in every pathology and transfusion board prep book: the IgA-deficient patient who develops life-threatening anaphylaxis after receiving a standard blood component. But outside the exam room, this scenario is exceedingly rare. The true incidence of anti-IgA–mediated anaphylaxis is unknown and appears extremely low. The literature contains only a handful of case reports and small series describing such reactions, mostly in patients with severe IgA deficiency and detectable anti-IgA antibodies [1–4].
A comprehensive review identified just 23 cases of anaphylaxis in immunodeficient patients receiving IVIG over several decades [2]. Even among those with measurable anti-IgA, many tolerate blood products and immunoglobulin infusions without incident [1]. The association between anti-IgA antibodies and anaphylaxis remains controversial — suggesting that other, still-uncharacterized modulators of immune reactivity may determine who reacts and who does not. Larger studies are needed to clarify the true risk and mechanisms [1].

In short: the event is exceptional in clinical practice. And for our particular patient — elderly, volume-sensitive, admitted for heart failure — the most likely transfusion complication would not be anaphylaxis at all, but TACO. The same physiology that brought them into the hospital also raises their risk for fluid overload if transfused.

Re-examining the “Requires Washed RBCs” Reflex

So where does that leave us? With vigilance, yes — but also with perspective. The patient’s risk for anaphylaxis appears theoretical, not demonstrated. Yet the “washed RBCs” flag carries real-world costs: longer wait times, product scarcity, and potential delays in care. We decided to order an anti-IgA assay to see whether we could safely lift the restriction — a small act of course-correction that might spare the patient unnecessary complexity in future transfusions. Because sometimes the best transfusion practice isn’t about adding more caveats. It’s about knowing which ones no longer serve the patient.

References

1. Rachid R, Bonilla FA. The Journal of Allergy and Clinical Immunology. 2012;129(3):628-34.
2. Williams SJ, Gupta S. Archivum Immunologiae et Therapiae Experimentalis. 2017;65(1):11-19.
3. Salama A et al. Transfusion. 2004;44(4):509-11.
4. Ahrens N et al. Clinical and Experimental Immunology. 2008;151(3):455-8.











