
  • Jehovah's Witnesses and Blood: The Guidance Changed. The Complexity Didn't.

    On March 20, 2026, the Governing Body of Jehovah's Witnesses issued Governing Body Update #2. In a video address, member Gerrit Lösch announced that members may now decide for themselves whether to have their own blood drawn, stored, and later reinfused during medical or surgical care. The prohibition on allogeneic transfusion — receiving blood from another person — remains firmly in place. But preoperative autologous deposit, long explicitly forbidden, has been moved into the "personal conscience" category. The theological rationale was concise: "The Bible does not comment on the use of a person's own blood in medical and surgical care."

    I've been thinking about this a lot since it dropped. Not just as a news item, but as a transfusion medicine physician who has spent years navigating the clinical and ethical complexity that Jehovah's Witness patients bring to the blood bank. This policy shift is significant. It's also worth understanding clearly — because the coverage so far has been long on theological analysis and short on what any of this actually means from where I sit.

    What We're Talking About, Clinically

    Preoperative autologous donation (PAD) is exactly what it sounds like. A patient donates their own blood — typically between six weeks and five days before a scheduled surgery — which is processed and stored at a blood bank or hospital transfusion service. If transfusion becomes necessary during or after the procedure, the patient receives their own blood back. If it isn't needed, the unit is discarded.

    PAD is not a new technique. It's been around for decades. Its advantages are real: no risk of alloimmunization, no risk of transfusion-transmitted infection, lower likelihood of immune-mediated transfusion reactions. Its drawbacks are also real: preoperative phlebotomy can induce or worsen anemia, and the blood still requires the same processing and storage infrastructure as allogeneic donations. It is not a casual or universally available option.
    More on that in a moment.

    The Conscience Zone Was Already a Patchwork

    Here's what I find genuinely fascinating about this update: it's being covered like a dramatic reversal, but the conscience zone was already wide before March 20th. "Conscience zone" is my shorthand for the category of practices the Watch Tower Society has long designated as individual decisions — neither mandated nor prohibited, left to each member to resolve according to their own beliefs. Intraoperative cell salvage, acute normovolemic hemodilution, cardiopulmonary bypass, dialysis, epidural blood patches — all individual-decision items for years. The zone is wider now. But it was already wide.

    More importantly: the official doctrine has never fully captured what actually happens in clinical practice. I've cared for Jehovah's Witness patients who would accept platelets. I've worked with patients who would accept directed donations from members of their own congregation. I've seen patients draw their own lines in places the official guidance didn't put them — navigating their faith and their medical situation in ways that were entirely their own. Jehovah's Witness patient care has always been variable, because the patients are people, not policy documents.

    What this update does is formalize something that experienced clinicians already knew: there is no single answer to "what will my Jehovah's Witness patient accept?" There never was. The conscience zone just got wider, which means the conversation at the bedside just got more important.

    What This Means for the Blood Bank

    So what actually changes operationally? Potentially quite a bit — for patients who want to pursue PAD and have access to it. Blood banks that offer autologous donation programs will need to be prepared for Jehovah's Witness patients presenting for preoperative collection. This isn't a simple extension of existing workflows. Autologous units carry specific labeling and storage-handling requirements.
    There are consent considerations unique to this population — patients will need clear information about the anemia risk, the storage logistics, and the fact that unused units are discarded rather than entering the general blood supply. For some Jehovah's Witness patients, that last point may matter doctrinally.

    Surgeons and anesthesiologists planning cases involving Jehovah's Witness patients will need to update their conversations. The reflexive assumption that a Jehovah's Witness patient will decline all banked blood products is no longer accurate. These patients may now arrive at the OR with autologous units available — but only if someone asked, offered, and made the referral in time. The window for PAD is finite. A patient referred for major elective surgery with only a two-week lead time has little room left to use this option.

    And that's before we get to the institutional side. Not every hospital has an autologous donation program. Not every blood bank has the capacity or infrastructure. The patients most likely to benefit are those undergoing planned, elective procedures at well-resourced academic medical centers — which is not the only place Jehovah's Witness patients receive surgical care.

    The Practical Limits of Personal Conscience

    This is where I want to pump the brakes on the more celebratory takes I've seen. The framing of this update — each Christian must decide for themselves — positions the change as an expansion of individual autonomy. And in a doctrinal sense, it is. But autonomy without access isn't really autonomy.

    Jehovah's Witnesses number approximately 9.2 million worldwide, across more than 200 countries. The infrastructure to support preoperative autologous donation does not exist uniformly across those settings. In much of the world, the option the Governing Body has now made permissible is simply not available. The theological door has opened, but the operational corridor behind it is narrow and unevenly distributed.
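The timing constraint is concrete enough to sketch. The helper below is hypothetical; the six-week and five-day cutoffs are the typical interval described above, and actual eligibility windows are set by each collection program.

```python
from datetime import date, timedelta

# Illustrative cutoffs only: the six-week / five-day interval is the
# typical PAD collection window described in the text; real programs
# set their own limits (unit expiration, donor recovery, Hgb criteria).
PAD_EARLIEST = timedelta(weeks=6)  # units drawn earlier may expire
PAD_LATEST = timedelta(days=5)     # too close to surgery to recover

def pad_window(surgery_date: date) -> tuple[date, date]:
    """Return the (earliest, latest) calendar dates for PAD collection."""
    return surgery_date - PAD_EARLIEST, surgery_date - PAD_LATEST

def pad_feasible(referral_date: date, surgery_date: date) -> bool:
    """A referral can still use PAD only if it lands on or before the
    latest allowable draw date."""
    _, latest = pad_window(surgery_date)
    return referral_date <= latest

surgery = date(2026, 6, 1)
print(pad_window(surgery))                        # collection window
print(pad_feasible(date(2026, 5, 28), surgery))   # 4 days out: too late
```

A two-week lead time technically clears the five-day cutoff, but leaves room for at most a unit or two — which is the operational point the paragraph above is making.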
    There's also the question of social pressure, which former members have been vocal about. The update frames this as conscience — but conscience operates inside a community. The Watch Tower Society has a long history of framing individual decisions within a framework of spiritual accountability. Moving something to the "personal decision" category is not the same as removing the social weight attached to that decision. A patient who now technically may accept PAD is making that choice in a social and ecclesiastical context that still shapes what choices feel available.

    That's not a reason to dismiss the update. It matters that the prohibition has been lifted. But clinical teams working with Jehovah's Witness patients should not assume that "it's now allowed" translates automatically into "patients will feel free to accept it." The conversation still requires care, privacy, and time.

    Where This Leaves Us

    The transfusion medicine community has spent decades developing expertise in bloodless surgical programs, autologous techniques, and the clinical and ethical navigation of Jehovah's Witness patient care. That expertise doesn't become less relevant now — if anything, it becomes more so. What this update requires from us is updated fluency: knowing what changed, understanding the practical and doctrinal distinctions that remain, and meeting patients where they actually are rather than where the policy says they could be. The conscience zone just got wider. Our job is to help patients navigate it — without assuming the map is simpler than it is.

  • A Primer on Hereditary Hemochromatosis for the Overworked Fellow

    I was reviewing charts on the hemochromatosis protocol during my transfusion medicine fellowship when I came across a patient with iron overload severe enough to require ongoing therapeutic phlebotomy — and a completely wild-type HFE panel. No C282Y. No H63D. No S65C. Just normal.

    I had just finished writing the service guide, which included a brief section on HFE alleles and genotypes. I had written a sentence about this exact scenario: “Occasionally you will see patients with iron overload and a WT HFE locus. This probably means they have another type of HH.” I had written that sentence and moved on. I had no idea what it actually meant. So I went down the rabbit hole. What I found reframed everything I thought I knew about hemochromatosis — and I think it’ll do the same for you.

    Hemochromatosis Is a Hepcidin Story

    Here is the reframe: hereditary hemochromatosis is not, at its core, a story about HFE. It’s a story about hepcidin. Hepcidin is a small peptide produced by hepatocytes, and it is the master regulator of iron homeostasis. The mechanism is elegant. Hepcidin binds to ferroportin — the only known iron exporter in the human body — and tags it for internalization and degradation. When hepcidin is high, ferroportin disappears from the cell surface. Iron stays trapped inside enterocytes, macrophages, and hepatocytes. When hepcidin is low, ferroportin is abundant. The gut absorbs iron without restraint.

    In hereditary hemochromatosis, regardless of the gene involved, the unifying pathophysiology is hepcidin deficiency relative to iron burden. The iron accumulates because the hormone that should be putting the brakes on iron absorption isn’t doing its job. HFE is not hepcidin. HFE is one of several upstream signals that tell the liver to make hepcidin in the first place. And that distinction explains everything.

    The Sensing Circuit

    Think of hepcidin production as the output of a sensing circuit. The liver is constantly asking: how much iron is out there?
    The answer comes from multiple inputs, and several proteins are involved in integrating those signals. HFE, transferrin receptor 2 (TFR2), and hemojuvelin (HJV) all participate in sensing transferrin saturation and stimulating hepcidin expression. HJV acts as a BMP co-receptor, and both HFE and TFR2 modulate downstream BMP/SMAD signaling. Mutations in any of them produce the same functional consequence: the liver underestimates iron burden, hepcidin production is insufficient, and ferroportin runs unchecked.

    HAMP is the gene that encodes hepcidin itself. Mutations here skip the sensing problem entirely — you’re not impairing the signal circuit, you’re eliminating the signal. SLC40A1 encodes ferroportin. Mutations here operate at the other end of the pathway entirely, at the effector rather than the sensor. And as we’ll get to, ferroportin disease is its own special category.

    The Four Types, and Why They’re Not All the Same

    Type 1 — HFE

    This is the one we learn in medical school and then assume is the whole story. HFE mutations are the most common cause of HH, with C282Y homozygosity the genotype most strongly associated with clinical disease. Onset is typically in late adulthood, often amplified by additional iron-loading exposures like alcohol use or chronic ineffective erythropoiesis. Menstruating individuals are partially protected by blood losses until menopause. Penetrance is lower than we historically believed — many C282Y homozygotes never develop symptomatic disease.

    Compound heterozygosity (C282Y/H63D) causes milder disease. H63D homozygosity milder still. S65C, the least common of the HFE alleles, is associated with mild to moderate iron overload when homozygous, and a single copy is generally not enough on its own to cause clinically significant disease. A single copy of any HFE allele typically isn’t sufficient.

    Type 2 — HJV or HAMP

    Here is where things escalate.
    Type 2, also called juvenile hemochromatosis, presents in the first or second decade of life. Type 2A involves HJV, Type 2B involves HAMP. Both are autosomal recessive, both are rare, and both are aggressive. Because iron accumulation begins in childhood, end-organ damage — particularly cardiac and endocrine — accumulates early. Without treatment, fatal cardiomyopathy by the third decade of life is not a hypothetical.

    This is not a disease you find incidentally on routine iron studies in a 50-year-old. A fellow who has only ever managed Type 1 may not be thinking about HH in a young patient with unexplained iron overload, elevated transferrin saturation, and a normal HFE panel. That blind spot can have real consequences.

    Type 3 — TFR2

    Type 3 HH is caused by mutations in TFR2 — one of those upstream sensors feeding into the hepcidin circuit — and is intermediate in severity and onset, typically presenting in early adulthood. It is autosomal recessive and rare, with most reported cases from Mediterranean populations. Clinically it resembles Type 1 more than Type 2, though it tends to present earlier. If Type 1 is the late-night slow burn, Type 3 is the same fire with an earlier start time.

    Type 4 — SLC40A1 (Ferroportin Disease)

    Type 4 is the most mechanistically interesting, and the one most likely to trip you up. Type 4A is a loss-of-function mutation in ferroportin. Iron accumulates preferentially in macrophages rather than parenchymal cells, because ferroportin is how macrophages export the iron they’ve scavenged from senescent red blood cells. When ferroportin doesn’t work, that iron is trapped. Serum ferritin can be markedly elevated — because ferritin leaks from iron-laden macrophages — while serum iron and transferrin saturation are low. This is the opposite pattern from classic HH. Patients may also become anemic with phlebotomy more quickly than expected, because their macrophages can’t release stored iron to support erythropoiesis.
    Type 4B is a gain-of-function mutation that makes ferroportin resistant to hepcidin. The brake exists; the car just doesn’t respond to it. This behaves more like classic HH: elevated transferrin saturation, parenchymal iron loading, and good response to phlebotomy. Both subtypes are autosomal dominant — which means a family history may be easier to elicit than in the recessive types, and a single pathogenic allele is enough.

    Back to the Wild-Type

    When you encounter iron overload with a normal HFE panel, the differential isn’t just “secondary causes.” Depending on the clinical picture — especially the patient’s age, the pattern of iron deposition, and family history — it’s worth asking whether you’re looking at Type 2, 3, or 4. Extended genetic testing panels exist. A hematologist or geneticist may be a useful colleague.

    And then there’s the patient I encountered who had wild-type results across the full panel — not just HFE, but HJV, HAMP, TFR2, and SLC40A1 as well. No known pathogenic variant anywhere in the circuit. Just iron overload that didn’t have a name we could give it yet. The most likely explanation is a mutation in a gene we haven’t characterized — which is to say, the circuit we’ve described is probably not complete.

    The bigger takeaway, though, is the same one that started this post. Hemochromatosis is a disease of hepcidin deficiency. Once you see it that way, the genetics stop feeling like rote memorization and start feeling like variations on a theme. HFE, HJV, HAMP, TFR2, SLC40A1 — they’re all part of the same story. Some are upstream sensors, one is the signal itself, one is the effector. The iron accumulates because somewhere in the circuit, the brake is broken. A wild-type HFE result doesn’t mean there’s no hemochromatosis. It means you need to look upstream, downstream — or possibly somewhere we haven’t mapped yet.
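If it helps to see the circuit laid flat, the gene-to-type summary from the sections above fits in a small lookup table. The table and the classify helper are just a restatement of the text, not a clinical decision tool.

```python
# A compact restatement of the circuit described above. "role" separates
# the upstream sensors from the signal (hepcidin) and the effector
# (ferroportin); "inheritance" is AR = autosomal recessive, AD = dominant.
HH_TYPES = {
    "HFE":     {"type": "1",  "role": "sensor",   "inheritance": "AR",
                "onset": "late adulthood"},
    "HJV":     {"type": "2A", "role": "sensor",   "inheritance": "AR",
                "onset": "first/second decade"},
    "HAMP":    {"type": "2B", "role": "signal",   "inheritance": "AR",
                "onset": "first/second decade"},
    "TFR2":    {"type": "3",  "role": "sensor",   "inheritance": "AR",
                "onset": "early adulthood"},
    "SLC40A1": {"type": "4",  "role": "effector", "inheritance": "AD",
                "onset": "variable; 4A loss-of-function, 4B hepcidin-resistant"},
}

def classify(gene: str) -> str:
    entry = HH_TYPES.get(gene.upper())
    if entry is None:
        # The wild-type-everywhere patient lands here: the circuit as
        # currently mapped is probably incomplete.
        return "no characterized HH gene; consider an unmapped circuit element"
    return f"Type {entry['type']} ({entry['role']}, {entry['inheritance']})"

print(classify("TFR2"))  # Type 3 (sensor, AR)
print(classify("none"))
```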

  • More Is Not More: Hepcidin and the Counterintuitive Science of Iron Dosing

    In my last post on donor iron deficiency, I buried the most interesting part. Most of the piece covered what the field has established: donation depletes iron, ferritin screening is underutilized, the HEIRS and STRIDE trials make a reasonable case for supplementation, and the AABB has recommendations in place. All of that holds. But near the end, almost as a footnote, I mentioned that the original HEIRS trial used daily iron dosing — and that subsequent evidence suggests daily dosing may actually inhibit absorption by triggering the release of hepcidin. I've been thinking about that footnote ever since. It deserves more than a footnote.

    A Brief Introduction to Your Iron Gatekeeper

    Hepcidin is a small peptide hormone made by the liver, and its job is to regulate how much iron enters circulation. When iron stores are adequate, hepcidin is secreted, binds to ferroportin — the channel that exports iron from cells into the bloodstream — and shuts the door. When stores are low, hepcidin falls and the door opens. It is an elegant feedback loop, and under normal circumstances it works well.

    What makes hepcidin relevant to donor supplementation is a less intuitive property: it also responds acutely to oral iron ingestion. A single dose of 60 mg or more of elemental iron — roughly what you find in a standard over-the-counter supplement — triggers a hepcidin spike that sets in within hours and persists for approximately 24 hours before returning to baseline. While hepcidin is elevated, absorption from any subsequent dose is meaningfully suppressed. The implication for how we advise donors to supplement follows directly from this.

    The Problem With 'Take Iron Daily'

    The instinct to recommend daily iron supplementation is understandable. More doses, more iron in, faster repletion. It is the same logic that leads to split dosing — take it twice a day to maximize the total amount ingested. Both approaches are intuitive. Both are, at least partially, counterproductive.
    A 2015 study by Moretti and colleagues, published in Blood, was among the first to characterize this effect in humans. They showed that a morning iron supplement triggers sufficient hepcidin elevation to reduce absorption from a dose given later the same day — and that the response persists into the following morning. Split dosing compounded the problem: dividing the daily dose produced higher hepcidin and lower fractional absorption per dose, not better total uptake.

    The 2017 Stoffel et al. trial in Lancet Haematology tested the logical alternative prospectively. Women randomized to alternate-day supplementation absorbed significantly more iron — both in fractional terms (21.8% vs. 16.3%) and in total — compared to those taking supplements daily. Allowing hepcidin to return to baseline between doses improved the efficiency of each one. Subsequent work confirmed that morning timing matters too: hepcidin follows a circadian pattern and is lower in the morning, making that the optimal window before the post-dose spike closes the door again.

    The practical upshot is that a donor who dutifully takes iron every morning may be absorbing less than one who takes the same dose every other morning. The body's own regulatory response is working against the intervention.

    What the Data Don't Yet Tell Us

    The alternate-day evidence is compelling, but almost none of it was generated in blood donor populations specifically. Most studies enrolled iron-depleted or iron-deficient women — a related but not identical context. Donors vary considerably in baseline iron status, sex, age, donation frequency, and the degree of deficiency at the time of supplementation. Whether the absorption advantage of alternate-day dosing holds consistently across this range is not yet established.

    The 2024 meta-analysis of daily versus alternate-day iron dosing added a useful wrinkle: baseline inflammation appears to modulate the benefit.
    Elevated hepcidin from inflammatory states may blunt the absorption advantage of spacing doses, since the favorable window is already partially closed before the first pill is swallowed. This is not a fringe concern in a donor population that includes people with subclinical inflammatory conditions.

    The dose question is also genuinely unresolved. HEIRS used daily supplementation; we now know that was suboptimal from an absorption standpoint. The alternate-day studies suggest that doubling the per-dose amount on an alternate schedule can achieve comparable or greater total iron uptake — but this has not been validated prospectively in donors. We are, in effect, extrapolating from better-designed absorption studies to a population that hasn't been directly studied under the revised paradigm.

    And then there is the infection question I raised in the earlier post. Oral iron has been shown to acutely elevate bacterial growth in human serum in iron-sufficient subjects. Whether this translates to iron-deficient donors — in whom the physiologic context is substantially different — remains unknown. No donor supplementation trial to date has tracked infection as an outcome. That gap is worth sitting with.

    Where This Leaves Us

    The case for addressing donor iron deficiency is solid. The case for doing it thoughtfully — rather than defaulting to daily supplementation because it seems like the obvious approach — is getting stronger. Hepcidin is not a curiosity. It is a central regulator of iron homeostasis, and it does not stop working just because we want our donors to replete faster. Any supplementation strategy that ignores it is, at minimum, less efficient than it could be, and possibly counterproductive at the margins.

    The AABB recommendations provide a reasonable framework. What they do not yet specify — with good evidence behind them — is the optimal schedule. Alternate-day morning dosing is the best current answer from the absorption literature.
Whether that translates directly to the donor context, and at what dose, is work that still needs doing. In the meantime, it seems worth updating the footnote.
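The schedule logic above can be sketched with back-of-envelope arithmetic, using the fractional absorption figures from the Stoffel trial quoted earlier (16.3% daily, 21.8% alternate-day). Two assumptions depart from the trial itself: the trial matched the number of doses across arms, while this sketch fixes a 28-day calendar window, and fractional absorption is treated as dose-independent, which flatters the doubled-dose arm. It is a sketch of the logic, not a dosing recommendation.

```python
# Illustrative comparison over a fixed 28-day window. Fractional
# absorption values are the trial-reported figures from the text;
# the 60 mg dose and the window length are chosen for illustration.
DOSE_MG = 60   # elemental iron per dose
DAYS = 28

daily_total = DOSE_MG * DAYS * 0.163                # a dose every day
alt_day_total = DOSE_MG * (DAYS // 2) * 0.218       # a dose every other day
alt_day_double = 2 * DOSE_MG * (DAYS // 2) * 0.218  # doubled per-dose amount

print(f"daily:            {daily_total:.0f} mg absorbed")
print(f"alternate-day:    {alt_day_total:.0f} mg absorbed")
print(f"alt-day, 2x dose: {alt_day_double:.0f} mg absorbed")
```

Even with those simplifications, the per-dose efficiency gap is visible, and the doubled alternate-day arm shows why that schedule is attractive pending donor-specific trials.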

  • Flying Blind: TPE for Acute Kernicterus in Crigler-Najjar Syndrome

    Introduction

    One of the most humbling experiences in medicine is when a consult comes in and you realize the textbook has nothing for you. I had one of those recently — a 21-year-old with Crigler-Najjar syndrome type 1 and chronic kernicterus, averbal at baseline, who presented to an outside hospital with an infection and altered mental status. Her at-home bili lights were unavailable, and her bilirubin climbed from a baseline of around 24 to 32 mg/dL. She was transferred for ICU-level care and started on continuous phototherapy, which brought her bilirubin down from 32 to 29 — but her mental status didn’t budge. The concern was acute-on-chronic kernicterus, and now she was being transferred to us for therapeutic plasma exchange. Lord almighty, did I have a hard time coming up with a game plan.

    Crigler-Najjar Syndrome: A Primer

    For the uninitiated, Crigler-Najjar syndrome type 1 is a rare genetic disorder in which the enzyme responsible for conjugating bilirubin in the liver — UGT1A1 — is absent or nonfunctional. Without conjugation, unconjugated bilirubin accumulates in the blood. Unlike the common, transient jaundice seen in newborns, this is a lifelong condition. The mainstay of treatment is phototherapy, often for 10 to 16 hours daily, which isomerizes bilirubin into a water-soluble form that can be excreted without conjugation. The only definitive cure is liver transplantation, though gene therapy trials are underway. When bilirubin rises above a patient’s baseline — due to infection, fasting, or loss of access to phototherapy — the risk of acute bilirubin encephalopathy, or kernicterus, becomes very real.

    What the Literature Says (and Doesn’t Say)

    So, does therapeutic plasma exchange (TPE) have a role in acute kernicterus for Crigler-Najjar patients? I went to the literature to find out. What I found was… underwhelming. TPE is not listed as a primary indication in the ASFA guidelines for Crigler-Najjar syndrome.
    The evidence that does exist consists of scattered case reports and case series, and in every single one, plasmapheresis is treated as an afterthought — mentioned almost in passing as something that was done during a crisis, without rigorous evaluation of its contribution to the outcome.

      • A 10-year-old with CN1 who developed kernicterus during streptococcal pharyngitis was treated with plasmapheresis, intensive phototherapy, and antibiotics, and recovered without neurologic sequelae.
      • A 23-year-old man with CN1 who developed acute hepatitis from infectious mononucleosis received plasmapheresis to prevent neurological decline.
      • A 2-month-old with a bilirubin of 30 mg/dL and signs of encephalopathy underwent plasmapheresis and urgent liver transplantation.
      • Two 17-year-old boys with bilirubins in the 30s received intermittent plasmapheresis over a prolonged hospitalization.

    In none of these reports is there a standardized protocol. In none of them is TPE the focus of the study. It’s always a side note.

    Borrowing from the Acute Liver Failure Literature

    That left me with some very practical questions and very few answers. What exchange volume should I use? What replacement fluid? How often? The Crigler-Najjar literature doesn’t say. So I looked to the closest analogy I could find: the acute liver failure literature.

    In acute liver failure (ALF), high-volume plasma exchange (HVPE) has become a first-line therapy, based on a landmark 2016 randomized trial by Larsen and colleagues showing improved survival. HVPE in that context is defined as 8 to 12 liters of exchange, or about 15% of ideal body weight, which works out to roughly 2.5 to 3 plasma volumes. The replacement fluid is fresh frozen plasma, because ALF patients have severe coagulopathy and need factor replacement. A subsequent 2022 trial showed that even standard-volume plasma exchange — 1.5 to 2 plasma volumes — was effective and potentially safer with respect to cerebral edema.
    But here’s the critical difference: in ALF, the liver can potentially recover. In Crigler-Najjar, the enzyme deficiency is permanent. Bilirubin production continues at roughly 4 to 5 mg/dL per day, and studies have shown that bilirubin rebounds within 24 hours after plasma exchange. TPE in this context is a temporizing measure at best — buying time while you maximize phototherapy and, if indicated, arrange for transplant evaluation.

    My Approach

    I also had to consider the replacement fluid question carefully. The ALF literature uses FFP because those patients need clotting factors. My patient didn’t have liver synthetic dysfunction — her liver makes everything except functional UGT1A1. What she needed was bilirubin removal, and albumin is the primary carrier of unconjugated bilirubin in the blood. On the other hand, some FFP in the replacement fluid provides additional albumin and maintains oncotic pressure. I ultimately decided on a one-time TPE with a 50/50 mix of albumin and plasma — a pragmatic decision born more from first principles than from evidence, because the evidence simply doesn’t exist for this specific scenario.

    I also recommended maximizing phototherapy — exposing as much skin surface area as possible and using as many bili light devices as they could get their hands on. Phototherapy remains the workhorse of bilirubin management in CN1, and TPE without concurrent aggressive phototherapy is unlikely to make a meaningful dent.

    When Evidence Runs Out

    The broader point here is one that I think resonates with anyone who practices in a niche or rare-disease space: sometimes the literature leaves you on your own. You can search every database, pull every case report, and still end up making decisions based on pathophysiology, first principles, and clinical judgment rather than evidence-based protocols. That’s not a comfortable place to be, but it’s an honest one.
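To make the volume targets and the rebound problem concrete, here is a back-of-envelope sketch. The 70 mL/kg blood-volume figure and the 1 − e⁻ⁿ exchange kinetics are standard apheresis approximations, and the weight and hematocrit are invented for illustration; none of these numbers come from the case above.

```python
import math

def plasma_volume_l(weight_kg: float, hct: float) -> float:
    """Rough plasma volume: ~70 mL/kg total blood volume x (1 - Hct).
    A standard apheresis approximation, not a patient-specific formula."""
    return weight_kg * 0.070 * (1 - hct)

def fraction_removed(plasma_volumes_exchanged: float) -> float:
    """Well-mixed single-compartment model of continuous exchange:
    exchanging n plasma volumes removes 1 - e^-n of intravascular solute."""
    return 1 - math.exp(-plasma_volumes_exchanged)

# Illustrative inputs only (assumed weight and Hct, not the actual case).
pv = plasma_volume_l(60, 0.36)     # ~2.7 L
removed = fraction_removed(1.5)    # ~78% of *intravascular* solute
post_bili = 32 * (1 - removed)

# Caveat: unconjugated bilirubin is albumin-bound and heavily distributed
# outside the plasma, so the real post-TPE level sits well above this
# single-compartment estimate -- and ongoing production (~4-5 mg/dL/day)
# plus redistribution drives the ~24-hour rebound described above.
print(f"plasma volume ~{pv:.1f} L; idealized post-TPE bilirubin ~{post_bili:.0f} mg/dL")
```

The point of the sketch is the shape of the problem, not the numbers: even a generous exchange only transiently clears the intravascular compartment, which is why TPE here is a bridge to phototherapy and transplant evaluation rather than a therapy in itself.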
    A Call for Better Evidence

    For Crigler-Najjar patients in acute crisis, I think there’s a real need for better evidence on the role of TPE. What volume? What fluid? What schedule? Does it actually change outcomes, or does it just change numbers on a screen? These are questions that case reports can’t answer. Given the rarity of the condition, a multi-center registry or collaborative case series with standardized TPE protocols would be a reasonable starting point.

    In the meantime, if you get this consult, know that you’re not going to find a protocol waiting for you. You’re going to have to reason through it. And if you come up with something better than what I did, I’d love to hear about it.

  • Does TAMOF Exist? Revisiting a Diagnosis I Thought I Understood

    Earlier this year, I wrote a piece about TAMOF — thrombocytopenia-associated multiple organ failure — and the case for recognizing it as a TTP-like process driven by secondary ADAMTS13 deficiency. I described the lab pattern. I walked through the differential. I made what I believed was a compelling argument that TAMOF is underrecognized, and that plasma exchange is a rational intervention once the diagnosis is made.

    I stand by that piece. The pathophysiology is real. The lab pattern is real. The clinical scenarios I described are ones that pathologists and intensivists encounter regularly. But since writing it, I’ve spent more time with the literature surrounding TAMOF, and I’ve come to appreciate something I didn’t fully reckon with the first time around: the question isn’t just whether TAMOF is real. The question is whether TAMOF is useful — and that turns out to be a much harder thing to answer.

    The Case I Made

    In my earlier article, I described TAMOF as occupying an uncomfortable space between DIC, TTP, and sepsis-associated coagulopathy. The central idea was that systemic inflammation can drive a relative deficiency of ADAMTS13, leading to accumulation of ultra-large von Willebrand factor multimers and platelet-rich microthrombi in the microvasculature. The downstream effect looks like TTP — organ ischemia, thrombocytopenia, elevated LDH, sometimes schistocytes — even though the trigger is sepsis or inflammation rather than autoimmunity.

    I emphasized that the lab pattern tells the story: falling platelets, rising LDH, preserved coagulation parameters, and organ dysfunction out of proportion to hemodynamics. I argued that recognizing this pattern opens the door to therapeutic plasma exchange, and that missing it leaves patients undertreated. None of that is wrong, exactly. But it is incomplete.

    The Questions I Didn’t Ask

    The concept of TAMOF was primarily developed by Nguyen and Carcillo, with foundational work published in Critical Care in 2006.
    From the beginning, TAMOF was described not as a single disease but as a clinical phenotype — an umbrella encompassing TTP, DIC, and secondary thrombotic microangiopathy in critically ill patients. The unifying feature was new-onset thrombocytopenia coinciding with multiple organ failure, and the proposed mechanism was microvascular thrombosis.

    This is where the first tension appears. If TAMOF includes DIC, TTP, and secondary TMA under one label, what does the label add? Each of those entities already has its own diagnostic criteria, its own pathophysiology, and — critically — its own treatment approach. DIC is a consumptive coagulopathy driven by tissue factor. TTP is an autoantibody-mediated deficiency of ADAMTS13. Secondary TMA is a broader category of inflammation-driven microangiopathy. These are not the same process. Grouping them under one name risks implying a mechanistic unity that does not exist.

    In my first article, I focused on the subset of TAMOF that behaves like TTP — the secondary TMA piece, where inflammation drives ADAMTS13 deficiency and platelet-vWF-mediated thrombosis predominates. That is a real phenomenon. But by calling it TAMOF rather than secondary TMA, I may have inadvertently adopted a framework that obscures more than it clarifies.

    The Evidence Problem

    The therapeutic implication of recognizing TAMOF is plasma exchange. This was central to my earlier piece: once you see the microangiopathy, the rationale for plasma exchange follows logically. Remove the ultra-large vWF multimers. Replenish ADAMTS13. Reduce inflammatory mediators. The biological plausibility is sound. The evidence base, however, is thin.

    The landmark pediatric trial randomized just ten children — five to plasma exchange, five to standard therapy. The results were encouraging: plasma exchange restored ADAMTS13 activity and was associated with organ failure resolution. But a trial of ten patients, however well-designed, cannot establish standard of care.
Subsequent studies have been retrospective, observational, or limited to small cohorts. The Turkish TAMOF Network described outcomes in 42 children but could not even measure ADAMTS13 levels because the assay was unavailable. A prospective multicenter experience published more recently found lower 28-day mortality in children treated with plasma exchange, but the authors themselves concluded that a randomized clinical trial is necessary to establish a causal relationship. The American Society for Apheresis gives plasma exchange in sepsis with multiple organ failure a Category III recommendation — meaning the optimum role is not established and decision-making should be individualized. This is not an endorsement. It is an acknowledgment that we don’t know enough.

The ADAMTS13 Problem

In my earlier piece, I described ADAMTS13 as spanning a wide range in TAMOF and cautioned against rigid thresholds. I still think that’s right. But I underappreciated a more fundamental issue: we do not yet know whether reduced ADAMTS13 in sepsis is the cause of organ dysfunction or simply a marker of disease severity. This distinction matters enormously. If reduced ADAMTS13 is pathogenic — if it is actively driving microthrombus formation and organ ischemia — then replenishing it through plasma exchange is a targeted intervention. But if ADAMTS13 is reduced because the patient is severely ill — because inflammation broadly suppresses hepatic synthesis and accelerates consumption of many proteins — then treating the ADAMTS13 level may be treating a surrogate rather than the disease itself.

Moreover, ADAMTS13 activity in TAMOF is typically reduced but not severely deficient. In classic TTP, levels are usually below 10%. In sepsis-associated secondary TMA, levels are more often in the 20–60% range. This is an important gray zone. Plenty of critically ill patients with sepsis have mildly reduced ADAMTS13, and most of them do not have a clinically meaningful microangiopathic process.
The specificity of this biomarker, in this context, is genuinely uncertain.

The Diagnostic Boundary Problem

TAMOF is diagnosed by a triad: new-onset thrombocytopenia below 100,000, at least two failing organs, and elevated LDH. The problem is that this triad describes an enormous proportion of critically ill septic patients. Thrombocytopenia in the ICU is common — present in up to 40–50% of patients, depending on the threshold used. LDH elevation is nearly ubiquitous in critical illness. And organ failure is, almost by definition, why these patients are in the ICU in the first place. If the diagnostic criteria capture too many patients, the label loses its power to identify those who would specifically benefit from targeted intervention. A diagnosis that applies to half the ICU is not a diagnosis. It is a description.

What I Think Now

I want to be careful here, because I don’t think the answer is that TAMOF is meaningless or that my earlier article was misguided. The pathophysiology I described — inflammation-driven ADAMTS13 deficiency leading to platelet-vWF-mediated microvascular thrombosis — is supported by autopsy data, by biomarker studies, and by the clinical observation that some septic patients develop a microangiopathic picture that does not fit neatly into DIC. That phenomenon is real, and it deserves a name.

But I think the name might be doing some work that the evidence hasn’t earned yet. TAMOF as an umbrella term bundles together mechanistically distinct processes and implies they share a common therapeutic target. TAMOF as a diagnostic entity relies on criteria so broad that they risk capturing patients who don’t have a true microangiopathy at all. And TAMOF as a justification for plasma exchange rests on studies that, while promising, remain small, largely retrospective, and without a definitive randomized trial. What I wrote before was an argument for recognition — for seeing the pattern and acting on it.
What I’d add now is an argument for precision. The lab pattern I described is still the right place to start. Converging signals of microangiopathy in a septic patient should prompt the question: is there a thrombotic microangiopathic process driving this patient’s organ failure? But the answer to that question should lead to a specific diagnosis — secondary TMA, DIC, or something else — not to a catch-all label that may prematurely close the differential.

The Lab’s Role, Revisited

I ended my first article with the line: “TAMOF is not rare because it is uncommon. It is rare because we don’t look for it.” I still believe that’s true — but I’d frame it differently now. What’s underrecognized isn’t necessarily TAMOF as a discrete entity. What’s underrecognized is the broader phenomenon of secondary thrombotic microangiopathy in the critically ill, and the role that laboratory medicine plays in distinguishing it from DIC, from “just sepsis,” and from true TTP. That distinction requires more than a label. It requires the kind of contextual interpretation that has always been the core competency of the pathologist: not just reporting numbers, but assembling them into a story that changes management.

The controversy around TAMOF is not really about whether the biology is real. It is about whether we have the right framework to describe it, the right criteria to diagnose it, and the right evidence to treat it. On all three counts, the honest answer is: not yet. But “not yet” is not the same as “no.” It means we have more work to do. And for those of us in the lab, that work starts with being willing to question our own frameworks — even the ones we just finished building.
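The diagnostic-boundary argument can be made concrete with a short sketch. The thresholds are those quoted in the post (platelets below 100,000/µL, at least two failing organs, elevated LDH); the function name, the assumed LDH upper limit of normal, and the example patient values are mine, purely to illustrate how easily a generic septic ICU patient satisfies the triad. This is not a validated clinical tool.

```python
# Illustrative only: the TAMOF screening triad as described in the post.
# Thresholds come from the text; the LDH upper limit of normal (250 U/L)
# is an assumed value for illustration.

def meets_tamof_triad(
    platelets_per_ul: int,
    failing_organs: int,
    ldh_u_l: float,
    ldh_uln_u_l: float = 250.0,  # assumed ULN, varies by lab
) -> bool:
    """New-onset thrombocytopenia <100,000/uL, >=2 failing organs, elevated LDH."""
    return (
        platelets_per_ul < 100_000
        and failing_organs >= 2
        and ldh_u_l > ldh_uln_u_l
    )

# A hypothetical, entirely unremarkable septic ICU patient:
# moderate thrombocytopenia, lungs and kidneys failing, modest LDH elevation.
print(meets_tamof_triad(platelets_per_ul=85_000, failing_organs=2, ldh_u_l=400.0))
```

The point of the sketch is that nothing in these three checks distinguishes a true microangiopathy from ordinary severe sepsis, which is exactly the specificity problem discussed above.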

  • Transfusion Medicine: The Clinical Engine Behind the Blood Bank

When clinicians say, “I called the blood bank,” they usually mean one of two things: they need blood products, or something about a transfusion doesn’t feel right. Those are not the same situation — and they are not handled the same way. At most institutions, the blood bank laboratory and the Transfusion Medicine service operate as an integrated system. They overlap. They collaborate constantly. But they serve different functions. That difference matters.

The Blood Bank: Technical and Operational Safety

The blood bank laboratory is responsible for:

- ABO/Rh typing and antibody screens
- Crossmatching and compatibility testing
- Investigating serologic incompatibilities
- Preparing and issuing blood components
- Maintaining regulatory and quality standards

It is the operational engine of transfusion safety. It ensures the right product reaches the right patient efficiently and in compliance with strict regulatory frameworks. But laboratory testing alone does not answer every clinical question.

Transfusion Medicine: Clinical Judgment in Real Time

The Transfusion Medicine service provides physician-level consultation when transfusion decisions become complex or when adverse events occur. We are consulted for:

- Suspected transfusion reactions
- Hemolysis or unexpected serologic findings
- Complex alloimmunization cases
- Risk–benefit discussions in high-risk scenarios
- Questions about product selection beyond routine ordering

When a patient develops hypotension, hypoxia, fever, or laboratory evidence of hemolysis during or after a transfusion, the question is no longer simply, “What do the labs show?” It becomes:

- Is this TRALI, TACO, hemolysis, sepsis, or something unrelated?
- Should additional products be given?
- Does this event require reporting or product quarantine?

The laboratory can detect hemolysis. It cannot diagnose TRALI. These are clinical determinations that require integration of history, timing, exam findings, imaging, and laboratory data.
When to Involve Transfusion Medicine

A practical rule of thumb: if you are unsure whether what you are seeing “counts” as a transfusion reaction — involve us. Early consultation allows:

- Real-time clinical assessment
- Guidance on stopping versus continuing transfusion
- Appropriate laboratory evaluation
- Accurate documentation in the EMR
- Prevention of downstream complications

Waiting until the picture is unmistakable often means the patient has already deteriorated further than necessary. The threshold should be low — particularly for severe allergic reactions, suspected hemolysis, respiratory compromise, or unexplained instability during transfusion.

Why Role Clarity Matters

Conflating the laboratory function with clinical consultation can create blind spots. If a reaction is reported only as a technical issue, important clinical context may be missed. Without coordinated physician involvement, transfusion reactions are more likely to be under-recognized, misclassified, or inconsistently documented. That affects more than a single patient encounter. It impacts hemovigilance data, quality reporting, and our ability to learn from adverse events. Transfusion is one of the most common procedures performed in hospitalized patients. It is also one of the few therapies that requires laboratory and clinical teams to function as a tightly integrated unit in real time. Clear roles within that integration improve patient safety.

A Collaborative Model

This is not about separation. It is about alignment. The blood bank laboratory ensures technical and regulatory safety. The Transfusion Medicine physician provides clinical oversight and interpretation. They are complementary functions within the same safety system. If you are ordering routine blood for a stable patient, the laboratory will manage the process seamlessly.
If a transfusion becomes clinically complicated — or something simply does not make sense — physician-level Transfusion Medicine consultation should be part of the response. Transfusion Medicine is not just a laboratory process. It is a clinical service embedded within it. And when in doubt, call.

  • Practicing at the Edge of ABO: Navigating Rare A Subgroups

There are moments in transfusion medicine when the most uncomfortable part of a case isn’t the serology — it’s the realization that the literature can’t quite tell you what to do. Recently, on service, I encountered a patient with a rare A subgroup and a cold-reacting anti-A1. Genotyping suggested either an Aw allele or an Ael allele. The immediate question was practical and deceptively simple: is it safe to transfuse group A red cells, or should we restrict the patient to group O? What followed was a familiar exercise for anyone who practices in the margins of evidence: reading case reports, revisiting mechanism, and trying to decide how much uncertainty is acceptable when the downside is fatal hemolysis. Along the way, one thing became clear: not all weak A phenotypes are biologically — or clinically — interchangeable. In particular, A3 is not the same as Aw, and neither is the same as Ael. Yet they are often discussed together, sometimes implicitly treated as a single category. That shortcut matters.

The problem with “weak A” as a single bucket

In everyday blood bank practice, weak A phenotypes are often grouped together for operational reasons: they may present as ABO discrepancies, require additional testing, or trigger conservative transfusion strategies. But biologically, these phenotypes arise through very different mechanisms, and those differences shape how we should think about transfusion risk. Here’s a simplified comparison.

A3 vs Aw vs Ael — why the distinction matters

- Typical serologic pattern — A3: mixed-field agglutination with anti-A. Aw: weak or very weak reactivity with anti-A; variable. Ael: no agglutination with anti-A.
- Detectable without elution? — A3: yes. Aw: often yes (weak). Ael: no.
- Detectable by adsorption–elution — A3: usually not needed. Aw: sometimes. Ael: required.
- Underlying mechanism — A3: reduced or mosaic expression, often promoter/splicing effects. Aw: hypomorphic A transferase with allele-in-trans–dependent expression. Ael: near-null expression, often due to early truncation of the A transferase.
- Degree of A antigen exposure — A3: present on a subset of RBCs. Aw: variable; can be extremely low. Ael: trace only.
- Evidence base for transfusion safety — A3: relatively robust (dominates the “weak A” literature). Aw: sparse, case-based. Ael: extremely limited.
- Theoretical risk of allo-anti-A — A3: low. Aw: uncertain. Ael: plausible (no incidence data).

Why A3 is different

A3 is classically defined by mixed-field agglutination with anti-A: some red cells express A antigen clearly, others do not. Importantly, A antigen is present and visible without elution. From an immunologic standpoint, this matters. The immune system has likely been exposed to A antigen throughout life. Unsurprisingly, much of the reassuring transfusion experience for “weak A” phenotypes comes from cohorts dominated by A3 and similar variants. When people say, “We transfuse A all the time in weak A and nothing happens,” they are often — implicitly — talking about A3.

Ael: a fundamentally different phenotype

Ael occupies the opposite end of the spectrum. These phenotypes typically arise from premature termination codons early in the ABO A transferase gene. Routine serology shows no A antigen at all; detection requires adsorption–elution, and even then only trace amounts are found. In practical terms, most circulating red cells are immunologically indistinguishable from group O. Does this mean patients with Ael will form allo-anti-A? No one knows. The literature does not report an incidence. But mechanistically, the conditions that support immune tolerance to A antigen are clearly not the same as in A1, A2, or A3 phenotypes. This is where the phrase “absence of evidence is not evidence of absence” stops being academic.
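For readers who think in data structures, the subgroup comparison above can be collapsed into a small lookup. This is only a sketch of the table's content: the dictionary keys and field names are my own, and the entries simply restate the rows above.

```python
# Sketch: the A3 / Aw / Ael comparison above, encoded as a lookup table.
# Field names are illustrative; values mirror the comparison in the post.
WEAK_A_SUBGROUPS = {
    "A3": {
        "serology": "mixed-field agglutination with anti-A",
        "detectable_without_elution": True,
        "mechanism": "reduced/mosaic expression (promoter/splicing effects)",
        "evidence_base": "relatively robust",
        "allo_anti_A_risk": "low",
    },
    "Aw": {
        "serology": "weak or very weak reactivity with anti-A; variable",
        "detectable_without_elution": True,   # often, and weakly
        "mechanism": "hypomorphic A transferase; allele-in-trans-dependent",
        "evidence_base": "sparse, case-based",
        "allo_anti_A_risk": "uncertain",
    },
    "Ael": {
        "serology": "no agglutination with anti-A",
        "detectable_without_elution": False,  # adsorption-elution required
        "mechanism": "near-null expression (early truncation of A transferase)",
        "evidence_base": "extremely limited",
        "allo_anti_A_risk": "plausible (no incidence data)",
    },
}

def requires_adsorption_elution(subgroup: str) -> bool:
    """True when A antigen is only demonstrable by adsorption-elution."""
    return not WEAK_A_SUBGROUPS[subgroup]["detectable_without_elution"]
```

Encoding the distinctions this way makes the central point of the post mechanical: a query for "Ael" and a query for "A3" return very different risk and evidence profiles, even though SOPs may file both under "weak A."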
Aw: the uncomfortable middle

Aw phenotypes are what make this topic genuinely hard. Unlike A3, Aw is not a mixed-field phenotype by default. And unlike Ael, it is not uniformly silent. Instead, expression depends heavily on the allele in trans. One of the most striking demonstrations of this comes from maternal–child discordance cases, where the same Aw allele produced essentially no detectable A antigen when paired with an O allele, and robust A expression when paired with a B allele. In other words, Aw can look immunologically like Ael in one context and like A2 or stronger in another. When you encounter Aw in the blood bank, you are not just dealing with “weak A.” You are dealing with context-dependent A expression, and that uncertainty follows you into transfusion decisions.

What about hemolysis and antibodies?

Most anti-A1 antibodies are cold, IgM, and clinically insignificant. That’s true — until it isn’t. Case reports exist of hemolytic transfusion reactions involving anti-A1 when thermal amplitude is broad or when additional risk factors are present. These cases are rare, but they loom large precisely because the denominator is so poorly defined. What we don’t have are:

- incidence data for allo-anti-A formation in Ael or Aw individuals,
- outcome studies stratified by molecular subgroup, or
- prospective evidence that transfusing group A is uniformly safe across all weak A genotypes.

So when clinicians default to group O in these cases, it’s not ignorance. It’s an acknowledgment of uncertainty.

Conservatism isn’t a failure of evidence — it’s stewardship

In my case, we chose to transfuse group O red cells while we waited for expert input. That decision wasn’t driven by panic or dogma. It was driven by a simple question: if I’m wrong, what happens to the patient? In transfusion medicine, the cost of being wrong is asymmetric. Hemolysis is rare — until it isn’t — and when it happens, it’s unforgettable.
Until we have better data, it is reasonable to treat A3, Aw, and Ael differently, even if our SOPs and textbooks sometimes collapse them into the same category.

Closing thought

Somewhere between genotype, phenotype, and patient safety is a space where we practice medicine without a net. That’s not a failure of science. That’s where judgment lives. And sometimes, judgment looks like a unit of group O. Please see this related post for an update on this case: https://www.bloodbytesbeyond.com/post/anti-a1-in-practice-not-in-theory

  • Anti-A1 in Practice, Not in Theory

After I published a recent post about a patient with a rare A subgroup and a cold-reacting anti-A1, I did what transfusion medicine quietly trains us all to do when the literature runs thin: I picked up the phone. The case itself was straightforward to describe and uncomfortable to decide. Genotyping suggested either an Aw allele or an Ael allele. Serology favored Aw, with faint agglutination detectable without elution. The patient also had a cold-reacting anti-A1. The question was simple and not at all academic: should we transfuse group A red cells, or restrict the patient to group O? In the absence of clear guidance, we chose conservatively while seeking expert input. That decision felt reasonable — but incomplete. So I reached out to colleagues at a reference laboratory to ask how they actually think about cases like this, not in theory, but in practice.

What the Literature Teaches — and Where It Stops

If you search anti-A1 and hemolysis, you will find what all of us find: case reports. Some are dramatic. A few involve hemolytic transfusion reactions. Many emphasize the same features — broad thermal amplitude, high titers, or unusual clinical contexts such as malignancy. What you will not find are incidence data. You won’t find outcome studies stratified by molecular subgroup. You won’t find a denominator large enough to tell you how often cold-reacting anti-A1 actually causes harm in routine transfusion practice. Case reports are essential — they define what can happen. But they are also blunt instruments. They warn us without telling us how often to expect trouble, or how to weigh that risk against competing obligations like inventory stewardship. That gap is where reference labs live.

What the Reference Lab Actually Looks At

One of the most useful things about consulting a reference laboratory is learning which variables matter most when time and data are limited. In this case, three themes came up repeatedly.

1. Thermal amplitude and titer matter more than genotype

In practice, the single most important question is not whether the patient has Aw versus Ael, but whether the anti-A1 reacts at 30 °C or higher. Cold-reacting anti-A1 antibodies that react only below 30 °C are overwhelmingly benign in real-world experience. Hemolysis in this setting is extraordinarily rare, particularly in otherwise stable patients. When reactions do occur, they tend to involve antibodies with broader thermal amplitude or very high titers that permit binding at warmer temperatures. This is not because genotype is irrelevant, but because thermal amplitude and titer are the only tools we currently have that correlate, however imperfectly, with clinical significance.

2. Malignancy-associated cases don’t generalize well

Several of the most concerning reports of anti-A1–mediated hemolysis come from patients with malignancy, particularly myelodysplastic syndromes. These cases behave differently for a reason. In malignancy, the ABO glycosyltransferase genes may be epigenetically silenced or otherwise disrupted. Antigen expression can change or disappear entirely, and patients may transiently form potent antibodies against antigens they once expressed. These antibodies can be atypical, high-titer, and clinically significant — and may abate once the underlying disease is treated or after transplant. Those cases are real, but they are not representative of the average patient with a cold-reacting anti-A1. Treating them as such inflates perceived risk.

3. Group A is transfused more often than people realize

Perhaps the most grounding insight was this: reference labs see these cases frequently, and group A red cells are routinely transfused to patients with cold-reacting anti-A1 without incident. That comfort does not come from theory. It comes from volume — from seeing the same scenario play out safely again and again.
When reactions are limited to temperatures below 30 °C and the patient is not critically ill or undergoing hypothermia, the expectation is that transfusion will be tolerated. Group O remains an option — but not a default.

Where Conservatism Still Makes Sense

None of this means that caution is misguided. In fact, reference labs are often more conservative in specific situations:

- Patients who are critically ill or have minimal physiologic reserve
- Antibodies with reactivity approaching 30 °C
- Very high titers, even if technically “cold”
- Planned hypothermia or cardiac surgery

In those contexts, avoiding even low-grade hemolysis may matter more than inventory conservation, and the threshold for using group O appropriately drops. The key distinction is that conservatism becomes a choice, not an automatic rule.

Judgment Is a Team Sport

Case reports teach us what can go wrong. Reference labs teach us how often it does — and under what conditions. Clinicians have to integrate both, along with patient context and resource stewardship, to make decisions that are defensible even when the evidence is incomplete. That’s not a failure of science. It’s the practice of medicine. Sometimes, after all that, the answer is still group O. But now, at least, I know why — and when it doesn’t have to be.
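The reasoning in this post reduces to a small decision rule, which can be sketched as code. The 30 °C thermal-amplitude threshold and the caution flags (critical illness, planned hypothermia, very high titer) come from the text; the function name and its structure are mine, for illustration only. This is a teaching sketch, not clinical guidance.

```python
# Illustrative sketch of the decision logic described in this post.
# Not clinical guidance; thresholds and caution flags are from the text.

def suggested_group_for_anti_a1(
    reacts_at_or_above_30c: bool,
    critically_ill: bool = False,
    planned_hypothermia: bool = False,
    very_high_titer: bool = False,
) -> str:
    """Return a *suggested* red cell group, per the post's reasoning."""
    # Conservatism is a choice in specific situations, not an automatic rule.
    caution = (
        reacts_at_or_above_30c
        or critically_ill
        or planned_hypothermia
        or very_high_titer
    )
    return "O" if caution else "A"

# Stable patient, antibody reactive only below 30 degC: group A is reasonable.
print(suggested_group_for_anti_a1(reacts_at_or_above_30c=False))
# Same antibody, but planned cardiac surgery with hypothermia: group O.
print(suggested_group_for_anti_a1(reacts_at_or_above_30c=False,
                                  planned_hypothermia=True))
```

Note what the sketch deliberately omits: genotype. That is the post's central lesson from the reference lab, that thermal amplitude, titer, and clinical context drive the decision more than Aw versus Ael.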

  • Where Autonomy Ends: Directed Donation, COVID Myths, and the Ethics of Saying No

Today we had a case that many transfusion services will recognize. A patient scheduled for surgery requested a directed blood donation. The reason given was concern about receiving blood from donors who had received a COVID-19 vaccine. The answer was no. She returned with a revised request, this time citing religious preference and psychological comfort. Again, the answer was no.

Afterward, I had a long discussion with a resident — thoughtful, patient-centered, and clearly uncomfortable with refusing a request framed in ethical language. I don’t think I convinced them. And that matters, because this is exactly the kind of scenario where kindness and ethics feel deceptively close, and where “just accommodating” can feel easier than holding the line. So let’s be explicit about why the answer was no — and why it needed to be.

What the Evidence Actually Says About COVID Vaccination and Blood Safety

The fear driving these requests is understandable — but it is not evidence-based. There is no evidence that blood from donors who were vaccinated against COVID-19 — or previously infected with SARS-CoV-2 — poses increased risk to transfusion recipients. The strongest data come from a large recipient-linked study published in Transfusion in 2025 (Roubinian et al.). Investigators examined 7,773 transfusion recipients across 8,715 hospitalizations, directly linking over 34,000 plasma and platelet units to donor vaccination and infection status. They assessed the outcomes people worry about most: thrombosis, increased respiratory support, and hospital mortality. They found no association — not with vaccinated donors, not with previously infected donors, not with recent vaccination, recent infection, or high antibody titers (Roubinian et al., Transfusion, 2025). Concerns about transfusion-transmitted SARS-CoV-2 have likewise failed to materialize.
While viral RNA can be transiently detected in blood during infection, infectious virus has not been recovered, and no cases of transfusion-transmitted COVID-19 have been documented. This is why donor vaccination status is not tracked or used in blood allocation. So when patients request “non-vaccinated blood,” they are not asking for something safer. They are asking for something different, based on a belief that the data do not support.

What Directed Donation Is Actually For

Directed donation exists — but for narrow medical reasons, not reassurance. Historically, it was used before modern infectious disease testing. Today, it is reserved for specific clinical indications, such as:

- Patients with rare blood types or antigen profiles
- Situations where compatible community donors are unavailable
- Selected pediatric or immunologic scenarios where compatibility constraints are real

Outside of these circumstances, directed donation does not improve safety. In fact, it often makes things worse. A 2025 multidisciplinary consensus analysis in Annals of Internal Medicine (Jacobs et al.) concluded that directed donation for nonmedical reasons — such as donor vaccination status or personal belief — introduces patient safety risks, operational burden, and societal harm without evidence of benefit.

Why Directed Donation Increases Risk and Cost (Even When Everyone Means Well)

The most persistent misconception about directed donation is that it is, at worst, harmless. It is not. Directed donation systematically increases risk, cost, and error — and it does so in predictable ways.

First, donor risk. Directed donations disproportionately rely on first-time donors, who have consistently higher rates of infectious disease marker positivity than repeat community donors (Dorsey et al., Transfusion, 2013).
In addition, directed donors are often under emotional or social pressure, which reduces the accuracy of donor health-history reporting — critical because all testing has a window period (Jacobs et al., Ann Intern Med, 2025).

Second, immunologic risk. When directed donors are family members, additional hazards appear: HLA alloimmunization, transfusion-associated graft-versus-host disease (necessitating irradiation), TRALI risk, and complications relevant to future transplantation or pregnancy (Jacobs et al., 2025; Weaver et al., Pediatrics, 2023). Community blood is deliberately immunologically “boring.” Directed blood is not.

Third, error and logistics. Modern transfusion safety depends on standardization. Directed units require special scheduling, labeling, tracking, storage, and coordination across multiple systems. Each deviation from routine workflow increases the risk of mislabeling, misidentification, expiration, delay, or waste. This is a human-factors problem, not a personnel problem (Jacobs et al., 2025).

Fourth, reliability. Directed donation assumes ideal timing: donors qualify, donate on schedule, units clear testing, surgeries proceed as planned, and blood needs match exactly. In reality, donors are deferred, units expire, surgeries change, and emergencies don’t wait. When directed units fail, patients still receive community blood — often under more urgent conditions.

Fifth, cost. Directed donation is substantially more expensive: additional recruitment, separate processing and inventory, irradiation, staff time, and higher wastage rates. Who pays is often unclear — the patient, the hospital, the blood center, or all three. There is no evidence these costs improve outcomes (Jacobs et al., 2025).

Finally, system-level harm. Blood is a shared resource. Normalizing directed donation diverts donors from the community supply, worsens shortages, delays care, and privileges patients with social capital and access.
It also implicitly validates misinformation — suggesting that some donors’ blood is inherently safer without evidence.

Where Autonomy Applies — and Where It Does Not

This is where the ethical line must be drawn clearly. Religious objection to blood transfusion itself is ethically valid. Competent adults may refuse blood products entirely, even if refusal carries serious risk. That is autonomy. But autonomy does not extend to requesting blood from donors with preferred personal characteristics absent medical necessity. Religion and moral frameworks may motivate people to donate blood altruistically to the community supply (Maghsudlu & Nasizadeh, 2011; Gillum & Masters, 2010). They do not create a right to receive blood from a chosen category of donors. Once belief-based donor preferences are accommodated, medicine implicitly endorses them. That opens the door to discriminatory requests — vaccination status today, race or gender tomorrow — and undermines decades of ethical progress in transfusion medicine (Jacobs et al., 2025). Respecting patients does not require validating unfounded fears or restructuring safety systems around them.

The Uncomfortable Truth

What made this case difficult wasn’t the policy — it was the discomfort. Saying no feels unkind. Especially when requests are reframed in ethical language. Especially when anxiety is real. Especially when the temptation is to say, “Why not just this once?” But “just this once” is never neutral. Every exception teaches something: about evidence, about safety, about whose fears medicine will legitimize. Transfusion medicine exists precisely because we learned — often painfully — that systems protect patients better than intentions. So yes, we said no. Twice. Not because we dismiss religion. Not because we don’t care about comfort. But because our ethical obligation is to protect patients, preserve trust in the blood supply, and practice medicine grounded in evidence — not fear.
And sometimes, that means holding the line clearly, calmly, and without apology.

References

Roubinian NH, Greene J, Spencer BR, et al. Blood donor SARS-CoV-2 infection or vaccination and adverse outcomes in plasma and platelet transfusion recipients. Transfusion. 2025;65(3):485–495. doi:10.1111/trf.18159

Jacobs JW, Booth GS, Lewis-Newby M, et al. Medical, societal, and ethical considerations for directed blood donation in 2025. Annals of Internal Medicine. 2025;178:1021–1026. doi:10.7326/ANNALS-25-00815

Dorsey KA, Moritz ED, Steele WR, et al. A comparison of HIV, HCV, HBV, and HTLV marker rates for directed versus volunteer blood donations to the American Red Cross, 2005–2010. Transfusion. 2013;53:1250–1256. doi:10.1111/j.1537-2995.2012.03904.x

Weaver MS, Yee MEM, Lawrence CE, Matheny Antommaria AH, Fasano RM. Requests for directed blood donations. Pediatrics. 2023;151(3):e2022058183. doi:10.1542/peds.2022-058183

Maghsudlu M, Nasizadeh S. Iranian blood donors’ motivations and their influencing factors. Transfusion Medicine. 2011;21(4):247–255. doi:10.1111/j.1365-3148.2011.01077.x

Gillum RF, Masters KS. Religiousness and blood donation: findings from a national survey. Journal of Health Psychology. 2010;15(2):163–172. doi:10.1177/1359105309345171

  • Extracorporeal Photopheresis Schedules: A Practical Guide for Trainees

Schedules, Evidence, and Real-World Alternatives

One of the most common questions I get from residents rotating through apheresis or transplant is deceptively simple: “How often do we do extracorporeal photopheresis?” The honest answer is: it depends — and not in a hand-wavy way. ECP schedules vary by disease, acuity, and goals of therapy, and the evidence actually supports very different approaches for acute GVHD, chronic GVHD, and cutaneous T-cell lymphoma. Add in newer targeted agents like ruxolitinib and belumosudil, and the question becomes not just how often, but why ECP at all. Let’s walk through what we know, what we don’t, and how to explain this clearly to trainees.

First: What an “ECP cycle” actually means

Before getting into frequency, it helps to define the unit of treatment. Traditionally, one ECP cycle = treatment on two consecutive days. This convention dates back to the original FDA-approved protocols for cutaneous T-cell lymphoma and has persisted across indications. UK consensus statements and most international guidelines still define ECP this way — whether the cycles are weekly, every two weeks, or monthly. Importantly, this two-day structure is not based on randomized comparisons showing superiority over alternate-day or single-day schedules. It’s a mix of historical precedent, logistics, and immunologic plausibility: delivering two closely spaced infusions of apoptotic, photoactivated leukocytes may amplify the tolerogenic signal that drives regulatory T-cell expansion. There are data supporting single-day, higher-volume ECP protocols — especially when access, staffing, or infection risk is a concern — but we do not have evidence that every-other-day (QOD) schedules improve outcomes. In practice, QOD would increase patient burden without a demonstrated benefit.
    So when residents ask, “Why two days in a row?” the most accurate answer is: because that’s how ECP has been studied, standardized, and operationalized—not because it’s the only biologically plausible option.

    Acute GVHD: Intensive up front, then stop

    For acute GVHD, the signal is fairly consistent across studies: front-load the intensity. Most consensus guidelines support:

    - Weekly ECP, usually as two consecutive days per week
    - For about 8 weeks
    - With no routine maintenance once a response is achieved

    Real-world and pediatric studies vary in how aggressively they start—some using twice-weekly or even three-times-weekly treatments early on—but the theme is the same: hit hard early, then taper or discontinue. Response rates across these studies fall in the 55–65% range early, with higher cumulative response by 8–12 weeks. The key teaching point for trainees is this: acute GVHD behaves like an inflammatory emergency. ECP works best when used intensively and early—not as a slow burn.

    Chronic GVHD: Lower intensity, much longer runway

    Chronic GVHD is a different disease biologically and clinically, and ECP schedules reflect that. Typical regimens include:

    - Two consecutive days every 2 weeks
    - Tapering to monthly treatments based on response
    - Over 12–18 months, sometimes longer

    Large series using every-2-week schedules report response rates approaching 80–90%, especially for skin and mucocutaneous disease. Importantly, longer duration of therapy appears to correlate with better outcomes, even when early responses are modest. This is a critical mindset shift for residents: chronic GVHD is not about rapid control—it’s about sustained immune retraining. Stopping ECP too early is one of the most common reasons for perceived “failure.”

    CTCL / Sézary syndrome: Slow and steady

    For cutaneous T-cell lymphoma, ECP remains a preferred therapy in major guidelines, either alone or in combination.
    The classic approach is:

    - Two consecutive days every 2–4 weeks
    - With the expectation that responses take months, not weeks

    This is often frustrating for trainees (and patients), but it mirrors the biology of the disease. CTCL responds to cumulative immunomodulation, not rapid cytoreduction.

    “If ruxolitinib works so well… why ECP?”

    This is the question residents are really asking now. Ruxolitinib is FDA-approved and guideline-endorsed as first-line therapy for steroid-refractory acute and chronic GVHD. Belumosudil has strong data in later-line chronic GVHD. So where does ECP fit? The short answer: toxicity, durability, and complementarity.

    - Ruxolitinib (JAK1/2 inhibition) is highly effective but commonly causes cytopenias and increases infection risk.
    - Belumosudil (ROCK2 inhibition) targets fibrosis and immune imbalance, and is particularly useful in sclerotic chronic GVHD.
    - ECP, by contrast, is remarkably safe—minimal cytopenias, low infection risk, and steroid-sparing over time.

    That safety profile matters. ECP is often favored:

    - When cytopenias limit ruxolitinib
    - When infections are active or recurrent
    - As combination therapy, where emerging data suggest better long-term control than ruxolitinib alone

    In other words, ECP isn’t obsolete—it’s strategic.

    What I tell residents to remember

    If I had to distill this into a few teaching pearls:

    - ECP is not one schedule—it’s a framework.
    - Acute GVHD → intensive, short-term. Chronic GVHD → prolonged, maintenance-oriented.
    - Two consecutive days is convention, not dogma.
    - ECP’s value is safety, durability, and synergy—not speed.

    And perhaps most importantly: if you’re asking how often to do ECP, you’re already asking the right question. The answer lives at the intersection of disease biology, patient tolerance, and what you’re trying to achieve.
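The indication-specific framework described above can be summarized in a small lookup table. This is a teaching sketch only: the schedule strings restate the approximations discussed in this post, not a clinical protocol, and the function name is my own.

```python
# Illustrative only: schedule parameters as summarized in the post above.
# Teaching approximations, not a clinical protocol.
ECP_SCHEDULES = {
    "acute_gvhd": {
        "cycle": "2 consecutive days",
        "frequency": "weekly",
        "typical_duration": "~8 weeks",
        "maintenance": "none once response achieved",
    },
    "chronic_gvhd": {
        "cycle": "2 consecutive days",
        "frequency": "every 2 weeks, tapering to monthly",
        "typical_duration": "12-18 months, sometimes longer",
        "maintenance": "taper guided by response",
    },
    "ctcl": {
        "cycle": "2 consecutive days",
        "frequency": "every 2-4 weeks",
        "typical_duration": "months; responses are slow",
        "maintenance": "continue per response",
    },
}

def describe_schedule(indication: str) -> str:
    """Return a one-line teaching summary for an indication."""
    s = ECP_SCHEDULES[indication]
    return f"{s['cycle']}, {s['frequency']} ({s['typical_duration']})"
```

The point of structuring it this way is the pearl itself: ECP is one cycle definition (two consecutive days) applied at very different frequencies and durations depending on disease biology.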

  • Thrombosis and Extracorporeal Photopheresis: What the Risk Actually Looks Like

    Extracorporeal photopheresis (ECP) has one of the best safety reputations in procedural medicine. It’s been used for decades. Hundreds of thousands of treatments. Indications ranging from cutaneous T-cell lymphoma to chronic graft-versus-host disease. And yet, every so often, the same question resurfaces: does ECP increase the risk of thrombosis?

    The short answer is: there is a signal, but it’s small, context-dependent, and often misunderstood. The longer answer is more interesting—and more useful.

    Where the concern comes from

    In 2018, the FDA issued a letter to healthcare providers warning of reported cases of venous thromboembolism (VTE), including pulmonary embolism, in patients undergoing ECP with the THERAKOS CELLEX system. That sentence alone has done a lot of quiet work over the years. What often gets lost is why the FDA issued the letter and what it actually said. The warning was based on post-marketing reports, not on prospective trials or large cohort studies. The FDA described seven pulmonary emboli and two deep vein thromboses, all occurring in patients treated for chronic GVHD. Two of the pulmonary emboli were fatal. The mean time to event was about 1.7 days, leading to the phrasing that events occurred “during or shortly after” treatment sessions.

    Importantly, the FDA did not conclude that ECP causes thrombosis. The language was careful: ECP may increase risk, based on timing and clustering in a vulnerable population. That distinction matters.

    What the published literature shows (and doesn’t)

    If you go looking for thrombosis in the ECP literature, you’ll find… very little.
    Across more than 30 years of published experience:

    - Thrombotic events are rare
    - Most reported cases are catheter-associated, not systemic
    - Large case series and reviews consistently emphasize ECP’s excellent safety profile
    - Coagulation parameters remain stable during treatment, even with long-term therapy
    - Laboratory studies show platelet activation after UVA/8-MOP exposure—but without aggregation or downstream thrombotic effects

    In pediatric cohorts, multicenter studies, and long-term follow-up reports, thrombosis appears as an isolated complication, not a recurring pattern. That doesn’t mean the FDA signal was wrong. It means the signal exists in a space the literature hasn’t fully interrogated.

    The missing denominator problem

    One of the hardest things about post-marketing safety signals is that they arrive without context. We don’t know:

    - How many total ECP treatments occurred during the reporting window
    - Whether events clustered around central venous access
    - How immobility, inflammation, infection, or baseline hypercoagulability contributed
    - Whether similar patients not receiving ECP had comparable short-term VTE rates

    And chronic GVHD patients—who made up all reported cases—already carry a high baseline risk of thrombosis. When a population is fragile enough, even a neutral intervention can appear suspicious if you look only at timing.

    So where does that leave us?

    A reasonable, evidence-based position looks something like this:

    - ECP is not a high-thrombosis procedure
    - There is a small regulatory safety signal, concentrated in a very high-risk population
    - Timing alone does not establish causality
    - Access-related thrombosis likely explains a meaningful fraction of reported events
    - Clinicians should remain alert—but not alarmist

    This is not a story of a dangerous therapy being uncovered. It’s a story of how safety signals emerge, how they should be interpreted, and how nuance gets flattened over time.

    Why this matters

    ECP is often used when options are limited.
    Overstating risk can quietly narrow access to a therapy that is otherwise well-tolerated and effective. At the same time, ignoring regulatory signals entirely isn’t good medicine either. The work, as always, is in the middle: understanding who might be at risk, when vigilance matters most, and how to contextualize rare events without letting fear do the thinking.

    Bottom line: if thrombosis were a common or intrinsic complication of ECP, we would know by now. What we have instead is a small, signal-level warning that deserves clarity—not amplification. And clarity is something we can still build.
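The missing-denominator point above can be made concrete with a toy calculation. The nine events (seven PE, two DVT) come from the 2018 FDA letter discussed in this post; the treatment counts below are invented purely to show how the same numerator implies wildly different risks.

```python
# Toy illustration of the missing-denominator problem: the 2018 FDA letter
# described 9 thrombotic events, but the total number of ECP treatments in
# the reporting window is unknown. All denominators below are invented.
reported_events = 9  # 7 PE + 2 DVT, per the FDA letter

def rate_per_10k(events: int, treatments: int) -> float:
    """Events per 10,000 treatments for an assumed denominator."""
    return events / treatments * 10_000

# The same 9 events span a 50-fold range of apparent risk:
for assumed_treatments in (10_000, 100_000, 500_000):
    print(f"{assumed_treatments:>7} treatments -> "
          f"{rate_per_10k(reported_events, assumed_treatments):.2f} per 10,000")
```

This is the entire argument in three lines of arithmetic: without a denominator, a cluster of reports cannot be converted into a rate, and any apparent rate is an artifact of the denominator you assume.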

  • When to Culture a Product: AABB vs BEST Guidelines

    How the BEST Criteria Updated a Decade-Old AABB Approach to Septic Transfusion Reactions

    One of the most uncomfortable questions in transfusion medicine is deceptively simple: when should we culture the patient and the blood product after a transfusion reaction? Culture too often, and you trigger false positives, unnecessary lookbacks, and wasted resources. Culture too conservatively, and you risk missing a true septic transfusion reaction—one of the most dangerous complications we manage. For years, many institutions have relied on guidance from an AABB Association Bulletin published in 2014. But in 2019, a large multicenter study fundamentally challenged whether those criteria are sensitive enough for real-world practice. This post walks through what changed, why it matters, and what the tradeoff actually is.

    The AABB 2014 Bulletin: Safety Through Clinical Vigilance

    The 2014 AABB Association Bulletin on suspected bacterial contamination of platelets was written with a clear goal: don’t miss sepsis. Its framework is intentionally broad and clinically driven. In short, it recommends investigation when:

    - A patient develops fever ≥38°C with a ≥1°C rise, plus at least one associated symptom (rigors, hypotension, tachycardia, dyspnea, etc.), or
    - There is any clinical change that raises concern for sepsis—even without fever

    Importantly, the bulletin acknowledges:

    - Fever may be absent in neutropenic or immunosuppressed patients
    - Antipyretics may blunt temperature rise
    - Symptoms may be delayed

    This guidance reflects its era. In 2014, the dominant concern was under-recognition of septic transfusion reactions, especially with gram-positive organisms. The solution was education, vigilance, and a low threshold to act. What the bulletin did not do was define:

    - Objective thresholds for hypotension or tachycardia
    - How to systematically account for antipyretic use
    - How well these criteria actually perform in practice

    That gap mattered more than we realized.
    The Problem: How Well Do the AABB Criteria Actually Work?

    In 2019, investigators from the BEST (Biomedical Excellence for Safer Transfusion) Collaborative asked a hard question: if we apply the AABB criteria to real-world transfusion reactions, how many culture-positive cases do we actually detect? Using data from nearly 800,000 transfusions across 20 centers, they found that the answer was… not many. When evaluated empirically:

    - The AABB criteria detected only ~40% of culture-positive reactions
    - The majority of reactions that ultimately yielded positive cultures never met AABB triggers
    - Reliance on fever and subjective symptom reporting was a major limitation

    In other words, the system was doing exactly what it was designed to do—but that design was missing cases.

    The BEST Criteria: Trading Specificity for Sensitivity (On Purpose)

    Rather than discarding the AABB framework, the BEST investigators asked: what small, evidence-based changes would catch more cases? They tested three modifications, all of which improved detection:

    1. Isolated high fever counts. A temperature ≥39°C with a ≥1°C rise triggered culture even without other symptoms. Why? Because multiple international criteria already recommended this—and AABB did not.

    2. Objective vital sign definitions. Instead of relying on checkbox reporting, hypotension required both an absolute BP threshold and a percentage drop, and tachycardia required ≥100 bpm and a significant increase from baseline. This mattered because provider-reported vital sign abnormalities were frequently inaccurate.

    3. Antipyretics matter. If a patient received antipyretics before transfusion, absence of fever could not be used to rule out sepsis when other concerning signs were present. This was not a philosophical change—it reflected basic physiology.

    Did It Work? Yes—and predictably.
    When all three modifications were combined into the BEST criteria:

    - Sensitivity improved to ~70–75%
    - Specificity decreased to ~45%
    - Crucially: there were no cases detected by AABB that BEST missed

    In other words, BEST caught substantially more potential septic reactions—at the cost of more cultures and more false positives. This was not an accident. It was a conscious tradeoff.

    The Real Debate: False Positives vs Missed Sepsis

    Critics of broader culturing thresholds often raise legitimate concerns:

    - Positive product cultures trigger supplier notification
    - Co-components may be quarantined or destroyed
    - Many positive cultures do not correlate with patient infection

    All of that is true. But the BEST authors make a different argument: in a passive surveillance system, missing cases is the greater danger. Septic transfusion reactions are rare, difficult to adjudicate, and often masked by critical illness. Fever is unreliable. Cultures are imperfect. But hypotension requiring pressors, shock, or unexplained deterioration are not benign signals, even when temperature is normal. The BEST criteria reflect a shift from “culture when sepsis is obvious” to “culture when sepsis is plausible and high-risk.”

    Where This Leaves Us

    The AABB 2014 bulletin is not wrong. It is incomplete by modern standards. The BEST criteria don’t replace clinical judgment—they formalize what experienced clinicians already know:

    - Fever is not required for sepsis
    - Antipyretics obscure key signals
    - Objective thresholds matter
    - Sensitivity matters more than comfort when stakes are high

    Institutions now face a choice: accept fewer cultures and higher miss rates, or accept more cultures to reduce the risk of missing a true septic transfusion reaction. That choice is about risk tolerance, not right vs wrong. But it should be made using current evidence, not decade-old assumptions.

    Bottom line: if you are still relying solely on fever-centric AABB criteria from 2014, you are almost certainly missing cases.
The BEST criteria offer a data-driven update that reflects how septic transfusion reactions actually present — messy, masked, and dangerous. In transfusion medicine, that tradeoff is worth naming out loud.
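The trigger logic described above can be sketched as a screening function. The fever thresholds (≥39°C with a ≥1°C rise for isolated fever; ≥38°C with a ≥1°C rise for the classic criterion) and the ≥100 bpm heart-rate cutoff come from this post; the specific hypotension cutoffs (SBP < 90 mmHg plus a >20% drop) and the 20 bpm rise for tachycardia are illustrative assumptions, since the published criteria define these objectively but the exact numbers are not reproduced here.

```python
def best_culture_trigger(
    temp_c: float, baseline_temp_c: float,
    systolic_bp: float, baseline_systolic_bp: float,
    heart_rate: float, baseline_heart_rate: float,
    antipyretics_given: bool,
) -> bool:
    """Sketch of BEST-style culture triggers, not the published rule verbatim.
    Hypotension cutoffs (90 mmHg absolute, 20% relative drop) and the 20 bpm
    tachycardia rise are assumed values for illustration."""
    fever_rise = temp_c - baseline_temp_c
    # Modification 1: isolated high fever (>=39 C with >=1 C rise)
    # triggers culture even without other symptoms.
    if temp_c >= 39.0 and fever_rise >= 1.0:
        return True
    # Modification 2: objective vital-sign definitions, requiring both an
    # absolute threshold and a change from baseline (numbers assumed).
    hypotension = (systolic_bp < 90.0
                   and systolic_bp <= 0.8 * baseline_systolic_bp)
    tachycardia = (heart_rate >= 100.0
                   and heart_rate - baseline_heart_rate >= 20.0)
    objective_sign = hypotension or tachycardia
    # Classic AABB-style trigger: fever (>=38 C, >=1 C rise) plus a sign.
    febrile = temp_c >= 38.0 and fever_rise >= 1.0
    if febrile and objective_sign:
        return True
    # Modification 3: after antipyretics, absence of fever cannot rule out
    # sepsis when objective concerning signs are present.
    if antipyretics_given and objective_sign:
        return True
    return False
```

Note how the structure mirrors the tradeoff in the post: every added branch widens the net (more sensitivity, more cultures), and nothing detected by the fever-plus-symptom rule is lost.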

©2023 by Caitlin Raymond. Powered and secured by Wix
