
Search Results


  • How I Actually Use AI: A Case for Augmented Intelligence

    The discourse has two settings, and both are wrong

    Pick up any piece about artificial intelligence in medicine and you will find one of two arguments. Either AI is going to revolutionize clinical practice by automating diagnosis and replacing physician judgment, or AI is a dangerous, hallucinating black box that no responsible clinician should touch. Both camps are loud. Both camps are largely arguing past the actual experience of physicians who use these tools.

    The thing both arguments have in common is that they imagine AI as an autonomous agent — something that acts independently, makes decisions, and produces outputs you simply accept or reject wholesale. That framing drives the fear and the hype equally. And it doesn't describe how I use AI, or how I think physicians should use it.

    There's a better frame. It's called augmented intelligence, and the distinction matters.

    What augmented intelligence actually means

    Augmented intelligence is not a euphemism for AI with better PR. It describes a specific relationship between the human and the tool: the AI amplifies your thinking, your drafting, your analysis — and you retain intellectual ownership of the output. You are the decision-maker. You direct the work. You evaluate what comes back and correct it when it's wrong. The AI doesn't publish anything. You do.

    This is meaningfully different from autonomous AI, which operates independently and generates outputs without ongoing human oversight. The distinction isn't just philosophical — it has real implications for how you build your workflow, how you evaluate output quality, and where accountability sits.

    In augmented intelligence, accountability never leaves the physician. That's not a limitation. It's the point.

    What this looks like in practice

    I use AI tools daily. I use Claude for writing and coding: editing blog posts, structuring arguments, generating diagrams, iterating on prose. I use Gemini for personal assistant tasks — scheduling, reminders, quick lookups. Different tools, different jobs, same underlying principle.

    When I'm drafting a post, I bring the idea, the clinical knowledge, the interpretive framework, and the editorial judgment. Claude proposes structure, generates prose, and produces things like SVG diagrams that I couldn't efficiently produce by hand. I read everything. I correct errors — and there are always errors, some subtle. I rewrite passages that don't sound like me or aren't correct. I verify factual claims against primary sources.

    The post that goes up is mine. The thinking is mine. The AI accelerated the production of a written artifact that represents my analysis. It did not perform the analysis.

    This workflow is only valuable if I maintain that discipline. The moment I start publishing AI output I haven't critically evaluated, I've stopped practicing augmented intelligence and started practicing something more like delegation to a very fluent but unreliable assistant. Those are not the same thing.

    The oversight imperative

    Anyone who works in laboratory medicine already understands this intuitively, even if they haven't applied the framework to AI.

    We do not report analyzer results without understanding what the analyzer did. We run QC. We investigate flags. We understand the assay's limitations, its interference profile, the conditions under which it fails. When a result looks wrong, we don't shrug and report it — we investigate. The instrument is a tool. We are responsible for the result.

    AI output requires exactly the same critical scrutiny. The distinctive failure mode of large language models is not that they produce obviously garbled output — it's that they produce fluent, confident, plausible-sounding output that is wrong. A traditional analyzer error usually looks like an error. An AI hallucination often doesn't. It reads like a normal sentence. It cites a study that doesn't exist in the same register as one that does.

    This is why oversight isn't optional. It's not a hedge for cautious people. It's the minimum standard for using the tool responsibly. If you're accepting AI output without evaluating it, you're not practicing augmented intelligence. You're practicing something with no quality control, and in medicine, we know exactly how that ends.

    The case for engaging now

    I understand the instinct to wait. The tools are changing fast. The evidence base for clinical AI is immature. The regulatory landscape is unclear. Sitting it out feels like the prudent move.

    But physicians who opt out aren't avoiding risk — they're just outsourcing the learning curve. Someone is going to set the norms for how AI gets used in your institution, your specialty, your practice environment. It will either be clinicians who have hands-on experience with the tools and understand their limitations, or it will be administrators, vendors, and policy-makers who don't see patients.

    The physicians who engage critically now — who build workflows with real oversight, who learn where the tools fail, who can articulate what responsible use actually looks like — are the ones who will be positioned to shape those norms. The ones who wait will have AI handed to them later, implemented by people who weren't asking the right questions.

    I'd rather be in the first group. I'd rather have colleagues in medicine who are in the first group.

    Augmented intelligence, done right, is not about ceding judgment to a machine. It's about using a powerful tool with the same rigor we bring to every other tool in medicine. We validate. We monitor. We maintain accountability. That's not fear-mongering and it's not hype. It's just good practice.

  • Granulocyte Transfusions for the Overworked Fellow

    The patient you can't ignore

    Picture the consult. Profound neutropenia — ANC in the double digits. Documented fungal infection. Forty-eight hours of broad-spectrum antifungals and still febrile. The primary team is running out of moves.

    Someone suggests granulocyte transfusions.

    You nod. You place the consult. You mobilize a donor. And somewhere in the back of your mind, a small voice asks: does this actually work?

    That voice deserves an answer. The honest answer, unfortunately, is that we're not sure.

    Why the idea makes sense

    The logic is clean. Neutrophils kill bacteria and fungi. If a patient has no neutrophils — from chemotherapy, from bone marrow failure, from a primary immunodeficiency like chronic granulomatous disease — they can't mount an effective innate immune response. So we give them neutrophils from the outside.

    It's the same rationale as any component transfusion: if the patient can't make enough of something critical, and the deficit is causing harm, we try to make up the difference. We do it with red cells. We do it with platelets. Why not neutrophils?

    The problem is that logic and evidence are different things. And in transfusion medicine, we have a long history of confusing the two.

    The evidence, such as it is

    To be clear: we have been trying to answer this question for a long time. There are decades of trials in the granulocyte literature. The field has not been idle. The issue is not a lack of effort — it's that the evidence we've accumulated is genuinely hard to interpret.

    Early trials from the 1970s and 1980s showed some promising signals, but they were small, underpowered, and conducted before the era of modern antimicrobial therapy. Patient populations were heterogeneous. Organisms were different. Underlying diseases were different. Comparing across trials is difficult, and drawing conclusions from any individual one is precarious.

    More recently, the RING trial — the Resolving Infection in Neutropenia with Granulocytes trial — made a serious attempt to answer the question with a properly designed randomized controlled trial. It was larger and more rigorous than anything that came before. It had a mortality endpoint. It was the study the field needed.

    It did not show a survival benefit.

    But here's where honest interpretation matters. The RING trial's negative result doesn't necessarily mean granulocytes don't work. The trial faced a fundamental problem: dose. The doses actually delivered to patients were lower than what was considered potentially therapeutic, in part because of the inherent variability in granulocyte collection. Donors were stimulated with G-CSF and dexamethasone, yields varied between donors, and there was no reliable way to guarantee a therapeutic dose on any given day. If you can't reliably deliver the intervention, you can't interpret the result — at least not cleanly.

    This is not a minor methodological quibble. It goes to the heart of what the trial can and cannot tell us. RING is the best evidence we have. It is also evidence that came with a major confounder baked in.

    The survival curves didn't look dramatically different. The microbiological response data were encouraging in some subgroups and not in others. Secondary endpoints were mixed. You can read the RING trial and come away thinking granulocytes failed a fair test, or you can come away thinking the test itself wasn't quite fair. Both readings are defensible.

    We have not arrived at a definitive answer. We may not for a long time.

    The amphotericin rule nobody can fully justify

    If you've ever been involved in a granulocyte course, you've heard this: separate the granulocytes from the amphotericin. Don't give them at the same time. Space them out — 12 hours if you can.

    This is institutional gospel in most centers that do granulocyte transfusions. It's in the AABB Technical Manual. People follow it without question.

    Here's what it's actually based on: one paper from 1981 describing pulmonary toxicity in patients who received concurrent granulocytes and amphotericin B. One paper. There were also some in vitro and animal data that suggested a plausible mechanism. That was enough to generate a widespread practice recommendation.

    What happened next is instructive. Subsequent clinical studies — multiple of them — tried to confirm this finding and couldn't. The signal didn't replicate. Patients who received granulocytes and amphotericin close together did not consistently have worse pulmonary outcomes than those in whom the infusions were separated.

    And yet the practice persisted. The AABB Technical Manual still recommends separation. Centers still coordinate timing. Fellows still field late-night calls about when the liposomal amphotericin was given and whether there's enough of a window.

    This is how medical dogma works. A case series raises concern. The concern gets institutionalized. Later evidence fails to confirm it. The institution doesn't notice.

    To be clear: there may still be a real interaction. The absence of evidence is not evidence of absence, and the subsequent studies had their own limitations. Separating infusions is low-cost in most clinical situations. But when someone asks you why, the honest answer is: we're not entirely sure, and the original data that started this practice are weaker than the strength of the recommendation would suggest.

    A dose we mostly extrapolated

    The conventional therapeutic dose target for granulocyte transfusions is at least 1 × 10¹⁰ granulocytes per transfusion. This number comes from dose-response analyses suggesting that below this threshold, there's minimal ANC increment and possibly minimal clinical effect.

    There are a few problems with this.

    First, collection yields are highly variable. Donors are stimulated with G-CSF and dexamethasone before apheresis, which significantly increases peripheral neutrophil counts and therefore collection efficiency. But even with stimulation, yields vary substantially between donors. Hitting the 1 × 10¹⁰ target is not guaranteed. The RING trial demonstrated this empirically — actual delivered doses in the trial were often below what was intended.

    Second, the dose target itself is derived from indirect data. We're using ANC increment as a surrogate for clinical effect, which assumes the transfused neutrophils are functioning effectively after infusion and trafficking to sites of infection. There's evidence they do — labeled granulocytes have been shown to migrate to infection sites — but this is distinct from demonstrating that the dose-response relationship for ANC increment maps neatly onto a dose-response relationship for survival.

    Third, we dose by weight (roughly 0.6 × 10⁹ cells/kg as a lower threshold), but we collect a product whose yield is largely determined by donor biology. You can stimulate better. You can select donors with high baseline neutrophil counts. But you can't fully control what you get. The mismatch between what we target and what we deliver is a persistent feature of granulocyte therapy, not a solvable logistics problem.

    What to do with all this uncertainty

    Granulocyte transfusions are still used. At centers with the infrastructure to collect and process them — which is not everywhere — they remain an option for patients with severe neutropenia and refractory infections, particularly in the setting of primary immunodeficiencies or when marrow recovery is anticipated. The biological rationale is sound. The clinical experience is real, even if it's hard to quantify in controlled trials.

    But we should be honest about what we're doing when we order them. We're making a judgment call in the face of genuine uncertainty. We're not executing a protocol backed by level-one evidence. We're doing what makes mechanistic sense for a patient who is out of other options, knowing that our best randomized trial couldn't definitively prove benefit.

    That's okay. Clinical medicine involves a lot of this. The problem isn't uncertainty — it's the pretense of certainty. The fellow who confidently states that granulocytes improve survival is wrong. The fellow who confidently states they don't is also wrong. The right answer is that we tried hard to find out, the trial had a fatal flaw in its ability to deliver the intervention reliably, and we're still waiting for better data.

    Knowing the limits of the evidence is not a failure of clinical knowledge. It is the clinical knowledge.
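The dose arithmetic above (a flat target of at least 1 × 10¹⁰ granulocytes per transfusion, and a weight-based floor of roughly 0.6 × 10⁹ cells/kg) can be sketched in a few lines of Python. This is an illustration of the thresholds discussed, not a clinical tool; the function name and example numbers are hypothetical.

```python
# Illustrative sketch of the conventional granulocyte dose thresholds
# discussed above. Not a clinical tool: the thresholds are the commonly
# quoted figures, and the example numbers are made up.

FLAT_TARGET = 1.0e10    # >= 1 x 10^10 granulocytes per transfusion
PER_KG_FLOOR = 0.6e9    # >= 0.6 x 10^9 granulocytes per kg

def assess_dose(collected_yield: float, patient_weight_kg: float) -> dict:
    """Compare a collection yield against both conventional thresholds."""
    per_kg = collected_yield / patient_weight_kg
    return {
        "dose_per_kg": per_kg,
        "meets_flat_target": collected_yield >= FLAT_TARGET,
        "meets_per_kg_floor": per_kg >= PER_KG_FLOOR,
    }

# A hypothetical 70 kg patient and a 5 x 10^9 yield: short of both
# thresholds, the kind of shortfall the RING trial saw in practice.
low_yield = assess_dose(5.0e9, 70.0)
```

The point the post makes survives the arithmetic: whether a given collection "counts" as therapeutic depends on which threshold you apply, and on a yield the collector cannot fully control.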

  • Transfusion Medicine: The Invisible Consult Service

    There is a particular kind of email that transfusion medicine physicians learn to recognize. It arrives a day or two after an event — a transfusion reaction, a complicated crossmatch, a patient with antibodies nobody quite knew what to do with. The subject line is something like "quick question" or "following up", and the body begins: "I wasn't sure if I was supposed to call you."

    You weren't sure if you were supposed to call us. This is not a failure of clinical judgment. It is a failure of visibility — and it is one of the most common problems in transfusion medicine, at almost every institution I have ever encountered.

    Transfusion medicine occupies a strange position in the hospital ecosystem. We are essential infrastructure — the blood bank is running constantly, processing samples, issuing products, catching incompatibilities before they reach patients — but we are largely invisible to the clinicians ordering the blood. We are the electrical grid. You don't think about us until the lights go out.

    The problem has two distinct roots, and they compound each other. The first is awareness. Many clinicians — including experienced hospitalists, surgeons, and intensivists — do not know that a transfusion medicine consultation service exists, or that there is a physician available to answer questions around the clock. They know there is a blood bank. They may not know there is a board-certified physician attached to the blood bank.

    The second is uncertainty about when to call. Even clinicians who know we exist often hesitate, unsure whether their situation is "bad enough" to warrant a consult. A patient ran a fever during a transfusion — is that ours? The blood bank flagged an antibody — does someone need to talk to me? There is no obvious threshold, no shared mental model of what transfusion medicine is for beyond the most catastrophic scenarios.

    The result is a gap. Reactions get managed in isolation. Antibody workups proceed without clinical context. Patients occasionally get the right outcome anyway — and occasionally don't.

    The febrile non-hemolytic transfusion reaction is a useful illustration of both problems at once. FNHTR is common, manageable, and almost never dangerous. Stop the transfusion, give acetaminophen, observe, document. Most hospitalists handle this appropriately without ever calling anyone. That's correct — FNHTR does not require a transfusion medicine consult.

    But here is where it gets complicated: FNHTR is a diagnosis of exclusion. You can only call it benign after you've ruled out the things that aren't — acute hemolytic reaction, septic transfusion reaction, early TRALI. The fever threshold matters. The hemodynamic picture matters. The timing matters. And a hospitalist who has never been walked through that differential is making a judgment call without a map.

    Most of the time, the call is right. But "most of the time" is a fragile foundation for patient safety, and the gap between "managed correctly in isolation" and "should have called us" is narrower than it looks in the moment.

    I made a resource. It's linked below — a one-page clinical reference for exactly this decision: when to call transfusion medicine, when to monitor, and what to look for in the five reactions that cannot be missed. It is not a substitute for a consult when you are unsure.

    That's the other thing I want to say plainly: uncertainty is a valid reason to call. You do not need to have a confirmed hemolytic reaction in front of you to page transfusion medicine. You just need to be unsure. That's enough.

    We exist. We are available. We want to hear from you before things go wrong — and that is not a high bar. It is just a call.

  • Your Transfusion Reaction Started in the Processing Facility

    If you trained anything like I did, you learned transfusion medicine in two separate silos. One bucket: processing. Leukoreduction, irradiation, CMV testing, storage conditions, expiration dates. The other bucket: clinical reactions. Febrile nonhemolytic transfusion reactions, allergic reactions, hypotension, TACO, TRALI. Two completely different lectures, two different shelf exam questions, two different mental filing cabinets.

    Here's the thing. They're the same story told from different ends. Every decision made during processing has a downstream clinical consequence — sometimes immediate, sometimes delayed, sometimes baked into institutional policy so old that nobody remembers why it exists. Understanding transfusion medicine means collapsing those two silos into one.

    Let me show you what I mean with four examples.

    Leukoreduction → FNHTRs and CMV

    A febrile nonhemolytic transfusion reaction, or FNHTR, is defined as a temperature of at least 38°C with a rise of at least 1°C — or rigors — occurring during or within four hours of the cessation of transfusion. Classically, we're taught that FNHTRs result from cytokine buildup in the unit. That teaching is correct, but it skips the part that makes it interesting.

    During storage, white blood cells in a blood unit don't just sit there. They die, and as they do, they release cytokines — IL-1, IL-6, TNF-α — that accumulate in the unit over time. By the time that bag of red cells or platelets hangs, it may be carrying a meaningful cytokine payload. Infuse it fast enough, and your patient spikes a fever. Not because of anything intrinsically wrong with the unit. Because you just infused a bag of inflammatory soup.

    Pre-storage leukoreduction — filtering out the white cells before storage, rather than at the bedside — eliminates the problem at its source. The cytokines never accumulate because the cells that produce them are gone. This is not a trivial distinction: universal leukoreduction significantly reduced FNHTR rates. When we moved from selective to universal leukoreduction in the early 2000s, febrile reactions dropped substantially.

    But leukoreduction's second accomplishment often gets less airtime, and it deserves more. White blood cells are the primary vector for transfusion-transmitted CMV. CMV is a herpesvirus that establishes latency in leukocytes, and in immunocompetent recipients, transfusion-transmitted CMV is generally clinically silent. In immunocompromised patients — transplant recipients, patients with HIV, premature neonates — it can be devastating.

    For decades, the solution was CMV seronegative blood: test donors, restrict CMV-negative products to high-risk recipients. The problem is that seronegative status is imperfect. Donors in the window period before seroconversion will test negative and still carry latent virus. Leukoreduction offers a mechanistically cleaner solution: remove the cells that harbor the virus, and you've addressed the problem regardless of serologic status. Current evidence supports leukoreduced blood as equivalent to seronegative blood for CMV-safe transfusion.

    One processing step. Two major clinical problems addressed.

    Bedside Filtration → Hypotensive Reactions

    Here's where it gets interesting. If leukoreduction is so effective, why does it matter when you filter?

    The shift from bedside to pre-storage leukoreduction wasn't driven purely by logistics, though the workflow advantages are real. It was also driven by a safety signal. Bedside leukoreduction filters activate the contact pathway of coagulation. That activation generates bradykinin, a potent vasodilator. In most patients, bradykinin is rapidly degraded by angiotensin-converting enzyme, or ACE. But in patients on ACE inhibitors, that degradation pathway is blocked. Bradykinin accumulates, blood pressure drops, and you have a hypotensive transfusion reaction with no fever, no urticaria, no obvious allergic trigger. The processing method determined the patient's risk profile.

    I'll come back to this one — the bradykinin story is deep enough to deserve its own post — but the principle is the same: a decision made upstream in processing showed up at the bedside.

    Storage Lesion → Neonatal Practice

    Red blood cells are not static objects. From the moment they're collected, they change. 2,3-DPG — the molecule that facilitates oxygen offloading from hemoglobin — drops within the first two weeks of storage. Potassium leaks out of the cells and accumulates in the supernatant. The cells become less deformable, less able to squeeze through small capillaries. Microparticles shed from the cell membrane. Collectively, these changes are called the storage lesion.

    In adult patients with normal physiology, the clinical significance of the storage lesion has been debated extensively. Large randomized trials — ABLE, INFORM, RECESS — have largely failed to show meaningful harm from older blood in most adult populations. The cells aren't great, but adults are fairly forgiving.

    Neonates are less so. A neonate receiving a large-volume transfusion is exposed to every consequence of the storage lesion in concentrated form. Hyperkalemia from stored red cell supernatant can trigger arrhythmias. Impaired oxygen delivery from 2,3-DPG-depleted cells matters when your patient weighs 700 grams. Deformability matters when you're perfusing vessels measured in microns.

    This is why neonatal transfusion practice looks so different from adult practice. Fresher units are preferred — the evidence that older units are truly catastrophic for neonates is less definitive than the physiologic concern might suggest, but the caution is reasonable given the stakes. Small-volume aliquots, often washed to reduce potassium load. CMV-safe products. And irradiation — which brings us to the fourth thread.

    Irradiation → TA-GvHD

    Transfusion-associated graft-versus-host disease, or TA-GvHD, is rare. It is also, when it occurs, nearly universally fatal — mortality exceeds 90%. That combination makes it one of the most important complications in transfusion medicine, and one of the clearest illustrations of why processing decisions are clinical decisions.

    Here's the mechanism. Cellular blood products contain viable donor T lymphocytes. In an immunocompetent recipient, those donor T cells are recognized as foreign and eliminated. In an immunocompromised recipient — or in certain other vulnerable populations — they aren't. The donor T cells engraft, proliferate, and begin attacking the host's tissues: skin, liver, gut, bone marrow. The host's own immune system, suppressed or naïve, cannot mount a response. The result is a graft-versus-host syndrome with no good treatment options and very few survivors.

    The at-risk populations are broader than most people initially assume. Congenital immunodeficiencies, hematologic malignancies, stem cell transplant recipients, and neonates are the obvious ones. Less obvious: patients receiving HLA-matched cellular products, or directed donations from first-degree relatives — situations where the donor and recipient share enough HLA antigens that the recipient's immune system fails to recognize the donor T cells as foreign, even in a host who is otherwise immunocompetent.

    Irradiation prevents TA-GvHD by delivering a targeted dose of gamma or X-ray radiation to the blood product, rendering donor T lymphocytes incapable of proliferation. The cells are still present — irradiation doesn't remove them — but they can't engraft and they can't divide. The threat is neutralized before the product ever reaches the patient.

    This is about as direct a processing-to-outcome link as exists in transfusion medicine. A near-universally fatal complication, preventable entirely by a modification applied hours or days before transfusion. The clinician at the bedside never touches it. The outcome depends entirely on whether the right box was checked upstream.

    The Punchline

    Processing isn't logistics. It's upstream medicine.

    The decisions made in processing — when to filter, how to store, what modifications to apply — are clinical decisions, even if the clinicians ordering transfusions rarely think of them that way. When a neonate avoids a hyperkalemic arrest, it's because someone understood the potassium curve on stored blood. When an immunocompromised patient doesn't get CMV, it's because of a filter applied hours before the product ever left the refrigerator. When a patient on lisinopril doesn't bottom out their blood pressure, it's because someone switched from bedside to pre-storage leukoreduction and understood why it mattered. When a post-transplant patient doesn't die of TA-GvHD, it's because a box got checked in a processing facility they'll never set foot in.

    The two silos were always one subject. We just taught them wrong.
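The FNHTR surveillance definition quoted in the leukoreduction section (a temperature of at least 38°C with a rise of at least 1°C, or rigors, during or within four hours of transfusion) is concrete enough to express as a predicate. This is a sketch of the screening criterion only, and the function name is mine; FNHTR is a diagnosis of exclusion, so meeting the criterion starts a workup rather than ending one.

```python
def meets_fnhtr_criterion(
    temp_c: float,
    baseline_temp_c: float,
    hours_since_transfusion: float,
    rigors: bool = False,
) -> bool:
    """Screening criterion for FNHTR as defined above: temperature of at
    least 38 C with a rise of at least 1 C from baseline, or rigors,
    during or within 4 hours of transfusion. Meeting the criterion does
    NOT make the diagnosis; hemolytic, septic, and TRALI reactions must
    be excluded first."""
    if hours_since_transfusion > 4.0:
        return False  # outside the surveillance window
    if rigors:
        return True   # rigors alone satisfy the definition
    return temp_c >= 38.0 and (temp_c - baseline_temp_c) >= 1.0

# A 38.4 C fever from a 37.0 C baseline one hour in meets the screen;
# the same fever from a 38.0 C baseline does not (rise < 1 C).
```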

  • Jehovah's Witnesses and Blood: The Guidance Changed. The Complexity Didn't.

    On March 20, 2026, the Governing Body of Jehovah's Witnesses issued Governing Body Update #2. In a video address, member Gerrit Lösch announced that members may now decide for themselves whether to have their own blood drawn, stored, and later reinfused during medical or surgical care. The prohibition on allogeneic transfusion — receiving blood from another person — remains firmly in place. But preoperative autologous deposit, long explicitly forbidden, has been moved into the "personal conscience" category. The theological rationale was concise: "The Bible does not comment on the use of a person's own blood in medical and surgical care." I've been thinking about this a lot since it dropped. Not just as a news item, but as a transfusion medicine physician who has spent years navigating the clinical and ethical complexity that Jehovah's Witness patients bring to the blood bank. This policy shift is significant. It's also worth understanding clearly — because the coverage so far has been long on theological analysis and short on what any of this actually means from where I sit. What We're Talking About, Clinically Preoperative autologous donation (PAD) is exactly what it sounds like. A patient donates their own blood — typically between six weeks and five days before a scheduled surgery — which is processed and stored at a blood bank or hospital transfusion service. If transfusion becomes necessary during or after the procedure, the patient receives their own blood back. If it isn't needed, the unit is discarded. PAD is not a new technique. It's been around for decades. Its advantages are real: no risk of alloimmunization, no risk of transfusion-transmitted infection, lower likelihood of immune-mediated transfusion reactions. Its drawbacks are also real: preoperative phlebotomy can induce or worsen anemia, and the blood still requires the same processing and storage infrastructure as allogeneic donations. It is not a casual or universally available option. 
More on that in a moment. The Conscience Zone Was Already a Patchwork Here's what I find genuinely fascinating about this update: it's being covered like a dramatic reversal, but the conscience zone was already wide before March 20th. "Conscience zone" is my shorthand for the category of practices the Watch Tower Society has long designated as individual decisions — neither mandated nor prohibited, left to each member to resolve according to their own beliefs. Intraoperative cell salvage, acute normovolemic hemodilution, cardiopulmonary bypass, dialysis, epidural blood patches — all individual-decision items for years. The zone is wider now. But it was already wide. More importantly: the official doctrine has never fully captured what actually happens in clinical practice. I've cared for Jehovah's Witness patients who would accept platelets. I've worked with patients who would accept directed donations from members of their own congregation. I've seen patients draw their own lines in places the official guidance didn't put them — navigating their faith and their medical situation in ways that were entirely their own. Jehovah's Witness patient care has always been variable, because the patients are people, not policy documents. What this update does is formalize something that experienced clinicians already knew: there is no single answer to "what will my Jehovah's Witness patient accept?" There never was. The conscience zone just got wider, which means the conversation at the bedside just got more important. What This Means for the Blood Bank So what actually changes operationally? Potentially quite a bit — for patients who want to pursue PAD and have access to it. Blood banks that offer autologous donation programs will need to be prepared for Jehovah's Witness patients presenting for preoperative collection. This isn't a simple extension of existing workflows. Autologous units carry specific labeling requirements and storage handling. 
There are consent considerations unique to this population — patients will need clear information about the anemia risk, the storage logistics, and the fact that unused units are discarded rather than entering the general blood supply. For some Jehovah's Witness patients, that last point may matter doctrinally. Surgeons and anesthesiologists planning cases involving Jehovah's Witness patients will need to update their conversations. The reflexive assumption that a Jehovah's Witness patient will decline all banked blood products is no longer accurate. These patients may now arrive at the OR with autologous units available — but only if someone asked, offered, and made the referral in time. The window for PAD is finite. A patient referred for major elective surgery with a two-week lead time cannot take advantage of this option. And that's before we get to the institutional side. Not every hospital has an autologous donation program. Not every blood bank has the capacity or infrastructure. The patients most likely to benefit are those undergoing planned, elective procedures at well-resourced academic medical centers — which is not the only place Jehovah's Witness patients receive surgical care. The Practical Limits of Personal Conscience This is where I want to pump the brakes on the more celebratory takes I've seen. The framing of this update — each Christian must decide for themselves — positions the change as an expansion of individual autonomy. And in a doctrinal sense, it is. But autonomy without access isn't really autonomy. Jehovah's Witnesses number approximately 9.2 million worldwide, across more than 200 countries. The infrastructure to support preoperative autologous donation does not exist uniformly across those settings. In much of the world, the option the Governing Body has now made permissible is simply not available. The theological door has opened, but the operational corridor behind it is narrow and unevenly distributed. 
There's also the question of social pressure, which former members have been vocal about. The update frames this as conscience — but conscience operates inside a community. The Watch Tower Society has a long history of framing individual decisions within a framework of spiritual accountability. Moving something to the "personal decision" category is not the same as removing the social weight attached to that decision. A patient who now technically may accept PAD is making that choice in a social and ecclesiastical context that still shapes what choices feel available.

That's not a reason to dismiss the update. It matters that the prohibition has been lifted. But clinical teams working with Jehovah's Witness patients should not assume that "it's now allowed" translates automatically into "patients will feel free to accept it." The conversation still requires care, privacy, and time.

Where This Leaves Us

The transfusion medicine community has spent decades developing expertise in bloodless surgical programs, autologous techniques, and the clinical and ethical navigation of Jehovah's Witness patient care. That expertise doesn't become less relevant now — if anything, it becomes more so. What this update requires from us is updated fluency: knowing what changed, understanding the practical and doctrinal distinctions that remain, and meeting patients where they actually are rather than where the policy says they could be.

The conscience zone just got wider. Our job is to help patients navigate it — without assuming the map is simpler than it is.

  • A Primer on Hereditary Hemochromatosis for the Overworked Fellow

I was reviewing charts on the hemochromatosis protocol during my transfusion medicine fellowship when I came across a patient with iron overload severe enough to require ongoing therapeutic phlebotomy — and a completely wild-type HFE panel. No C282Y. No H63D. No S65C. Just normal.

I had just finished writing the service guide, which included a brief section on HFE alleles and genotypes. I had written a sentence about this exact scenario: “Occasionally you will see patients with iron overload and a WT HFE locus. This probably means they have another type of HH.” I had written that sentence and moved on. I had no idea what it actually meant. So I went down the rabbit hole. What I found reframed everything I thought I knew about hemochromatosis — and I think it’ll do the same for you.

Hemochromatosis Is a Hepcidin Story

Here is the reframe: hereditary hemochromatosis is not, at its core, a story about HFE. It’s a story about hepcidin. Hepcidin is a small peptide produced by hepatocytes, and it is the master regulator of iron homeostasis. The mechanism is elegant. Hepcidin binds to ferroportin — the only known iron exporter in the human body — and tags it for internalization and degradation. When hepcidin is high, ferroportin disappears from the cell surface. Iron stays trapped inside enterocytes, macrophages, and hepatocytes. When hepcidin is low, ferroportin is abundant. The gut absorbs iron without restraint.

In hereditary hemochromatosis, regardless of the gene involved, the unifying pathophysiology is hepcidin deficiency relative to iron burden. The iron accumulates because the hormone that should be putting the brakes on iron absorption isn’t doing its job. HFE is not hepcidin. HFE is one of several upstream signals that tell the liver to make hepcidin in the first place. And that distinction explains everything.

The Sensing Circuit

Think of hepcidin production as the output of a sensing circuit. The liver is constantly asking: how much iron is out there?
The answer comes from multiple inputs, and several proteins are involved in integrating those signals. HFE, transferrin receptor 2 (TFR2), and hemojuvelin (HJV) all participate in sensing transferrin saturation and stimulating hepcidin expression. HJV acts as a BMP co-receptor, and both HFE and TFR2 modulate downstream BMP/SMAD signaling. Mutations in any of them produce the same functional consequence: the liver underestimates iron burden, hepcidin production is insufficient, and ferroportin runs unchecked.

HAMP is the gene that encodes hepcidin itself. Mutations here skip the sensing problem entirely — you’re not impairing the signal circuit, you’re eliminating the signal. SLC40A1 encodes ferroportin. Mutations here operate at the other end of the pathway entirely, at the effector rather than the sensor. And as we’ll get to, ferroportin disease is its own special category.

The Four Types, and Why They’re Not All the Same

Type 1 — HFE

This is the one we learn in medical school and then assume is the whole story. HFE mutations are the most common cause of HH, with C282Y homozygosity the genotype most strongly associated with clinical disease. Onset is typically in late adulthood, often amplified by additional iron-loading exposures like alcohol use or chronic ineffective erythropoiesis. Menstruating individuals are partially protected by blood losses until menopause. Penetrance is lower than we historically believed — many C282Y homozygotes never develop symptomatic disease.

Compound heterozygosity (C282Y/H63D) causes milder disease; H63D homozygosity, milder still. S65C, the least common of the HFE alleles, is associated with mild to moderate iron overload when homozygous, and a single copy is generally not enough on its own to cause clinically significant disease. A single copy of any HFE allele typically isn’t sufficient.

Type 2 — HJV or HAMP

Here is where things escalate.
Type 2, also called juvenile hemochromatosis, presents in the first or second decade of life. Type 2A involves HJV, Type 2B involves HAMP. Both are autosomal recessive, both are rare, and both are aggressive. Because iron accumulation begins in childhood, end-organ damage — particularly cardiac and endocrine — accumulates early. Without treatment, fatal cardiomyopathy by the third decade of life is not a hypothetical.

This is not a disease you find incidentally on routine iron studies in a 50-year-old. A fellow who has only ever managed Type 1 may not be thinking about HH in a young patient with unexplained iron overload, elevated transferrin saturation, and a normal HFE panel. That blind spot can have real consequences.

Type 3 — TFR2

Type 3 HH is caused by mutations in TFR2 — one of those upstream sensors feeding into the hepcidin circuit — and is intermediate in severity and onset, typically presenting in early adulthood. It is autosomal recessive and rare, with most reported cases from Mediterranean populations. Clinically it resembles Type 1 more than Type 2, though it tends to present earlier. If Type 1 is the late-night slow burn, Type 3 is the same fire with an earlier start time.

Type 4 — SLC40A1 (Ferroportin Disease)

Type 4 is the most mechanistically interesting, and the one most likely to trip you up. Type 4A is a loss-of-function mutation in ferroportin. Iron accumulates preferentially in macrophages rather than parenchymal cells, because ferroportin is how macrophages export the iron they’ve scavenged from senescent red blood cells. When ferroportin doesn’t work, that iron is trapped. Serum ferritin can be markedly elevated — because ferritin leaks from iron-laden macrophages — while serum iron and transferrin saturation are low. This is the opposite pattern from classic HH. Patients may also become anemic with phlebotomy more quickly than expected, because their macrophages can’t release stored iron to support erythropoiesis.
Type 4B is a gain-of-function mutation that makes ferroportin resistant to hepcidin. The brake exists; the car just doesn’t respond to it. This behaves more like classic HH: elevated transferrin saturation, parenchymal iron loading, and good response to phlebotomy. Both subtypes are autosomal dominant — which means a family history may be easier to elicit than in the recessive types, and a single pathogenic allele is enough.

Back to the Wild-Type

When you encounter iron overload with a normal HFE panel, the differential isn’t just “secondary causes.” Depending on the clinical picture — especially the patient’s age, the pattern of iron deposition, and family history — it’s worth asking whether you’re looking at Type 2, 3, or 4. Extended genetic testing panels exist. A hematologist or geneticist may be a useful colleague.

And then there’s the patient I encountered who had wild-type results across the full panel — not just HFE, but HJV, HAMP, TFR2, and SLC40A1 as well. No known pathogenic variant anywhere in the circuit. Just iron overload that didn’t have a name we could give it yet. The most likely explanation is a mutation in a gene we haven’t characterized — which is to say, the circuit we’ve described is probably not complete.

The bigger takeaway, though, is the same one that started this post. Hemochromatosis is a disease of hepcidin deficiency. Once you see it that way, the genetics stop feeling like rote memorization and start feeling like variations on a theme. HFE, HJV, HAMP, TFR2, SLC40A1 — they’re all part of the same story. Some are upstream sensors, one is the signal itself, one is the effector. The iron accumulates because somewhere in the circuit, the brake is broken. A wild-type HFE result doesn’t mean there’s no hemochromatosis. It means you need to look upstream, downstream — or possibly somewhere we haven’t mapped yet.
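For readers who like their genetics in tabular form, the classification above collapses into a small lookup table. A minimal Python sketch for self-study (the field names and summary phrasing are my own condensation of this post, not any standard nomenclature):

```python
# Summary of the hereditary hemochromatosis (HH) types described above.
# AR = autosomal recessive, AD = autosomal dominant. A study aid only.
HH_TYPES = {
    "1":  {"gene": "HFE",     "inheritance": "AR", "circuit_role": "upstream sensor",
           "typical_onset": "late adulthood"},
    "2A": {"gene": "HJV",     "inheritance": "AR", "circuit_role": "upstream sensor (BMP co-receptor)",
           "typical_onset": "first/second decade"},
    "2B": {"gene": "HAMP",    "inheritance": "AR", "circuit_role": "the signal itself (hepcidin)",
           "typical_onset": "first/second decade"},
    "3":  {"gene": "TFR2",    "inheritance": "AR", "circuit_role": "upstream sensor",
           "typical_onset": "early adulthood"},
    "4A": {"gene": "SLC40A1", "inheritance": "AD", "circuit_role": "effector (ferroportin, loss-of-function)",
           "typical_onset": "variable"},
    "4B": {"gene": "SLC40A1", "inheritance": "AD", "circuit_role": "effector (ferroportin, hepcidin-resistant)",
           "typical_onset": "variable"},
}

def genes_for_wildtype_hfe():
    """Genes worth considering when iron overload coexists with wild-type HFE."""
    return sorted({v["gene"] for v in HH_TYPES.values() if v["gene"] != "HFE"})
```

Calling `genes_for_wildtype_hfe()` returns `["HAMP", "HJV", "SLC40A1", "TFR2"]` — the rest of the circuit, which is exactly where the wild-type workup goes next.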

  • More Is Not More: Hepcidin and the Counterintuitive Science of Iron Dosing

In my last post on donor iron deficiency, I buried the most interesting part. Most of the piece covered what the field has established: donation depletes iron, ferritin screening is underutilized, the HEIRS and STRIDE trials make a reasonable case for supplementation, and the AABB has recommendations in place. All of that holds. But near the end, almost as a footnote, I mentioned that the original HEIRS trial used daily iron dosing — and that subsequent evidence suggests daily dosing may actually inhibit absorption by triggering the release of hepcidin. I've been thinking about that footnote ever since. It deserves more than a footnote.

A Brief Introduction to Your Iron Gatekeeper

Hepcidin is a small peptide hormone made by the liver, and its job is to regulate how much iron enters circulation. When iron stores are adequate, hepcidin is secreted, binds to ferroportin — the channel that exports iron from cells into the bloodstream — and shuts the door. When stores are low, hepcidin falls and the door opens. It is an elegant feedback loop, and under normal circumstances it works well.

What makes hepcidin relevant to donor supplementation is a less intuitive property: it also responds acutely to oral iron ingestion. A single dose of 60 mg or more of elemental iron — roughly what you find in a standard over-the-counter supplement — triggers a hepcidin spike that sets in within hours and persists for approximately 24 hours before returning to baseline. While hepcidin is elevated, absorption from any subsequent dose is meaningfully suppressed. The implication for how we advise donors to supplement follows directly from this.

The Problem With 'Take Iron Daily'

The instinct to recommend daily iron supplementation is understandable. More doses, more iron in, faster repletion. It is the same logic that leads to split dosing — take it twice a day to maximize the total amount ingested. Both approaches are intuitive. Both are, at least partially, counterproductive.
A 2015 study by Moretti and colleagues, published in Blood, was among the first to characterize this effect in humans. They showed that a morning iron supplement triggers sufficient hepcidin elevation to reduce absorption from a dose given later the same day — and that the response persists into the following morning. Split dosing compounded the problem: dividing the daily dose produced higher hepcidin and lower fractional absorption per dose, not better total uptake.

The 2017 Stoffel et al. trial in Lancet Haematology tested the logical alternative prospectively. Women randomized to alternate-day supplementation absorbed significantly more iron — both in fractional terms (21.8% vs. 16.3%) and in total — compared to those taking supplements daily. Allowing hepcidin to return to baseline between doses improved the efficiency of each one. Subsequent work confirmed that morning timing matters too: hepcidin follows a circadian pattern and is lower in the morning, making that the optimal window before the post-dose spike closes the door again.

The practical upshot is that a donor who dutifully takes iron every morning may be absorbing less than one who takes the same dose every other morning. The body's own regulatory response is working against the intervention.

What the Data Don't Yet Tell Us

The alternate-day evidence is compelling, but almost none of it was generated in blood donor populations specifically. Most studies enrolled iron-depleted or iron-deficient women — a related but not identical context. Donors vary considerably in baseline iron status, sex, age, donation frequency, and the degree of deficiency at the time of supplementation. Whether the absorption advantage of alternate-day dosing holds consistently across this range is not yet established. The 2024 meta-analysis of daily versus alternate-day iron dosing added a useful wrinkle: baseline inflammation appears to modulate the benefit.
Elevated hepcidin from inflammatory states may blunt the absorption advantage of spacing doses, since the favorable window is already partially closed before the first pill is swallowed. This is not a fringe concern in a donor population that includes people with subclinical inflammatory conditions.

The dose question is also genuinely unresolved. HEIRS used daily supplementation; we now know that was suboptimal from an absorption standpoint. The alternate-day studies suggest that doubling the per-dose amount on an alternate schedule can achieve comparable or greater total iron uptake — but this has not been validated prospectively in donors. We are, in effect, extrapolating from better-designed absorption studies to a population that hasn't been directly studied under the revised paradigm.

And then there is the infection question I raised in the earlier post. Oral iron has been shown to acutely elevate bacterial growth in human serum in iron-sufficient subjects. Whether this translates to iron-deficient donors — in whom the physiologic context is substantially different — remains unknown. No donor supplementation trial to date has tracked infection as an outcome. That gap is worth sitting with.

Where This Leaves Us

The case for addressing donor iron deficiency is solid. The case for doing it thoughtfully — rather than defaulting to daily supplementation because it seems like the obvious approach — is getting stronger. Hepcidin is not a curiosity. It is a central regulator of iron homeostasis, and it does not stop working just because we want our donors to replete faster. Any supplementation strategy that ignores it is, at minimum, less efficient than it could be, and possibly counterproductive at the margins.

The AABB recommendations provide a reasonable framework. What they do not yet specify — with good evidence behind them — is the optimal schedule. Alternate-day morning dosing is the best current answer from the absorption literature.
Whether that translates directly to the donor context, and at what dose, is work that still needs doing. In the meantime, it seems worth updating the footnote.
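To make the efficiency gap concrete, here is a back-of-the-envelope sketch using the fractional-absorption figures quoted above. The constant per-dose fraction and the 60 mg dose are illustrative simplifications, not a dosing recommendation:

```python
def total_absorbed_mg(dose_mg, fractional_absorption, n_doses):
    """Total elemental iron absorbed over a course, assuming a constant
    per-dose fractional absorption. (A simplification: real absorption
    varies with dose size, iron status, and the hepcidin response.)"""
    return dose_mg * fractional_absorption * n_doses

# Fractional absorption from Stoffel et al. (2017): 16.3% with daily
# dosing, 21.8% with alternate-day dosing. Same 14 doses either way;
# the alternate-day course simply takes 28 days instead of 14.
daily_mg     = total_absorbed_mg(60, 0.163, 14)   # ~137 mg absorbed
alternate_mg = total_absorbed_mg(60, 0.218, 14)   # ~183 mg absorbed
```

Same pills, same pill count, roughly a third more iron absorbed — purely because hepcidin was allowed to reset between doses.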

  • Flying Blind: TPE for Acute Kernicterus in Crigler-Najjar Syndrome

Introduction

One of the most humbling experiences in medicine is when a consult comes in and you realize the textbook has nothing for you. I had one of those recently — a 21-year-old with Crigler-Najjar syndrome type 1 and chronic kernicterus, averbal at baseline, who presented to an outside hospital with an infection and altered mental status. Her at-home bili lights were unavailable, and her bilirubin climbed from a baseline of around 24 to 32 mg/dL. She was transferred for ICU-level care and started on continuous phototherapy, which brought her bilirubin down from 32 to 29 — but her mental status didn’t budge. The concern was acute-on-chronic kernicterus, and now she was being transferred to us for therapeutic plasma exchange. Lord almighty, did I have a hard time coming up with a game plan.

Crigler-Najjar Syndrome: A Primer

For the uninitiated, Crigler-Najjar syndrome type 1 is a rare genetic disorder in which the enzyme responsible for conjugating bilirubin in the liver — UGT1A1 — is absent or nonfunctional. Without conjugation, unconjugated bilirubin accumulates in the blood. Unlike the common, transient jaundice seen in newborns, this is a lifelong condition. The mainstay of treatment is phototherapy, often for 10 to 16 hours daily, which isomerizes bilirubin into a water-soluble form that can be excreted without conjugation. The only definitive cure is liver transplantation, though gene therapy trials are underway. When bilirubin rises above a patient’s baseline — due to infection, fasting, or loss of access to phototherapy — the risk of acute bilirubin encephalopathy, or kernicterus, becomes very real.

What the Literature Says (and Doesn’t Say)

So, does therapeutic plasma exchange (TPE) have a role in acute kernicterus for Crigler-Najjar patients? I went to the literature to find out. What I found was… underwhelming. TPE is not listed as a primary indication in the ASFA guidelines for Crigler-Najjar syndrome.
The evidence that does exist consists of scattered case reports and case series, and in every single one, plasmapheresis is treated as an afterthought — mentioned almost in passing as something that was done during a crisis, without rigorous evaluation of its contribution to the outcome. A 10-year-old with CN1 who developed kernicterus during streptococcal pharyngitis was treated with plasmapheresis, intensive phototherapy, and antibiotics, and recovered without neurologic sequelae. A 23-year-old man with CN1 who developed acute hepatitis from infectious mononucleosis received plasmapheresis to prevent neurological decline. A 2-month-old with a bilirubin of 30 mg/dL and signs of encephalopathy underwent plasmapheresis and urgent liver transplantation. Two 17-year-old boys with bilirubins in the 30s received intermittent plasmapheresis over a prolonged hospitalization. In none of these reports is there a standardized protocol. In none of them is TPE the focus of the study. It’s always a side note.

Borrowing from the Acute Liver Failure Literature

That left me with some very practical questions and very few answers. What exchange volume should I use? What replacement fluid? How often? The Crigler-Najjar literature doesn’t say. So I looked to the closest analogy I could find: the acute liver failure literature.

In acute liver failure (ALF), high-volume plasma exchange (HVPE) has become a first-line therapy, based on a landmark 2016 randomized trial by Larsen and colleagues showing improved survival. HVPE in that context is defined as 8 to 12 liters of exchange, or about 15% of ideal body weight, which works out to roughly 2.5 to 3 plasma volumes. The replacement fluid is fresh frozen plasma, because ALF patients have severe coagulopathy and need factor replacement. A subsequent 2022 trial showed that even standard-volume plasma exchange — 1.5 to 2 plasma volumes — was effective and potentially safer with respect to cerebral edema.
But here’s the critical difference: in ALF, the liver can potentially recover. In Crigler-Najjar, the enzyme deficiency is permanent. Bilirubin production continues at roughly 4 to 5 mg/dL per day, and studies have shown that bilirubin rebounds within 24 hours after plasma exchange. TPE in this context is a temporizing measure at best — buying time while you maximize phototherapy and, if indicated, arrange for transplant evaluation.

My Approach

I also had to consider the replacement fluid question carefully. The ALF literature uses FFP because those patients need clotting factors. My patient didn’t have liver synthetic dysfunction — her liver makes everything except functional UGT1A1. What she needed was bilirubin removal, and albumin is the primary carrier of unconjugated bilirubin in the blood. On the other hand, some FFP in the replacement fluid provides additional albumin and maintains oncotic pressure. I ultimately decided on a one-time TPE with a 50/50 mix of albumin and plasma — a pragmatic decision born more from first principles than from evidence, because the evidence simply doesn’t exist for this specific scenario.

I also recommended maximizing phototherapy — exposing as much skin surface area as possible and using as many bili light devices as they could get their hands on. Phototherapy remains the workhorse of bilirubin management in CN1, and TPE without concurrent aggressive phototherapy is unlikely to make a meaningful dent.

When Evidence Runs Out

The broader point here is one that I think resonates with anyone who practices in a niche or rare-disease space: sometimes the literature leaves you on your own. You can search every database, pull every case report, and still end up making decisions based on pathophysiology, first principles, and clinical judgment rather than evidence-based protocols. That’s not a comfortable place to be, but it’s an honest one.
A Call for Better Evidence

For Crigler-Najjar patients in acute crisis, I think there’s a real need for better evidence on the role of TPE. What volume? What fluid? What schedule? Does it actually change outcomes, or does it just change numbers on a screen? These are questions that case reports can’t answer. Given the rarity of the condition, a multi-center registry or collaborative case series with standardized TPE protocols would be a reasonable starting point.

In the meantime, if you get this consult, know that you’re not going to find a protocol waiting for you. You’re going to have to reason through it. And if you come up with something better than what I did, I’d love to hear about it.
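If you end up reasoning through the volume question yourself, the arithmetic behind those plasma-volume multiples is simple to sketch. This assumes the common 70 mL/kg total-blood-volume approximation and a hypothetical patient; apheresis instruments use more precise estimates (e.g., Nadler's equations):

```python
def estimated_plasma_volume_ml(weight_kg, hct, blood_ml_per_kg=70):
    """Plasma volume ~ total blood volume x (1 - Hct), with total blood
    volume approximated as 70 mL/kg. A rough planning estimate only."""
    return weight_kg * blood_ml_per_kg * (1 - hct)

# Hypothetical 60 kg patient with a hematocrit of 36%:
pv_ml = estimated_plasma_volume_ml(60, 0.36)     # about 2,700 mL

one_pv_exchange = 1.0 * pv_ml                    # a typical single-PV TPE
alf_standard    = (1.5 * pv_ml, 2.0 * pv_ml)     # "standard-volume" range in ALF
alf_hvpe        = (2.5 * pv_ml, 3.0 * pv_ml)     # HVPE range from the ALF trials
```

For this hypothetical patient, the HVPE range works out to roughly 6.7 to 8.1 liters; the 8 to 12 liter figure quoted in the ALF literature reflects larger adults.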

  • Does TAMOF Exist? Revisiting a Diagnosis I Thought I Understood

Earlier this year, I wrote a piece about TAMOF — thrombocytopenia-associated multiple organ failure — and the case for recognizing it as a TTP-like process driven by secondary ADAMTS13 deficiency. I described the lab pattern. I walked through the differential. I made what I believed was a compelling argument that TAMOF is underrecognized, and that plasma exchange is a rational intervention once the diagnosis is made.

I stand by that piece. The pathophysiology is real. The lab pattern is real. The clinical scenarios I described are ones that pathologists and intensivists encounter regularly. But since writing it, I’ve spent more time with the literature surrounding TAMOF, and I’ve come to appreciate something I didn’t fully reckon with the first time around: the question isn’t just whether TAMOF is real. The question is whether TAMOF is useful — and that turns out to be a much harder thing to answer.

The Case I Made

In my earlier article, I described TAMOF as occupying an uncomfortable space between DIC, TTP, and sepsis-associated coagulopathy. The central idea was that systemic inflammation can drive a relative deficiency of ADAMTS13, leading to accumulation of ultra-large von Willebrand factor multimers and platelet-rich microthrombi in the microvasculature. The downstream effect looks like TTP — organ ischemia, thrombocytopenia, elevated LDH, sometimes schistocytes — even though the trigger is sepsis or inflammation rather than autoimmunity.

I emphasized that the lab pattern tells the story: falling platelets, rising LDH, preserved coagulation parameters, and organ dysfunction out of proportion to hemodynamics. I argued that recognizing this pattern opens the door to therapeutic plasma exchange, and that missing it leaves patients undertreated. None of that is wrong, exactly. But it is incomplete.

The Questions I Didn’t Ask

The concept of TAMOF was primarily developed by Nguyen and Carcillo, with foundational work published in Critical Care in 2006.
From the beginning, TAMOF was described not as a single disease but as a clinical phenotype — an umbrella encompassing TTP, DIC, and secondary thrombotic microangiopathy in critically ill patients. The unifying feature was new-onset thrombocytopenia coinciding with multiple organ failure, and the proposed mechanism was microvascular thrombosis.

This is where the first tension appears. If TAMOF includes DIC, TTP, and secondary TMA under one label, what does the label add? Each of those entities already has its own diagnostic criteria, its own pathophysiology, and — critically — its own treatment approach. DIC is a consumptive coagulopathy driven by tissue factor. TTP is an autoantibody-mediated deficiency of ADAMTS13. Secondary TMA is a broader category of inflammation-driven microangiopathy. These are not the same process. Grouping them under one name risks implying a mechanistic unity that does not exist.

In my first article, I focused on the subset of TAMOF that behaves like TTP — the secondary TMA piece, where inflammation drives ADAMTS13 deficiency and platelet-vWF-mediated thrombosis predominates. That is a real phenomenon. But by calling it TAMOF rather than secondary TMA, I may have inadvertently adopted a framework that obscures more than it clarifies.

The Evidence Problem

The therapeutic implication of recognizing TAMOF is plasma exchange. This was central to my earlier piece: once you see the microangiopathy, the rationale for plasma exchange follows logically. Remove the ultra-large vWF multimers. Replenish ADAMTS13. Reduce inflammatory mediators. The biological plausibility is sound.

The evidence base, however, is thin. The landmark pediatric trial randomized just ten children — five to plasma exchange, five to standard therapy. The results were encouraging: plasma exchange restored ADAMTS13 activity and was associated with organ failure resolution. But a trial of ten patients, however well-designed, cannot establish standard of care.
Subsequent studies have been retrospective, observational, or limited to small cohorts. The Turkish TAMOF Network described outcomes in 42 children but could not even measure ADAMTS13 levels due to unavailability of the assay. A prospective multicenter experience published more recently found lower 28-day mortality in children treated with plasma exchange, but the authors themselves concluded that a randomized clinical trial is necessary to establish a causal relationship. The American Society for Apheresis gives plasma exchange in sepsis with multiple organ failure a Category III recommendation — meaning the optimum role is not established and decision-making should be individualized. This is not an endorsement. It is an acknowledgment that we don’t know enough.

The ADAMTS13 Problem

In my earlier piece, I described ADAMTS13 as spanning a wide range in TAMOF and cautioned against rigid thresholds. I still think that’s right. But I underappreciated a more fundamental issue: we do not yet know whether reduced ADAMTS13 in sepsis is the cause of organ dysfunction or simply a marker of disease severity.

This distinction matters enormously. If reduced ADAMTS13 is pathogenic — if it is actively driving microthrombi formation and organ ischemia — then replenishing it through plasma exchange is a targeted intervention. But if ADAMTS13 is reduced because the patient is severely ill, because inflammation broadly suppresses hepatic synthesis and accelerates consumption of many proteins, then treating the ADAMTS13 level may be treating a surrogate rather than the disease itself.

Moreover, ADAMTS13 activity in TAMOF is typically reduced but not severely deficient. In classic TTP, levels are usually below 10%. In sepsis-associated secondary TMA, levels are more often in the 20–60% range. This is an important gray zone. Plenty of critically ill patients with sepsis have mildly reduced ADAMTS13, and most of them do not have a clinically meaningful microangiopathic process.
The specificity of this biomarker, in this context, is genuinely uncertain.

The Diagnostic Boundary Problem

TAMOF is diagnosed by a triad: new-onset thrombocytopenia below 100,000/µL, at least two failing organs, and elevated LDH. The problem is that this triad describes an enormous proportion of critically ill septic patients. Thrombocytopenia in the ICU is common — present in up to 40–50% of patients, depending on the threshold used. LDH elevation is nearly ubiquitous in critical illness. And organ failure is, almost by definition, why these patients are in the ICU in the first place. If the diagnostic criteria capture too many patients, the label loses its power to identify those who would specifically benefit from targeted intervention. A diagnosis that applies to half the ICU is not a diagnosis. It is a description.

What I Think Now

I want to be careful here, because I don’t think the answer is that TAMOF is meaningless or that my earlier article was misguided. The pathophysiology I described — inflammation-driven ADAMTS13 deficiency leading to platelet-vWF-mediated microvascular thrombosis — is supported by autopsy data, by biomarker studies, and by the clinical observation that some septic patients develop a microangiopathic picture that does not fit neatly into DIC. That phenomenon is real, and it deserves a name.

But I think the name might be doing some work that the evidence hasn’t earned yet. TAMOF as an umbrella term bundles together mechanistically distinct processes and implies they share a common therapeutic target. TAMOF as a diagnostic entity relies on criteria so broad that they risk capturing patients who don’t have a true microangiopathy at all. And TAMOF as a justification for plasma exchange rests on studies that, while promising, remain small, largely retrospective, and without a definitive randomized trial.

What I wrote before was an argument for recognition — for seeing the pattern and acting on it.
What I’d add now is an argument for precision. The lab pattern I described is still the right place to start. Converging signals of microangiopathy in a septic patient should prompt the question: is there a thrombotic microangiopathic process driving this patient’s organ failure? But the answer to that question should lead to a specific diagnosis — secondary TMA, DIC, or something else — not to a catch-all label that may prematurely close the differential.

The Lab’s Role, Revisited

I ended my first article with the line: “TAMOF is not rare because it is uncommon. It is rare because we don’t look for it.” I still believe that’s true — but I’d frame it differently now. What’s underrecognized isn’t necessarily TAMOF as a discrete entity. What’s underrecognized is the broader phenomenon of secondary thrombotic microangiopathy in the critically ill, and the role that laboratory medicine plays in distinguishing it from DIC, from “just sepsis,” and from true TTP.

That distinction requires more than a label. It requires the kind of contextual interpretation that has always been the core competency of the pathologist: not just reporting numbers, but assembling them into a story that changes management. The controversy around TAMOF is not really about whether the biology is real. It is about whether we have the right framework to describe it, the right criteria to diagnose it, and the right evidence to treat it. On all three counts, the honest answer is: not yet. But “not yet” is not the same as “no.” It means we have more work to do. And for those of us in the lab, that work starts with being willing to question our own frameworks — even the ones we just finished building.
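To see how little the triad actually demands, it helps to write it down as code. A deliberately literal sketch (parameter names are mine; the thresholds are the ones described in this post):

```python
def meets_tamof_triad(platelets_per_ul, failing_organs, ldh_elevated):
    """The TAMOF triad as described above: new-onset thrombocytopenia
    below 100,000/uL, at least two failing organs, and elevated LDH.
    Deliberately literal, to show how nonspecific the criteria are."""
    return platelets_per_ul < 100_000 and failing_organs >= 2 and ldh_elevated

# A generic septic ICU patient on vasopressors, with mild thrombocytopenia
# and an elevated LDH, qualifies with no microangiopathy demonstrated:
meets_tamof_triad(85_000, 2, True)   # True
```

Three lines of logic that capture perhaps half the septic ICU census: that is the diagnostic boundary problem in miniature.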

  • Transfusion Medicine: The Clinical Engine Behind the Blood Bank

When clinicians say, “I called the blood bank,” they usually mean one of two things: they need blood products, or something about a transfusion doesn’t feel right. Those are not the same situation — and they are not handled the same way. At most institutions, the blood bank laboratory and the Transfusion Medicine service operate as an integrated system. They overlap. They collaborate constantly. But they serve different functions. That difference matters.

The Blood Bank: Technical and Operational Safety

The blood bank laboratory is responsible for:

• ABO/Rh typing and antibody screens
• Crossmatching and compatibility testing
• Investigating serologic incompatibilities
• Preparing and issuing blood components
• Maintaining regulatory and quality standards

It is the operational engine of transfusion safety. It ensures the right product reaches the right patient efficiently and in compliance with strict regulatory frameworks. But laboratory testing alone does not answer every clinical question.

Transfusion Medicine: Clinical Judgment in Real Time

The Transfusion Medicine service provides physician-level consultation when transfusion decisions become complex or when adverse events occur. We are consulted for:

• Suspected transfusion reactions
• Hemolysis or unexpected serologic findings
• Complex alloimmunization cases
• Risk–benefit discussions in high-risk scenarios
• Questions about product selection beyond routine ordering

When a patient develops hypotension, hypoxia, fever, or laboratory evidence of hemolysis during or after a transfusion, the question is no longer simply, “What do the labs show?” It becomes:

• Is this TRALI, TACO, hemolysis, sepsis, or something unrelated?
• Should additional products be given?
• Does this event require reporting or product quarantine?

The laboratory can detect hemolysis. It cannot diagnose TRALI. These are clinical determinations that require integration of history, timing, exam findings, imaging, and laboratory data.
When to Involve Transfusion Medicine A practical rule of thumb: If you are unsure whether what you are seeing “counts” as a transfusion reaction — involve us. Early consultation allows: Real-time clinical assessment Guidance on stopping versus continuing transfusion Appropriate laboratory evaluation Accurate documentation in the EMR Prevention of downstream complications Waiting until the picture is unmistakable often means the patient has already deteriorated further than necessary. The threshold should be low — particularly for severe allergic reactions, suspected hemolysis, respiratory compromise, or unexplained instability during transfusion. Why Role Clarity Matters Conflating the laboratory function with clinical consultation can create blind spots. If a reaction is reported only as a technical issue, important clinical context may be missed. Without coordinated physician involvement, transfusion reactions are more likely to be under-recognized, misclassified, or inconsistently documented. That affects more than a single patient encounter. It impacts hemovigilance data, quality reporting, and our ability to learn from adverse events. Transfusion is one of the most common procedures performed in hospitalized patients. It is also one of the few therapies that requires laboratory and clinical teams to function as a tightly integrated unit in real time. Clear roles within that integration improve patient safety. A Collaborative Model This is not about separation. It is about alignment. The blood bank laboratory ensures technical and regulatory safety. The Transfusion Medicine physician provides clinical oversight and interpretation. They are complementary functions within the same safety system. If you are ordering routine blood for a stable patient, the laboratory will manage the process seamlessly. 
If a transfusion becomes clinically complicated — or something simply does not make sense — physician-level Transfusion Medicine consultation should be part of the response. Transfusion Medicine is not just a laboratory process. It is a clinical service embedded within it. And when in doubt, call.

  • Practicing at the Edge of ABO: Navigating Rare A Subgroups

    There are moments in transfusion medicine when the most uncomfortable part of a case isn’t the serology — it’s the realization that the literature can’t quite tell you what to do. Recently, on service, I encountered a patient with a rare A subgroup and a cold-reacting anti-A1. Genotyping suggested either an Aw allele or an Ael allele. The immediate question was practical and deceptively simple: Is it safe to transfuse group A red cells, or should we restrict the patient to group O?

    What followed was a familiar exercise for anyone who practices in the margins of evidence: reading case reports, revisiting mechanism, and trying to decide how much uncertainty is acceptable when the downside is fatal hemolysis. Along the way, one thing became clear: not all weak A phenotypes are biologically — or clinically — interchangeable. In particular, A3 is not the same as Aw, and neither is the same as Ael. Yet they are often discussed together, sometimes implicitly treated as a single category. That shortcut matters.

    The problem with “weak A” as a single bucket

    In everyday blood bank practice, weak A phenotypes are often grouped together for operational reasons: they may present as ABO discrepancies, require additional testing, or trigger conservative transfusion strategies. But biologically, these phenotypes arise through very different mechanisms, and those differences shape how we should think about transfusion risk. Here’s a simplified comparison.

    A3 vs Aw vs Ael — why the distinction matters

    • Typical serologic pattern — A3: mixed-field agglutination with anti-A; Aw: weak or very weak reactivity with anti-A, variable; Ael: no agglutination with anti-A
    • Detectable without elution? — A3: yes; Aw: often yes (weak); Ael: no
    • Detectable by adsorption–elution — A3: usually not needed; Aw: sometimes; Ael: required
    • Underlying mechanism — A3: reduced or mosaic expression, often promoter/splicing effects; Aw: hypomorphic A transferase with allele-in-trans–dependent expression; Ael: near-null expression, often due to early truncation of the A transferase
    • Degree of A antigen exposure — A3: present on a subset of RBCs; Aw: variable, can be extremely low; Ael: trace only
    • Evidence base for transfusion safety — A3: relatively robust (dominates the “weak A” literature); Aw: sparse, case-based; Ael: extremely limited
    • Theoretical risk of allo-anti-A — A3: low; Aw: uncertain; Ael: plausible (no incidence data)

    Why A3 is different

    A3 is classically defined by mixed-field agglutination with anti-A: some red cells express A antigen clearly, others do not. Importantly, A antigen is present and visible without elution. From an immunologic standpoint, this matters. The immune system has likely been exposed to A antigen throughout life. Unsurprisingly, much of the reassuring transfusion experience for “weak A” phenotypes comes from cohorts dominated by A3 and similar variants. When people say, “We transfuse A all the time in weak A and nothing happens,” they are often — implicitly — talking about A3.

    Ael: a fundamentally different phenotype

    Ael occupies the opposite end of the spectrum. These phenotypes typically arise from premature termination codons early in the ABO A transferase gene. Routine serology shows no A antigen at all; detection requires adsorption–elution, and even then only trace amounts are found. In practical terms, most circulating red cells are immunologically indistinguishable from group O. Does this mean patients with Ael will form allo-anti-A? No one knows. The literature does not report an incidence. But mechanistically, the conditions that support immune tolerance to A antigen are clearly not the same as in A1, A2, or A3 phenotypes. This is where the phrase “absence of evidence is not evidence of absence” stops being academic.
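    For readers who think in flowcharts, the serologic distinctions above can be sketched as a toy classifier. This is purely illustrative — the function name and inputs are my own invention, and real subgroup work-up rests on genotyping and reference-lab serology, not a lookup rule.

```python
# Toy sketch of the serologic distinctions among weak A phenotypes.
# Illustrative only — not clinical software; real classification
# requires genotyping and reference-laboratory serology.

def classify_weak_a(mixed_field: bool,
                    agglutinates_with_anti_a: bool,
                    positive_by_adsorption_elution: bool) -> str:
    """Return the weak-A subgroup suggested by three serologic findings."""
    if mixed_field:
        # A3: some red cells clearly express A antigen, others do not
        return "A3"
    if agglutinates_with_anti_a:
        # Aw: weak but directly detectable reactivity; actual expression
        # is context-dependent (allele in trans)
        return "Aw"
    if positive_by_adsorption_elution:
        # Ael: no direct agglutination; only trace antigen by elution
        return "Ael"
    # No A antigen detectable at all — serologically resembles group O
    return "group O pattern"
```

    The point of the sketch is the ordering: mixed-field reactivity identifies A3 before anything else, direct (even weak) agglutination points toward Aw, and only the adsorption–elution step separates Ael from a true group O pattern.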
    Aw: the uncomfortable middle

    Aw phenotypes are what make this topic genuinely hard. Unlike A3, Aw is not a mixed-field phenotype by default. And unlike Ael, it is not uniformly silent. Instead, expression depends heavily on the allele in trans. One of the most striking demonstrations of this comes from maternal–child discordance cases, where the same Aw allele produced:

    • essentially no detectable A antigen when paired with an O allele, and
    • robust A expression when paired with a B allele.

    In other words, Aw can look immunologically like Ael in one context and A2 or stronger in another. When you encounter Aw in the blood bank, you are not just dealing with “weak A.” You are dealing with context-dependent A expression, and that uncertainty follows you into transfusion decisions.

    What about hemolysis and antibodies?

    Most anti-A1 antibodies are cold, IgM, and clinically insignificant. That’s true — until it isn’t. Case reports exist of hemolytic transfusion reactions involving anti-A1 when thermal amplitude is broad or when additional risk factors are present. These cases are rare, but they loom large precisely because the denominator is so poorly defined. What we don’t have are:

    • incidence data for allo-anti-A formation in Ael or Aw individuals,
    • outcome studies stratified by molecular subgroup, or
    • prospective evidence that transfusing group A is uniformly safe across all weak A genotypes.

    So when clinicians default to group O in these cases, it’s not ignorance. It’s an acknowledgment of uncertainty.

    Conservatism isn’t a failure of evidence — it’s stewardship

    In my case, we chose to transfuse group O red cells while we waited for expert input. That decision wasn’t driven by panic or dogma. It was driven by a simple question: If I’m wrong, what happens to the patient? In transfusion medicine, the cost of being wrong is asymmetric. Hemolysis is rare — until it isn’t — and when it happens, it’s unforgettable.

    Until we have better data, it is reasonable to treat A3, Aw, and Ael differently, even if our SOPs and textbooks sometimes collapse them into the same category.

    Closing thought

    Somewhere between genotype, phenotype, and patient safety is a space where we practice medicine without a net. That’s not a failure of science. That’s where judgment lives. And sometimes, judgment looks like a unit of group O.

    Please see this related post for an update on this case: https://www.bloodbytesbeyond.com/post/anti-a1-in-practice-not-in-theory

  • Anti-A1 in Practice, Not in Theory

    After I published a recent post about a patient with a rare A subgroup and a cold-reacting anti-A1, I did what transfusion medicine quietly trains us all to do when the literature runs thin: I picked up the phone. The case itself was straightforward to describe and uncomfortable to decide. Genotyping suggested either an Aw allele or an Ael allele. Serology favored Aw, with faint agglutination detectable without elution. The patient also had a cold-reacting anti-A1. The question was simple and not at all academic: should we transfuse group A red cells, or restrict the patient to group O? In the absence of clear guidance, we chose conservatively while seeking expert input. That decision felt reasonable — but incomplete. So I reached out to colleagues at a reference laboratory to ask how they actually think about cases like this, not in theory, but in practice.

    What the Literature Teaches — and Where It Stops

    If you search anti-A1 and hemolysis, you will find what all of us find: case reports. Some are dramatic. A few involve hemolytic transfusion reactions. Many emphasize the same features — broad thermal amplitude, high titers, or unusual clinical contexts such as malignancy. What you will not find are incidence data. You won’t find outcome studies stratified by molecular subgroup. You won’t find a denominator large enough to tell you how often cold-reacting anti-A1 actually causes harm in routine transfusion practice. Case reports are essential — they define what can happen. But they are also blunt instruments. They warn us without telling us how often to expect trouble, or how to weigh that risk against competing obligations like inventory stewardship. That gap is where reference labs live.

    What the Reference Lab Actually Looks At

    One of the most useful things about consulting a reference laboratory is learning which variables matter most when time and data are limited. In this case, three themes came up repeatedly.

    1. Thermal amplitude and titer matter more than genotype

    In practice, the single most important question is not whether the patient has Aw versus Ael, but whether the anti-A1 reacts at 30 °C or higher. Cold-reacting anti-A1 antibodies that react only below 30 °C are overwhelmingly benign in real-world experience. Hemolysis in this setting is extraordinarily rare, particularly in otherwise stable patients. When reactions do occur, they tend to involve antibodies with broader thermal amplitude or very high titers that permit binding at warmer temperatures. This is not because genotype is irrelevant, but because thermal amplitude and titer are the only tools we currently have that correlate, however imperfectly, with clinical significance.

    2. Malignancy-associated cases don’t generalize well

    Several of the most concerning reports of anti-A1–mediated hemolysis come from patients with malignancy, particularly myelodysplastic syndromes. These cases behave differently for a reason. In malignancy, the ABO glycosyltransferase genes may be epigenetically silenced or otherwise disrupted. Antigen expression can change or disappear entirely, and patients may transiently form potent antibodies against antigens they once expressed. These antibodies can be atypical, high-titer, and clinically significant — and may abate once the underlying disease is treated or after transplant. Those cases are real, but they are not representative of the average patient with a cold-reacting anti-A1. Treating them as such inflates perceived risk.

    3. Group A is transfused more often than people realize

    Perhaps the most grounding insight was this: reference labs see these cases frequently, and group A red cells are routinely transfused to patients with cold-reacting anti-A1 without incident. That comfort does not come from theory. It comes from volume — from seeing the same scenario play out safely again and again. When reactions are limited to temperatures below 30 °C and the patient is neither undergoing hypothermia nor critically ill, the expectation is that transfusion will be tolerated. Group O remains an option — but not a default.

    Where Conservatism Still Makes Sense

    None of this means that caution is misguided. In fact, reference labs are often more conservative in specific situations:

    • Patients who are critically ill or have minimal physiologic reserve
    • Antibodies with reactivity approaching 30 °C
    • Very high titers, even if technically “cold”
    • Planned hypothermia or cardiac surgery

    In those contexts, avoiding even low-grade hemolysis may matter more than inventory conservation, and the threshold for using group O appropriately drops. The key distinction is that conservatism becomes a choice, not an automatic rule.

    Judgment Is a Team Sport

    Case reports teach us what can go wrong. Reference labs teach us how often it does — and under what conditions. Clinicians have to integrate both, along with patient context and resource stewardship, to make decisions that are defensible even when the evidence is incomplete. That’s not a failure of science. It’s the practice of medicine. Sometimes, after all that, the answer is still group O. But now, at least, I know why — and when it doesn’t have to be.
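    For what it’s worth, the heuristics above can be condensed into a small decision sketch. Everything here — the function name, the inputs, the 30 °C cutoff as a hard threshold — is a simplification of the reasoning in this post, not a protocol or clinical software.

```python
# Toy sketch of the anti-A1 product-selection heuristics discussed above.
# Illustrative only — thresholds and factors are simplifications of the
# post's reasoning, not a validated clinical rule.

def suggest_rbc_group(thermal_amplitude_c: float,
                      high_titer: bool,
                      critically_ill: bool,
                      planned_hypothermia: bool) -> str:
    """Suggest group 'A' or 'O' red cells for a patient with anti-A1."""
    # Reactivity at or above 30 °C is the principal warning sign
    if thermal_amplitude_c >= 30.0:
        return "O"
    # Even a technically "cold" antibody warrants caution when these
    # risk factors lower the threshold for conservatism
    if high_titer or critically_ill or planned_hypothermia:
        return "O"
    # Cold-only, low-titer anti-A1 in a stable, normothermic patient:
    # group A is routinely tolerated in reference-lab experience
    return "A"
```

    The sketch makes the post’s central point concrete: group O is the output of specific risk factors, not a default, and thermal amplitude does more work in that decision than genotype does.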

©2023 by Caitlin Raymond. Powered and secured by Wix
