- The Bloodless Surgery Consult for the Overworked Fellow
The pager goes off. The message reads: “Bloodless surgery consult — patient refusing blood products.” If you are a transfusion medicine fellow and this is your first one, you probably spend a moment staring at your pager wondering exactly what you are supposed to do with that information.

You show up, introduce yourself, and are handed a form. It is several pages long. At the top, in clear block letters: WHOLE BLOOD COMPONENTS. Below that, a list of products — red blood cells, platelets, plasma — each followed by two checkboxes. Accept. Reject. It seems simple enough.

Then you keep reading. By the time you reach plasma protein fractions, recombinant clotting factors, and thrombopoietin mimetics, you realize this is not a simple form. It is a document that asks a person to think, in advance and in detail, about exactly how much of their own blood — and everyone else’s — they are willing to accept back into their body under duress. The checkboxes are tidy. The clinical reality underneath them is not.

Why This Consult Exists

The most common reason you will encounter a bloodless surgery consult is a patient who is a Jehovah’s Witness. Members of this faith generally decline transfusion of whole blood and its four primary components — red blood cells, white blood cells, platelets, and plasma — based on a religious interpretation of scriptural passages prohibiting the “taking in” of blood. But the boundaries of that refusal are personal, not prescribed. Individual Jehovah’s Witnesses vary significantly in what they will and will not accept, which is precisely why the form exists and why the consult matters.

Not every patient requesting bloodless or transfusion-free care is a Jehovah’s Witness. Some patients have religious objections that are less formalized. Others have philosophical objections to allogeneic blood, concerns about transfusion-transmitted infections, or simply a strong preference to avoid a product they view as high-risk.
The label “bloodless surgery” is something of a misnomer — the goal is not zero blood, but zero allogeneic blood. Whether that is achievable depends on the clinical situation, the alternatives available, and what the patient has actually agreed to. The form is the tool that documents that agreement. The conversation is the actual work.

The Form, Decoded

Walk through the major product categories and the clinical stakes become clearer. Whole blood components — red cells, platelets, plasma, white cells — are the straightforward part. Most patients who have thought carefully about this have already made their decision about these products before you walk in the room. These are the checkboxes they came prepared for.

Plasma fractions are where things get philosophically interesting. Albumin is derived from pooled human plasma, fractionated, and heat-treated. Cryoprecipitate is thawed plasma precipitate, rich in fibrinogen and factor VIII. Fresh frozen plasma is essentially unfractionated. A patient might accept albumin but decline FFP, not because they are being inconsistent, but because fractionation changes the product enough to matter to them, even if it does not particularly change the clinical calculus for you. This is not a contradiction you are there to resolve. It is a distinction you are there to understand and document.

Autologous techniques — cell saver, acute normovolemic hemodilution, apheresis, dialysis — occupy a fascinating middle ground. Many patients who decline allogeneic blood are entirely comfortable with their own blood being collected, processed through a machine, and returned to them, as long as the circuit remains closed and continuous. The blood never “leaves” them in any meaningful sense. Practically, this means cell saver is often on the table even when packed red blood cells are not, and that distinction matters enormously in a surgical or hemorrhage scenario.
Erythropoiesis-stimulating agents, colony-stimulating factors, and thrombopoietin mimetics round out the list. These are pharmacologic scaffolds — tools to build up what the patient has before a major procedure, or to support recovery after one. Some formulations contain albumin as a stabilizer. For some patients, that matters. For others, it does not. You need to know which.

The Grey Area Nobody Warns You About: Plasma-Derived Clotting Factors

Here is something the form does not make obvious, and that fellows often do not realize until they are standing at the bedside: several of the products in the “clotting factors” section are derived from pooled human plasma. Kcentra — the four-factor prothrombin complex concentrate most of us reach for in warfarin reversal or urgent coagulopathy — is plasma-derived. So is Riastap, the fibrinogen concentrate. Humate-P, which contains both factor VIII and von Willebrand factor, is plasma-derived. These are not recombinant products engineered in a lab. They are fractionated from pooled donor plasma, processed and pathogen-reduced, but fundamentally the same source material as fresh frozen plasma. The processing is different. The origin is not.

A brief detour into hemophilia is useful here, because the recombinant versus plasma-derived distinction has a history that most fellows outside of hematology do not fully appreciate. For most of the twentieth century, factor VIII and factor IX concentrates used to treat hemophilia A and B were plasma-derived — pooled from thousands of donors, with all the viral risk that entailed. The consequences in the 1980s were devastating: contaminated plasma-derived concentrates transmitted HIV and hepatitis C to a substantial portion of the hemophilia population before adequate screening and viral inactivation methods existed. That disaster drove the development of recombinant factor products, which began reaching the market in the early 1990s.
Today, recombinant factor VIII and factor IX concentrates — including extended half-life versions — are the standard of care for hemophilia in high-income settings. Plasma-derived equivalents still exist and are still used, particularly where recombinant products are less accessible, and in conditions like von Willebrand disease where a plasma-derived product containing both factor VIII and vWF is sometimes preferred. But the field has largely moved on.

The relevance for bloodless surgery is this: the products you are most likely to reach for in an acute coagulopathy — Kcentra, Riastap — do not yet have widely available recombinant equivalents. A recombinant fibrinogen concentrate exists in development but is not in routine clinical use. So unlike hemophilia care, which has largely transitioned away from plasma-derived products, the hemostatic toolkit for your typical bleeding surgical patient is still substantially plasma-derived. That gap matters when your patient has declined plasma.

Where recombinant options do exist, they matter a great deal. Recombinant factor VIIa (NovoSeven) is produced in baby hamster kidney cells — no human plasma involved. Recombinant factor VIII and factor IX are similarly plasma-free. For a patient whose objection extends to all human blood fractions, these products may be acceptable where plasma-derived concentrates are not. The reverse can also be true: some patients are comfortable with highly processed plasma fractions but draw the line at whole plasma or red cells. You cannot predict which way a given patient will land. The form gives you a framework. The conversation gives you the actual answer.

This is one of the more uncomfortable aspects of bloodless surgery medicine: the fellow’s job is not just to document preferences, but to ensure those preferences are genuinely informed.
That means being willing to say, politely and clearly, “I want to make sure you know that this product comes from human plasma — is that still acceptable to you?” Most patients appreciate it. Some are surprised. Occasionally, it changes their answer. All of those outcomes are better than the alternative.

When the Checkboxes Run Out

The form creates legal clarity. It does not always create clinical clarity. Consider a patient who has accepted cell saver but declined cryoprecipitate. Intraoperatively, they develop a coagulopathy. The surgeons look at you. The anesthesiologist looks at you. The patient is not in a position to revisit their checklist. You are not there to override their documented wishes — you are there to help the team understand what options remain, and what their limits are.

In practice, this means knowing your alternatives well enough to deploy them quickly. Can you correct a fibrinogen deficit with a fibrinogen concentrate the patient has accepted? What is the hemostatic ceiling of topical procoagulants like fibrin sealants? Is the surgical team using electrocautery aggressively enough? Is there an interventional radiology option? The transfusion medicine fellow in the bloodless surgery consult is not just a documentarian. You are a consultant in the truest sense — someone whose job is to expand the team’s range of options, not just to manage their expectations.

And then there are the cases where the options run out. Where the patient is bleeding and the only thing that would reliably help is a product they have refused. You learn to sit with that. You learn that informed refusal is not a failure of medicine. You learn that the consult you did beforehand — the one where you made sure the patient understood exactly what they were declining, and why, and what the alternatives were — was the most important one.

What These Consults Teach You

Bloodless surgery consults are a masterclass in what blood products actually do.
Because you cannot default to transfusion, you have to explain — to the patient, to the team, and to yourself — exactly what each product is for, what happens physiologically without it, and what can plausibly substitute. You will leave your first few of these consults knowing your coagulation cascade better than you did going in. That is an underappreciated upside.

You also learn something about the nature of consent itself. Most informed consent in medicine is procedural: sign here, you understand the risks. Bloodless surgery consent is longitudinal. It happens before the procedure, often well before, and it asks the patient to project themselves into scenarios they cannot fully anticipate. It demands that you, as the consultant, be honest about uncertainty — about what the surgery might require, about which alternatives are genuinely equivalent and which are merely adjacent.

Accept or Reject

The form implies a binary. Accept. Reject. Medicine is almost never that clean. The most useful thing I can tell a fellow going into their first bloodless surgery consult is this: the form is not the point. The point is the conversation that produces it — the one where you find out what the patient actually believes, what they actually understand, and what they are actually willing to accept when the stakes become real. The checkboxes are documentation. The consult is medicine. And if you leave that room feeling like you understood it completely, you probably missed something.
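For teams that track these preferences electronically, the product-by-product accept/reject structure of the form maps naturally onto a small data model. A minimal sketch, with all class and field names hypothetical, not drawn from any real consent system:

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    DISCUSS = "discuss"  # undecided at the time of the consult

@dataclass
class ProductPreference:
    product: str    # e.g. "red blood cells", "albumin", "cell saver"
    category: str   # e.g. "whole blood component", "plasma fraction"
    decision: Decision
    note: str = ""  # nuance the checkboxes can't capture

@dataclass
class BloodlessConsultRecord:
    patient_id: str
    preferences: list = field(default_factory=list)

    def add(self, product, category, decision, note=""):
        self.preferences.append(ProductPreference(product, category, decision, note))

    def rejected(self):
        """Products the patient has explicitly declined."""
        return [p.product for p in self.preferences if p.decision is Decision.REJECT]

record = BloodlessConsultRecord("example-001")
record.add("red blood cells", "whole blood component", Decision.REJECT)
record.add("cell saver", "autologous technique", Decision.ACCEPT,
           note="closed, continuous circuit only")
record.add("albumin", "plasma fraction", Decision.ACCEPT)
print(record.rejected())  # → ['red blood cells']
```

The `note` field is the important part: as the post argues, the checkbox alone rarely captures the conditions under which a product is acceptable.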
- A Disease Waiting For Its Assay: The History of MOGAD
For roughly twenty years, MOG antibodies were considered noise. Studies kept finding them — in patients with MS, in patients with other demyelinating diseases, in healthy controls. The conclusion the field drew was reasonable: these antibodies probably aren’t doing much. That conclusion was wrong. The antibodies were real. The assay was broken.

The protein

Myelin oligodendrocyte glycoprotein, MOG, is expressed on the outermost surface of the myelin sheath. Its location matters: it sits on the very outside of oligodendrocytes, fully exposed to the immune system. This makes it a structurally logical target for antibody-mediated attack.

Researchers noticed this in the 1980s, when MOG was identified as a potent inducer of experimental autoimmune encephalomyelitis — EAE — the classic animal model used to study multiple sclerosis. MOG-immunized animals developed demyelinating disease. The inference seemed obvious: MOG must be important in human MS, too. That inference was wrong, or at least overstated. But it launched decades of research into MOG antibodies in human demyelinating disease — research that was almost immediately complicated by the tools available to detect them.

The assay problem

The problem was ELISA. Enzyme-linked immunosorbent assay, ELISA, is a workhorse of antibody detection. It works by coating a solid surface with the antigen of interest — in this case, MOG protein — and then exposing it to patient serum. If antibodies are present, they bind. The trouble is that coating a surface with purified protein requires denaturing it: stripping it out of its native environment, unfolding it, and adhering it flat. What was once a three-dimensional glycoprotein sitting in a lipid bilayer is now a linearized string of amino acids on a plastic plate. For MOG, this matters enormously.
The antibodies that are actually relevant in MOGAD recognize a conformational epitope — the specific three-dimensional shape of MOG’s extracellular domain as it exists in a cell membrane. Denatured MOG doesn’t have that shape. So ELISA-based assays were detecting antibodies against linear epitopes, finding them in patients with MS, patients with other demyelinating diseases, and healthy controls. The field grew appropriately skeptical. MOG antibodies looked like noise.

The fix: cell-based assays

The correction came in 2011 and 2012, with the development of cell-based assays. The approach is straightforward in principle: instead of adhering purified protein to a plate, you transfect cells to express full-length, native MOG on their surface. Patient serum is then incubated with these cells. If MOG-specific IgG is present, it binds to the correctly folded extracellular domain. A fluorescently labeled secondary antibody tags the bound IgG, and flow cytometry — FACS — quantifies the signal. The protein stays where it belongs, embedded in a lipid bilayer, presenting the same conformation the immune system encounters in vivo. The improvement in specificity was dramatic. False positives largely disappeared. A real signal emerged.

The door that opened first

This methodological breakthrough landed in fertile soil, because the field had already been primed to look. In 2004, Lennon and colleagues published a landmark finding: antibodies against aquaporin-4 — AQP4 — were present in a substantial subset of patients with neuromyelitis optica, or NMO. NMO had long been considered a severe variant of MS. The AQP4 discovery proved otherwise. Here was an antibody-mediated demyelinating disease, clinically and serologically distinct from MS, hiding in the seronegative-MS wastebasket. The discovery raised an obvious question: what was driving disease in the patients who were AQP4-seronegative?
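The assay problem has a quantitative face worth making explicit: when the condition being tested for is rare, even a modestly imperfect specificity buries the true positives in false ones, which is exactly how a real antibody comes to look like noise. A sketch with purely illustrative numbers, not figures from any MOG assay study:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical scenario: a disease present in 0.5% of the tested
# population, assayed at 90% sensitivity, comparing a leaky
# ELISA-like specificity against a tight cell-based-assay-like one.
low_spec = ppv(0.90, 0.90, 0.005)    # 90% specificity
high_spec = ppv(0.90, 0.995, 0.005)  # 99.5% specificity
print(round(low_spec, 2), round(high_spec, 2))  # → 0.04 0.47
```

At 90% specificity, roughly 24 of every 25 positive results are false, so the signal from genuinely antibody-positive patients is statistically invisible. Tightening specificity, which is what the cell-based assay did, is what let the real cases surface.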
A disease takes shape

Starting around 2011, groups from Oxford, Munich, and Melbourne began identifying patients — many AQP4-seronegative — with MOG-IgG detected by cell-based assay. Their clinical features were distinctive. Bilateral or simultaneous optic neuritis, sometimes with severe disc edema. Longitudinally extensive transverse myelitis. Acute disseminated encephalomyelitis, particularly in children. Cortical encephalitis. Between attacks, patients often recovered surprisingly well — better than typical MS or AQP4-positive NMOSD. The disease appeared steroid-responsive in ways that also distinguished it from its neighbors on the demyelinating spectrum.

This was not MS. It was not AQP4-positive NMOSD. It was something new, or rather, something old that we had finally developed the tools to see. The term MOGAD was formally adopted around 2018 and 2019 to reflect this recognition — a distinct nosological entity with its own clinical phenotype, its own demographic predilections, and its own emerging treatment ladder. In 2023, Banwell and colleagues published international consensus diagnostic criteria in The Lancet Neurology, formalizing what years of cohort data had been building toward.

Where apheresis enters

Acute attacks are typically treated with high-dose corticosteroids. For patients who don’t respond, intravenous immunoglobulin is a reasonable next step. For steroid-refractory cases, therapeutic plasma exchange enters the picture — mechanistically sensible for an antibody-mediated disease, since removing circulating IgG directly targets what appears to be the primary effector. There is a growing body of case series and retrospective data supporting PLEX in refractory MOGAD attacks, though robust prospective trial data remains limited.

The bottom line

That last sentence captures something true about MOGAD more broadly. We have a name. We have diagnostic criteria. We have a treatment ladder with reasonable mechanistic logic supporting each rung.
What we are still working out — actively, with ongoing trials — is the natural history, the optimal long-term immunosuppression, and the full spectrum of what the disease can look like, particularly in its cortical presentations. MOGAD went from animal model curiosity to false lead to distinct human disease over roughly four decades. The trajectory is a good reminder that the biology usually gets there before we do. Sometimes we’re just waiting for the right assay.
- RBC Exchange Transfusion for Babesiosis: We’ve Been Doing This for Decades. Now We Have Data.
The call comes in sometime around midnight. A patient is febrile and jaundiced, their smear shows ring forms in the red cells, and the parasitemia is sitting at 14%. The infectious disease fellow wants to know if we’ll do an exchange. The answer, for most of us, is yes — and it’s not a difficult yes. High parasitemia, signs of hemolysis, organs starting to wobble. The indication feels obvious.

For the better part of forty years, that feeling was the best we had. The first reported case of severe babesiosis treated with RBC exchange was in 1980. Since then, we’ve been doing it based on case reports, small case series, mechanistic logic, and guideline recommendations that were themselves built on case reports, small case series, and mechanistic logic. The data, such as they were, strongly suggested we were doing the right thing. Strongly suggesting and actually demonstrating are different things. Now we have something closer to an actual answer.

What We’re Dealing With

Babesiosis is a tick-borne illness caused primarily by Babesia microti, an intraerythrocytic parasite — meaning it invades and replicates inside red blood cells. It’s endemic to the northeastern and upper midwestern United States, with incidence increasing steadily in recent years, particularly in New England. The geographic range is expanding, and true incidence may be ten times higher than reported cases.

The disease spectrum is wide. Many people clear infection without knowing they had it. Others end up in the ICU. The patients most at risk for severe disease are immunocompromised — asplenic, actively malignant, post-transplant, on immunosuppression — as well as those at the extremes of age. In hospitalized patients, mortality ranges from about 3% to nearly 9% in the general population and can reach 20% in immunocompromised patients.
The parasite’s mechanism of harm is hemolysis: infected red cells rupture, releasing hemoglobin into the plasma, triggering endothelial injury, organ dysfunction, and a proinflammatory cytokine cascade that can take on a life of its own.

Why Exchange Makes Sense

RBC exchange transfusion (ET) is an extracorporeal procedure that removes the patient’s circulating red cells — infected and uninfected alike — while simultaneously replacing them with donor RBCs. The mechanistic case for it in babesiosis is straightforward: fewer infected cells means less hemolysis, less free hemoglobin circulating, and less downstream organ injury. You may also be pulling cytokines out of the circulation, though the contribution of that effect is harder to quantify.

For years, the American Society for Apheresis (ASFA) and the Infectious Diseases Society of America (IDSA) have recommended considering ET for patients with high-grade parasitemia (>10%), severe hemolytic anemia, or acute organ injury. What neither guideline could do was point to a study with a real control group and say: we know this changes outcomes.

The Evidence Problem

A 2021 retrospective chart review from Yale (O’Bryan et al., J Clin Apher) was about as rigorous as the pre-existing literature got. Ninety-one patients, single center, 2011–2017. The investigators stratified patients by peak parasitemia — <1%, 1–5%, 5–10%, >10% — and showed that virtually every marker of end-organ dysfunction worsened in a stepwise fashion with increasing parasite burden: hematocrit fell, LDH rose, bilirubin climbed, creatinine drifted up, platelet counts dropped. Nineteen patients received exchange, all with peak parasitemia ≥9% and some degree of organ dysfunction. Parasitemia dropped sharply post-exchange. The study showed what we already believed: high parasite burden is bad, and exchange reduces parasite burden.
What it could not show — because patients who received exchange were sicker at baseline than those who didn’t — is whether exchange changed outcomes. The only prior study that even attempted a comparison had twenty-eight total patients. That is the entire controlled evidence base for a procedure we’ve been doing since the Carter administration.

Enter STOP-BABESIOSIS

In March 2026, Leaf et al. published the STOP-BABESIOSIS study in JAMA Internal Medicine: a multicenter cohort of 3,233 patients hospitalized with babesiosis across 82 sites and 24 medical centers in the northeastern US, spanning 2010 to 2024. Of these, 629 met eligibility criteria for the analysis: parasitemia >10%, or 5–10% with acute organ injury or severe hemolytic anemia.

The investigators used a sequential target trial emulation (TTE) framework — worth pausing on, because it matters for how much you trust the result. Target trial emulation is an approach to observational data analysis that mimics the structure of a randomized controlled trial: you specify eligibility criteria, a treatment strategy, a defined start of follow-up, and outcomes, then apply them to real-world data. The sequential version used here enrolls patients on each of the first 7 days of hospitalization, which eliminates a specific bias problem called immortal time bias and allows confounder adjustment at the actual moment of treatment assignment rather than at a fixed earlier time point. Inverse probability of treatment weighting (IPTW) was applied to balance the groups on measured covariates. It’s rigorous methodology, and it’s increasingly the standard when an RCT is impractical.

The primary endpoint was a composite of in-hospital death or 30-day readmission. Among the 209 patients treated with ET and the 420 who were not: the composite occurred in 3.6% of ET-treated patients versus 9.8% of those not treated. Adjusted odds ratio: 0.22 (95% CI 0.09–0.51).
That’s a nearly fivefold reduction in odds, and it held up across eight sensitivity analyses — adjusted for site, year of admission, LDH, restricted to the first three days, restricted to exchanges using at least 10 units of RBCs. The result didn’t move.

What This Changes — and What It Doesn’t

The caveats are real, and the authors don’t hide them. Even after IPTW, the ET group had higher median parasitemia and a greater proportion of immunocompromised patients than the control group. Residual confounding in either direction is possible — meaning the benefit of ET could be an underestimate, but it could theoretically also be an overestimate. The total number of deaths was small, so most of the composite endpoint is driven by 30-day readmissions rather than mortality. And the study enrolled only patients in the northeastern US, where B. microti is overwhelmingly the dominant species.

An RCT is almost certainly never coming. The investigators say so directly, and they’re right: the number of sites required, the enrollment duration, the difficulty of maintaining equipoise, and the inevitability of crossover make a randomized trial effectively impossible. This is the best evidence we are likely to have. And the best evidence we are likely to have is a nearly fivefold reduction in death or readmission among severely ill patients who received exchange.

For practice, this means the ASFA/IDSA criteria — parasitemia >10%, or 5–10% with organ injury or severe hemolytic anemia — now have something behind them beyond expert opinion. It also raises questions the study doesn’t answer: which subgroups benefit most? Does exchange help patients who don’t meet the strict eligibility criteria, such as those with milder organ injury? The subgroup analyses showed consistent benefit across age, sex, immunocompromised status, and SOFA score, but the study wasn’t powered for those comparisons.
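The weighting step in this kind of analysis is conceptually simple, whatever the surrounding machinery. A toy sketch of inverse probability of treatment weighting on synthetic data, nothing from the actual study: sicker patients are made more likely to be treated, and the weights restore balance on that confounder.

```python
import random

random.seed(0)

# Synthetic cohort: higher severity -> higher probability of treatment,
# the confounding an RCT would randomize away.
cohort = []
for _ in range(1000):
    severity = random.random()
    treated = random.random() < 0.2 + 0.6 * severity
    cohort.append({"severity": severity, "treated": treated})

# Step 1: each patient's probability of treatment (the propensity
# score). Here we use the known generating model; in practice this
# is a fitted regression on measured covariates.
for p in cohort:
    p["ps"] = 0.2 + 0.6 * p["severity"]

# Step 2: weight each patient by the inverse probability of the
# treatment they actually received.
for p in cohort:
    p["w"] = 1 / p["ps"] if p["treated"] else 1 / (1 - p["ps"])

def weighted_mean_severity(group):
    return sum(p["severity"] * p["w"] for p in group) / sum(p["w"] for p in group)

treated = [p for p in cohort if p["treated"]]
control = [p for p in cohort if not p["treated"]]

# Unweighted, the treated arm is sicker; weighted, both arms sit
# near the overall cohort mean severity (~0.5).
print(weighted_mean_severity(treated), weighted_mean_severity(control))
```

This only balances measured covariates, which is exactly why the authors' residual-confounding caveat survives even a well-executed IPTW analysis.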
There is a particular kind of clinical discomfort in practicing in an evidence vacuum — ordering a procedure you believe is right, knowing the belief is built on mechanistic logic and small case series rather than data. Most of transfusion medicine lives in that space. Most of medicine does, if you’re being honest about it.

The STOP-BABESIOSIS investigators enrolled 3,233 patients across 82 sites over fifteen years to give us an answer for 629 of them who met strict eligibility criteria. That is what it takes to generate evidence in a disease this uncommon. The result is about as close to a definitive answer as we’re going to get. The data support exchange transfusion for severely ill patients with babesiosis. It took forty-six years to say that with numbers behind it.

References

- Leaf DE, et al. (STOP-BABESIOSIS Investigators). Red Blood Cell Exchange Transfusion for Severe Babesiosis. JAMA Intern Med. Published online March 30, 2026. doi:10.1001/jamainternmed.2026.0244
- O’Bryan J, Gokhale A, Hendrickson JE, Krause PJ. Parasite burden and red blood cell exchange transfusion for babesiosis. J Clin Apher. 2021;36:127–134. doi:10.1002/jca.21853
- Hypotensive Transfusion Reactions for the Overworked Fellow
The Reaction That Looks Like Everything Else

You’re called to the bedside. A patient is forty minutes into a red cell transfusion and their systolic blood pressure has dropped from 130 to 72. They’re not febrile. There’s no rash, no urticaria, no wheezing. The nurses are looking at you. The attending is on the phone. You stop the transfusion, push fluids, and the pressure comes right back up. The patient feels fine. You send your workup — DAT negative, no hemolysis, gram stain unremarkable. Everything comes back normal. What just happened?

This is a hypotensive transfusion reaction, and it is one of the more mechanistically interesting reactions we see — precisely because the mechanism has almost nothing to do with the patient’s immune system, and almost everything to do with the tubing between the bag and the vein.

The Definition

The formal criteria matter here because hypotension is common in sick patients, and not every blood pressure dip during a transfusion is a transfusion reaction. In adults, a hypotensive transfusion reaction is defined as a drop in systolic blood pressure of at least 30 mmHg, with an end systolic pressure at or below 80 mmHg, occurring during or within one hour of cessation of a transfusion. In pediatric patients, the threshold is a greater than 25% drop in systolic blood pressure from baseline.

The key distinguishing feature is what’s absent: no fever, no urticaria, no hemolysis, no signs of volume overload, no respiratory distress. The workup is conspicuously clean. This reaction announces itself by exclusion as much as by presentation.

The Mechanism: Bradykinin and the Kallikrein-Kinin System

To understand why the blood pressure drops, you need to understand what happens to blood as it moves through transfusion tubing and filters — and what that contact triggers at the molecular level. Transfusion tubing and leukoreduction filters present a negatively charged surface to the blood passing through them.
That surface contact activates Factor XII, also called Hageman factor, the initiating protease of the contact activation pathway. Once Factor XII is activated, it cleaves prekallikrein into kallikrein. Kallikrein, in turn, cleaves high-molecular-weight kininogen (HMWK) — a plasma protein that serves as a substrate — into bradykinin.

Bradykinin is a potent vasodilator. It binds to B2 receptors on vascular endothelium, triggers the release of nitric oxide and prostacyclin, and causes profound smooth muscle relaxation. The result is a rapid drop in systemic vascular resistance and, consequently, in blood pressure. Bradykinin also increases vascular permeability and can cause flushing — which you may or may not see clinically.

Under normal circumstances, bradykinin is short-lived. Its half-life is measured in seconds. Angiotensin-converting enzyme, or ACE, is one of the primary enzymes responsible for breaking it down. In a healthy patient with intact ACE activity, bradykinin generated during a transfusion is rapidly degraded before it can accumulate to clinically significant levels. This is where things get interesting.

The ACE Inhibitor Connection: A Pharmacologic Vulnerability

ACE inhibitors — the lisinopril, enalapril, and ramipril you see on nearly every medication reconciliation in cardiology and nephrology — work by blocking ACE and preventing the conversion of angiotensin I to angiotensin II. This is the intended therapeutic effect. But ACE is a promiscuous enzyme. It doesn’t just process angiotensin. It also degrades bradykinin.

In patients on ACE inhibitors, bradykinin clearance is impaired. The same contact activation that generates a tolerable bradykinin load in an untreated patient can generate a clinically significant bradykinin excess in a patient whose primary clearance mechanism is pharmacologically blocked. This is not an allergic reaction. There is no IgE, no mast cell degranulation, no antigen-antibody interaction.
It is a pharmacologic vulnerability: the drug does exactly what it was prescribed to do, and in the context of a transfusion, that’s the problem. The incidence of hypotensive transfusion reactions in patients on ACE inhibitors is meaningfully higher than in the general transfused population, though the absolute risk remains low. ACE inhibitor use is the most consistently identified risk factor in the literature. Other proposed risk factors include bedside leukoreduction (as opposed to prestorage leukoreduction), certain filter types, and possibly high infusion rates — though the evidence for these is less robust.

It’s worth pausing here to appreciate the elegance of this mechanism, even when you’re standing at the bedside at 2 AM. The same filter we use to reduce febrile reactions and HLA alloimmunization is generating the vasoactive peptide that’s dropping the blood pressure. The same drug that’s protecting the patient’s kidneys and heart is preventing them from clearing it. Medicine is frequently this kind of double-edged sword.

What You Actually Do

The good news is that hypotensive transfusion reactions are highly responsive to supportive care. Stop the transfusion, give IV fluids, and in the vast majority of cases the blood pressure recovers fully. More aggressive hemodynamic support is occasionally required but unusual.

A few things worth knowing for your management and counseling:

- Do not rechallenge with the implicated unit. Once a transfusion has been stopped for a suspected reaction, that unit does not go back up. The patient can receive a different unit if clinically indicated.
- Hypotensive reactions are stochastic. This is a reaction generated by a set of conditions during one transfusion — the contact time, the filter surface, the patient’s bradykinin clearance at that moment. It does not necessarily recur. You do not need to permanently modify future blood products based on a single hypotensive reaction.
No pre-medications are indicated. This is a point worth emphasizing because the clinical instinct after a transfusion reaction is to reach for a premedication order. Benadryl and acetaminophen do nothing for bradykinin-mediated hypotension. Prescribing them provides false reassurance without addressing the mechanism — and, as we’ll discuss in a future post, premedications carry their own problems.

For future transfusions in a patient who has had a hypotensive reaction, slow the infusion rate, monitor closely, and ensure prestorage-leukoreduced products are being used rather than bedside filtration. Discussing ACE inhibitor timing with the prescribing team is reasonable in patients who have had recurrent reactions, though evidence-based guidance on this is limited.

What We Don’t Know

Hypotensive transfusion reactions sit in a frustrating space: mechanistically coherent, but epidemiologically murky. The bradykinin story is well-established in the literature, but the clinical predictors of who will react remain poorly characterized. ACE inhibitor use is the best-validated risk factor, but most patients on ACE inhibitors are transfused without incident.

There are also open questions about the role of storage time. Bradykinin and other kinins can accumulate in blood products over the storage period, particularly in plasma-rich components. Whether older units carry a higher bradykinin burden at the time of transfusion — and whether that translates to clinical risk — is not firmly established.

And then there’s the question of underrecognition. Hypotension is common in hospitalized patients. A modest blood pressure dip during a transfusion in a patient on antihypertensives, diuretics, and vasodilators may never trigger a transfusion reaction workup. How many hypotensive transfusion reactions are quietly absorbed into the background noise of a busy floor? We genuinely don’t know.
What we do know is that the mechanism is real, it is pharmacologically explainable, and understanding it makes you a better clinician at the bedside — which is the whole point.
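One way to internalize the mechanism above is a toy pharmacokinetic sketch: constant bradykinin generation from filter contact activation, first-order clearance by ACE. Every number below is invented for illustration; only the qualitative point carries over, namely that the steady-state level scales linearly with the half-life, so pharmacologically slowed clearance raises the plateau proportionally.

```python
import math

def steady_state_level(generation_rate: float, half_life_s: float) -> float:
    """Plateau concentration for constant input with first-order elimination:
    C_ss = R / k, where k = ln(2) / t_half."""
    k = math.log(2) / half_life_s
    return generation_rate / k

# Invented numbers: identical generation rate, intact vs impaired clearance.
R = 1.0                                              # arbitrary units per second
c_intact = steady_state_level(R, half_life_s=15.0)   # half-life "measured in seconds"
c_blocked = steady_state_level(R, half_life_s=90.0)  # clearance slowed six-fold

# The plateau scales linearly with half-life: six times slower clearance,
# six times the bradykinin level for the same generation rate.
print(round(c_blocked / c_intact, 2))  # -> 6.0
```

The model deliberately ignores everything except the ratio of generation to clearance, which is the part of the story the ACE inhibitor changes.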
- How I Actually Use AI: A Case for Augmented Intelligence
The discourse has two settings, and both are wrong

Pick up any piece about artificial intelligence in medicine and you will find one of two arguments. Either AI is going to revolutionize clinical practice by automating diagnosis and replacing physician judgment, or AI is a dangerous, hallucinating black box that no responsible clinician should touch. Both camps are loud. Both camps are largely arguing past the actual experience of physicians who use these tools.

The thing both arguments have in common is that they imagine AI as an autonomous agent — something that acts independently, makes decisions, and produces outputs you simply accept or reject wholesale. That framing drives the fear and the hype equally. And it doesn't describe how I use AI, or how I think physicians should use it.

There's a better frame. It's called augmented intelligence, and the distinction matters.

What augmented intelligence actually means

Augmented intelligence is not a euphemism for AI with better PR. It describes a specific relationship between the human and the tool: the AI amplifies your thinking, your drafting, your analysis — and you retain intellectual ownership of the output. You are the decision-maker. You direct the work. You evaluate what comes back and correct it when it's wrong. The AI doesn't publish anything. You do.

This is meaningfully different from autonomous AI, which operates independently and generates outputs without ongoing human oversight. The distinction isn't just philosophical — it has real implications for how you build your workflow, how you evaluate output quality, and where accountability sits. In augmented intelligence, accountability never leaves the physician. That's not a limitation. It's the point.

What this looks like in practice

I use AI tools daily. I use Claude for writing and coding: editing blog posts, structuring arguments, generating diagrams, iterating on prose. I use Gemini for personal assistant tasks — scheduling, reminders, quick lookups.
Different tools, different jobs, same underlying principle.

When I'm drafting a post, I bring the idea, the clinical knowledge, the interpretive framework, and the editorial judgment. Claude proposes structure, generates prose, and produces things like SVG diagrams that I couldn't efficiently produce by hand. I read everything. I correct errors — and there are always errors, some subtle. I rewrite passages that don't sound like me or aren't correct. I verify factual claims against primary sources. The post that goes up is mine. The thinking is mine. The AI accelerated the production of a written artifact that represents my analysis. It did not perform the analysis.

This workflow is only valuable if I maintain that discipline. The moment I start publishing AI output I haven't critically evaluated, I've stopped practicing augmented intelligence and started practicing something more like delegation to a very fluent but unreliable assistant. Those are not the same thing.

The oversight imperative

Anyone who works in laboratory medicine already understands this intuitively, even if they haven't applied the framework to AI. We do not report analyzer results without understanding what the analyzer did. We run QC. We investigate flags. We understand the assay's limitations, its interference profile, the conditions under which it fails. When a result looks wrong, we don't shrug and report it — we investigate. The instrument is a tool. We are responsible for the result.

AI output requires exactly the same critical scrutiny. The distinctive failure mode of large language models is not that they produce obviously garbled output — it's that they produce fluent, confident, plausible-sounding output that is wrong. A traditional analyzer error usually looks like an error. An AI hallucination often doesn't. It reads like a normal sentence. It cites a study that doesn't exist in the same register as one that does. This is why oversight isn't optional.
It's not a hedge for cautious people. It's the minimum standard for using the tool responsibly. If you're accepting AI output without evaluating it, you're not practicing augmented intelligence. You're practicing something with no quality control, and in medicine, we know exactly how that ends.

The case for engaging now

I understand the instinct to wait. The tools are changing fast. The evidence base for clinical AI is immature. The regulatory landscape is unclear. Sitting it out feels like the prudent move.

But physicians who opt out aren't avoiding risk — they're just outsourcing the learning curve. Someone is going to set the norms for how AI gets used in your institution, your specialty, your practice environment. It will either be clinicians who have hands-on experience with the tools and understand their limitations, or it will be administrators, vendors, and policy-makers who don't see patients. The physicians who engage critically now — who build workflows with real oversight, who learn where the tools fail, who can articulate what responsible use actually looks like — are the ones who will be positioned to shape those norms. The ones who wait will have AI handed to them later, implemented by people who weren't asking the right questions.

I'd rather be in the first group. I'd rather have colleagues in medicine who are in the first group.

Augmented intelligence, done right, is not about ceding judgment to a machine. It's about using a powerful tool with the same rigor we bring to every other tool in medicine. We validate. We monitor. We maintain accountability. That's not fear-mongering and it's not hype. It's just good practice.
- Granulocyte Transfusions for the Overworked Fellow
The patient you can't ignore

Picture the consult. Profound neutropenia — ANC in the double digits. Documented fungal infection. Forty-eight hours of broad-spectrum antifungals and still febrile. The primary team is running out of moves. Someone suggests granulocyte transfusions. You nod. You place the consult. You mobilize a donor. And somewhere in the back of your mind, a small voice asks: does this actually work?

That voice deserves an answer. The honest answer, unfortunately, is that we're not sure.

Why the idea makes sense

The logic is clean. Neutrophils kill bacteria and fungi. If a patient has no neutrophils — from chemotherapy, from bone marrow failure, from a primary immunodeficiency like chronic granulomatous disease — they can't mount an effective innate immune response. So we give them neutrophils from the outside. It's the same rationale as any component transfusion: if the patient can't make enough of something critical, and the deficit is causing harm, we try to make up the difference. We do it with red cells. We do it with platelets. Why not neutrophils?

The problem is that logic and evidence are different things. And in transfusion medicine, we have a long history of confusing the two.

The evidence, such as it is

To be clear: we have been trying to answer this question for a long time. There are decades of trials in the granulocyte literature. The field has not been idle. The issue is not a lack of effort — it's that the evidence we've accumulated is genuinely hard to interpret.

Early trials from the 1970s and 1980s showed some promising signals, but they were small, underpowered, and conducted before the era of modern antimicrobial therapy. Patient populations were heterogeneous. Organisms were different. Underlying diseases were different. Comparing across trials is difficult, and drawing conclusions from any individual one is precarious.
More recently, the RING trial — the Resolving Infection in Neutropenia with Granulocytes trial — made a serious attempt to answer the question with a properly designed randomized controlled trial. It was larger and more rigorous than anything that came before. It had a mortality endpoint. It was the study the field needed. It did not show a survival benefit.

But here's where honest interpretation matters. The RING trial's negative result doesn't necessarily mean granulocytes don't work. The trial faced a fundamental problem: dose. The doses actually delivered to patients were lower than what was considered potentially therapeutic, in part because of the inherent variability in granulocyte collection. Donors were stimulated with G-CSF and dexamethasone, yields varied between donors, and there was no reliable way to guarantee a therapeutic dose on any given day. If you can't reliably deliver the intervention, you can't interpret the result — at least not cleanly.

This is not a minor methodological quibble. It goes to the heart of what the trial can and cannot tell us. RING is the best evidence we have. It is also evidence that came with a major confounder baked in.

The survival curves didn't look dramatically different. The microbiological response data were encouraging in some subgroups and not in others. Secondary endpoints were mixed. You can read the RING trial and come away thinking granulocytes failed a fair test, or you can come away thinking the test itself wasn't quite fair. Both readings are defensible.

We have not arrived at a definitive answer. We may not for a long time.

The amphotericin rule nobody can fully justify

If you've ever been involved in a granulocyte course, you've heard this: separate the granulocytes from the amphotericin. Don't give them at the same time. Space them out — 12 hours if you can. This is institutional gospel in most centers that do granulocyte transfusions. It's in the AABB Technical Manual. People follow it without question.
Here's what it's actually based on: one paper from 1981 describing pulmonary toxicity in patients who received concurrent granulocytes and amphotericin B. One paper. There were also some in vitro and animal data that suggested a plausible mechanism. That was enough to generate a widespread practice recommendation.

What happened next is instructive. Subsequent clinical studies — multiple of them — tried to confirm this finding and couldn't. The signal didn't replicate. Patients who received granulocytes and amphotericin close together did not consistently have worse pulmonary outcomes than those in whom the infusions were separated.

And yet the practice persisted. The AABB Technical Manual still recommends separation. Centers still coordinate timing. Fellows still field late-night calls about when the liposomal amphotericin was given and whether there's enough of a window. This is how medical dogma works. A case series raises concern. The concern gets institutionalized. Later evidence fails to confirm it. The institution doesn't notice.

To be clear: there may still be a real interaction. The absence of evidence is not evidence of absence, and the subsequent studies had their own limitations. Separating infusions is low-cost in most clinical situations. But when someone asks you why, the honest answer is: we're not entirely sure, and the original data that started this practice are weaker than the strength of the recommendation would suggest.

A dose we mostly extrapolated

The conventional therapeutic dose target for granulocyte transfusions is at least 1 × 10¹⁰ granulocytes per transfusion. This number comes from dose-response analyses suggesting that below this threshold, there's minimal ANC increment and possibly minimal clinical effect.

There are a few problems with this. First, collection yields are highly variable.
Donors are stimulated with G-CSF and dexamethasone before apheresis, which significantly increases peripheral neutrophil counts and therefore collection efficiency. But even with stimulation, yields vary substantially between donors. Hitting the 1 × 10¹⁰ target is not guaranteed. The RING trial demonstrated this empirically — actual delivered doses in the trial were often below what was intended.

Second, the dose target itself is derived from indirect data. We're using ANC increment as a surrogate for clinical effect, which assumes the transfused neutrophils are functioning effectively after infusion and trafficking to sites of infection. There's evidence they do — labeled granulocytes have been shown to migrate to infection sites — but this is distinct from demonstrating that the dose-response relationship for ANC increment maps neatly onto a dose-response relationship for survival.

Third, we dose by weight (roughly 0.6 × 10⁹ cells/kg as a lower threshold), but we collect a product whose yield is largely determined by donor biology. You can stimulate better. You can select donors with high baseline neutrophil counts. But you can't fully control what you get. The mismatch between what we target and what we deliver is a persistent feature of granulocyte therapy, not a solvable logistics problem.

What to do with all this uncertainty

Granulocyte transfusions are still used. At centers with the infrastructure to collect and process them — which is not everywhere — they remain an option for patients with severe neutropenia and refractory infections, particularly in the setting of primary immunodeficiencies or when marrow recovery is anticipated. The biological rationale is sound. The clinical experience is real, even if it's hard to quantify in controlled trials.

But we should be honest about what we're doing when we order them. We're making a judgment call in the face of genuine uncertainty. We're not executing a protocol backed by level-one evidence.
We're doing what makes mechanistic sense for a patient who is out of other options, knowing that our best randomized trial couldn't definitively prove benefit. That's okay. Clinical medicine involves a lot of this. The problem isn't uncertainty — it's the pretense of certainty. The fellow who confidently states that granulocytes improve survival is wrong. The fellow who confidently states they don't is also wrong. The right answer is that we tried hard to find out, the trial had a fatal flaw in its ability to deliver the intervention reliably, and we're still waiting for better data. Knowing the limits of the evidence is not a failure of clinical knowledge. It is the clinical knowledge.
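The dose arithmetic discussed above (an absolute target of at least 1 × 10¹⁰ granulocytes per transfusion, and a weight-based floor of roughly 0.6 × 10⁹ cells/kg) can be sketched in a few lines. The thresholds come from this post; the function and its names are illustrative, not a clinical protocol:

```python
ABSOLUTE_TARGET = 1.0e10   # granulocytes per transfusion (threshold from the post)
PER_KG_FLOOR = 0.6e9       # granulocytes per kg of recipient weight (from the post)

def meets_dose_targets(collected_yield: float, recipient_weight_kg: float) -> dict:
    """Compare a single collection's yield against both thresholds."""
    weight_based_floor = PER_KG_FLOOR * recipient_weight_kg
    return {
        "absolute_target_met": collected_yield >= ABSOLUTE_TARGET,
        "weight_based_floor_met": collected_yield >= weight_based_floor,
        "weight_based_floor": weight_based_floor,
    }

# For a 70 kg adult the weight-based floor works out to 4.2e10 cells, well
# above the absolute per-transfusion target: a respectable-looking yield can
# satisfy one threshold and miss the other.
result = meets_dose_targets(collected_yield=2.5e10, recipient_weight_kg=70.0)
print(result["absolute_target_met"], result["weight_based_floor_met"])  # -> True False
```

Running the numbers this way makes the post's point concrete: the target is fixed, but the yield and the recipient's weight are not, so "therapeutic dose" is a moving goalpost.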
- Transfusion Medicine: The Invisible Consult Service
There is a particular kind of email that transfusion medicine physicians learn to recognize. It arrives a day or two after an event — a transfusion reaction, a complicated crossmatch, a patient with antibodies nobody quite knew what to do with. The subject line is something like "quick question" or "following up," and the body begins: "I wasn't sure if I was supposed to call you."

You weren't sure if you were supposed to call us. This is not a failure of clinical judgment. It is a failure of visibility — and it is one of the most common problems in transfusion medicine, at almost every institution I have ever encountered.

Transfusion medicine occupies a strange position in the hospital ecosystem. We are essential infrastructure — the blood bank is running constantly, processing samples, issuing products, catching incompatibilities before they reach patients — but we are largely invisible to the clinicians ordering the blood. We are the electrical grid. You don't think about us until the lights go out.

The problem has two distinct roots, and they compound each other.

The first is awareness. Many clinicians — including experienced hospitalists, surgeons, and intensivists — do not know that a transfusion medicine consultation service exists, or that there is a physician available to answer questions around the clock. They know there is a blood bank. They may not know there is a board-certified physician attached to the blood bank.

The second is uncertainty about when to call. Even clinicians who know we exist often hesitate, unsure whether their situation is "bad enough" to warrant a consult. A patient ran a fever during a transfusion — is that ours? The blood bank flagged an antibody — does someone need to talk to me? There is no obvious threshold, no shared mental model of what transfusion medicine is for beyond the most catastrophic scenarios.

The result is a gap. Reactions get managed in isolation. Antibody workups proceed without clinical context.
Patients occasionally get the right outcome anyway — and occasionally don't. The febrile non-hemolytic transfusion reaction is a useful illustration of both problems at once. FNHTR is common, manageable, and almost never dangerous. Stop the transfusion, give acetaminophen, observe, document. Most hospitalists handle this appropriately without ever calling anyone. That's correct — FNHTR does not require a transfusion medicine consult. But here is where it gets complicated: FNHTR is a diagnosis of exclusion. You can only call it benign after you've ruled out the things that aren't — acute hemolytic reaction, septic transfusion reaction, early TRALI. The fever threshold matters. The hemodynamic picture matters. The timing matters. And a hospitalist who has never been walked through that differential is making a judgment call without a map. Most of the time, the call is right. But "most of the time" is a fragile foundation for patient safety, and the gap between managed correctly in isolation and should have called us is narrower than it looks in the moment. I made a resource. It's linked below — a one-page clinical reference for exactly this decision: when to call transfusion medicine, when to monitor, and what to look for in the five reactions that cannot be missed. It is not a substitute for a consult when you are unsure. That's the other thing I want to say plainly: uncertainty is a valid reason to call. You do not need to have a confirmed hemolytic reaction in front of you to page transfusion medicine. You just need to be unsure. That's enough. We exist. We are available. We want to hear from you before things go wrong — and that is not a high bar. It is just a call.
- Your Transfusion Reaction Started in the Processing Facility
If you trained anything like I did, you learned transfusion medicine in two separate silos. One bucket: processing. Leukoreduction, irradiation, CMV testing, storage conditions, expiration dates. The other bucket: clinical reactions. Febrile nonhemolytic transfusion reactions, allergic reactions, hypotension, TACO, TRALI. Two completely different lectures, two different shelf exam questions, two different mental filing cabinets.

Here's the thing. They're the same story told from different ends. Every decision made during processing has a downstream clinical consequence — sometimes immediate, sometimes delayed, sometimes baked into institutional policy so old that nobody remembers why it exists. Understanding transfusion medicine means collapsing those two silos into one.

Let me show you what I mean with four examples.

Leukoreduction → FNHTRs and CMV

A febrile nonhemolytic transfusion reaction, or FNHTR, is defined as a temperature of at least 38°C with a rise of at least 1°C — or rigors — occurring during or within four hours of the cessation of transfusion. Classically, we're taught that FNHTRs result from cytokine buildup in the unit. That teaching is correct, but it skips the part that makes it interesting.

During storage, white blood cells in a blood unit don't just sit there. They die, and as they do, they release cytokines — IL-1, IL-6, TNF-α — that accumulate in the unit over time. By the time that bag of red cells or platelets hangs, it may be carrying a meaningful cytokine payload. Infuse it fast enough, and your patient spikes a fever. Not because of anything intrinsically wrong with the unit. Because you just infused a bag of inflammatory soup.

Pre-storage leukoreduction — filtering out the white cells before storage, rather than at the bedside — eliminates the problem at its source. The cytokines never accumulate because the cells that produce them are gone. This is not a trivial distinction: universal leukoreduction significantly reduced FNHTR rates.
When we moved from selective to universal leukoreduction in the early 2000s, febrile reactions dropped substantially.

But leukoreduction's second accomplishment often gets less airtime, and it deserves more. White blood cells are the primary vector for transfusion-transmitted CMV. CMV is a herpesvirus that establishes latency in leukocytes, and in immunocompetent recipients, transfusion-transmitted CMV is generally clinically silent. In immunocompromised patients — transplant recipients, patients with HIV, premature neonates — it can be devastating.

For decades, the solution was CMV seronegative blood: test donors, restrict CMV-negative products to high-risk recipients. The problem is that seronegative status is imperfect. Donors in the window period before seroconversion will test negative and still carry latent virus. Leukoreduction offers a mechanistically cleaner solution: remove the cells that harbor the virus, and you've addressed the problem regardless of serologic status. Current evidence supports leukoreduced blood as equivalent to seronegative blood for CMV-safe transfusion.

One processing step. Two major clinical problems addressed.

Bedside Filtration → Hypotensive Reactions

Here's where it gets interesting. If leukoreduction is so effective, why does it matter when you filter?

The shift from bedside to pre-storage leukoreduction wasn't driven purely by logistics, though the workflow advantages are real. It was also driven by a safety signal. Bedside leukoreduction filters activate the contact pathway of coagulation. That activation generates bradykinin, a potent vasodilator. In most patients, bradykinin is rapidly degraded by angiotensin-converting enzyme, or ACE. But in patients on ACE inhibitors, that degradation pathway is blocked. Bradykinin accumulates, blood pressure drops, and you have a hypotensive transfusion reaction with no fever, no urticaria, no obvious allergic trigger.

The processing method determined the patient's risk profile.
I'll come back to this one — the bradykinin story is deep enough to deserve its own post — but the principle is the same: a decision made upstream in processing showed up at the bedside.

Storage Lesion → Neonatal Practice

Red blood cells are not static objects. From the moment they're collected, they change. 2,3-DPG — the molecule that facilitates oxygen offloading from hemoglobin — drops within the first two weeks of storage. Potassium leaks out of the cells and accumulates in the supernatant. The cells become less deformable, less able to squeeze through small capillaries. Microparticles shed from the cell membrane. Collectively, these changes are called the storage lesion.

In adult patients with normal physiology, the clinical significance of the storage lesion has been debated extensively. Large randomized trials — ABLE, INFORM, RECESS — have largely failed to show meaningful harm from older blood in most adult populations. The cells aren't great, but adults are fairly forgiving.

Neonates are less so. A neonate receiving a large-volume transfusion is exposed to every consequence of the storage lesion in concentrated form. Hyperkalemia from stored red cell supernatant can trigger arrhythmias. Impaired oxygen delivery from 2,3-DPG-depleted cells matters when your patient weighs 700 grams. Deformability matters when you're perfusing vessels measured in microns.

This is why neonatal transfusion practice looks so different from adult practice. Fresher units are preferred — the evidence that older units are truly catastrophic for neonates is less definitive than the physiologic concern might suggest, but the caution is reasonable given the stakes. Small-volume aliquots, often washed to reduce potassium load. CMV-safe products. And irradiation — which brings us to the fourth thread.

Irradiation → TA-GvHD

Transfusion-associated graft-versus-host disease, or TA-GvHD, is rare. It is also, when it occurs, nearly universally fatal — mortality exceeds 90%.
That combination makes it one of the most important complications in transfusion medicine, and one of the clearest illustrations of why processing decisions are clinical decisions.

Here's the mechanism. Cellular blood products contain viable donor T lymphocytes. In an immunocompetent recipient, those donor T cells are recognized as foreign and eliminated. In an immunocompromised recipient — or in certain other vulnerable populations — they aren't. The donor T cells engraft, proliferate, and begin attacking the host's tissues: skin, liver, gut, bone marrow. The host's own immune system, suppressed or naïve, cannot mount a response. The result is a graft-versus-host syndrome with no good treatment options and very few survivors.

The at-risk populations are broader than most people initially assume. Congenital immunodeficiencies, hematologic malignancies, stem cell transplant recipients, and neonates are the obvious ones. Less obvious: patients receiving HLA-matched cellular products, or directed donations from first-degree relatives — situations where the donor and recipient share enough HLA antigens that the recipient's immune system fails to recognize the donor T cells as foreign, even in a host who is otherwise immunocompetent.

Irradiation prevents TA-GvHD by delivering a targeted dose of gamma or X-ray radiation to the blood product, rendering donor T lymphocytes incapable of proliferation. The cells are still present — irradiation doesn't remove them — but they can't engraft and they can't divide. The threat is neutralized before the product ever reaches the patient.

This is about as direct a processing-to-outcome link as exists in transfusion medicine. A near-universally fatal complication, preventable entirely by a modification applied hours or days before transfusion. The clinician at the bedside never touches it. The outcome depends entirely on whether the right box was checked upstream.

The Punchline

Processing isn't logistics. It's upstream medicine.
The decisions made in processing — when to filter, how to store, what modifications to apply — are clinical decisions, even if the clinicians ordering transfusions rarely think of them that way. When a neonate avoids a hyperkalemic arrest, it's because someone understood the potassium curve on stored blood. When an immunocompromised patient doesn't get CMV, it's because of a filter applied hours before the product ever left the refrigerator. When a patient on lisinopril doesn't bottom out their blood pressure, it's because someone switched from bedside to pre-storage leukoreduction and understood why it mattered. When a post-transplant patient doesn't die of TA-GvHD, it's because a box got checked in a processing facility they'll never set foot in. The two silos were always one subject. We just taught them wrong.
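As a coda, the FNHTR definition quoted at the top of this post (temperature of at least 38°C with a rise of at least 1°C from baseline, or rigors, during or within four hours of the end of transfusion) reduces to a small predicate. This is a teaching sketch of the definition only, with illustrative parameter names; FNHTR remains a diagnosis of exclusion, so meeting these criteria says nothing until the dangerous mimics are ruled out:

```python
def meets_fnhtr_criteria(temp_c: float,
                         baseline_temp_c: float,
                         hours_since_end: float,
                         rigors: bool = False) -> bool:
    """True if the temporal window and either the fever criterion or rigors
    are met. A sketch of the surveillance definition, not a diagnostic tool."""
    within_window = hours_since_end <= 4.0   # values <= 0 cover "during transfusion"
    fever = temp_c >= 38.0 and (temp_c - baseline_temp_c) >= 1.0
    return within_window and (fever or rigors)

print(meets_fnhtr_criteria(38.4, 37.0, hours_since_end=2.0))  # -> True: fever criterion met
print(meets_fnhtr_criteria(38.4, 37.8, hours_since_end=2.0))  # -> False: rise under 1°C
```

Note that the second case fails on the rise criterion even though the absolute temperature clears 38°C, which is exactly the kind of detail a hospitalist working without the definition in hand can miss.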
- Jehovah's Witnesses and Blood: The Guidance Changed. The Complexity Didn't.
On March 20, 2026, the Governing Body of Jehovah's Witnesses issued Governing Body Update #2. In a video address, member Gerrit Lösch announced that members may now decide for themselves whether to have their own blood drawn, stored, and later reinfused during medical or surgical care.

The prohibition on allogeneic transfusion — receiving blood from another person — remains firmly in place. But preoperative autologous deposit, long explicitly forbidden, has been moved into the "personal conscience" category. The theological rationale was concise: "The Bible does not comment on the use of a person's own blood in medical and surgical care."

I've been thinking about this a lot since it dropped. Not just as a news item, but as a transfusion medicine physician who has spent years navigating the clinical and ethical complexity that Jehovah's Witness patients bring to the blood bank. This policy shift is significant. It's also worth understanding clearly — because the coverage so far has been long on theological analysis and short on what any of this actually means from where I sit.

What We're Talking About, Clinically

Preoperative autologous donation (PAD) is exactly what it sounds like. A patient donates their own blood — typically between six weeks and five days before a scheduled surgery — which is processed and stored at a blood bank or hospital transfusion service. If transfusion becomes necessary during or after the procedure, the patient receives their own blood back. If it isn't needed, the unit is discarded.

PAD is not a new technique. It's been around for decades. Its advantages are real: no risk of alloimmunization, no risk of transfusion-transmitted infection, lower likelihood of immune-mediated transfusion reactions. Its drawbacks are also real: preoperative phlebotomy can induce or worsen anemia, and the blood still requires the same processing and storage infrastructure as allogeneic donations. It is not a casual or universally available option.
More on that in a moment.

The Conscience Zone Was Already a Patchwork

Here's what I find genuinely fascinating about this update: it's being covered like a dramatic reversal, but the conscience zone was already wide before March 20th.

"Conscience zone" is my shorthand for the category of practices the Watch Tower Society has long designated as individual decisions — neither mandated nor prohibited, left to each member to resolve according to their own beliefs. Intraoperative cell salvage, acute normovolemic hemodilution, cardiopulmonary bypass, dialysis, epidural blood patches — all individual-decision items for years. The zone is wider now. But it was already wide.

More importantly: the official doctrine has never fully captured what actually happens in clinical practice. I've cared for Jehovah's Witness patients who would accept platelets. I've worked with patients who would accept directed donations from members of their own congregation. I've seen patients draw their own lines in places the official guidance didn't put them — navigating their faith and their medical situation in ways that were entirely their own. Jehovah's Witness patient care has always been variable, because the patients are people, not policy documents.

What this update does is formalize something that experienced clinicians already knew: there is no single answer to "what will my Jehovah's Witness patient accept?" There never was. The conscience zone just got wider, which means the conversation at the bedside just got more important.

What This Means for the Blood Bank

So what actually changes operationally? Potentially quite a bit — for patients who want to pursue PAD and have access to it. Blood banks that offer autologous donation programs will need to be prepared for Jehovah's Witness patients presenting for preoperative collection. This isn't a simple extension of existing workflows. Autologous units carry specific labeling requirements and storage handling.
There are consent considerations unique to this population — patients will need clear information about the anemia risk, the storage logistics, and the fact that unused units are discarded rather than entering the general blood supply. For some Jehovah's Witness patients, that last point may matter doctrinally.

Surgeons and anesthesiologists planning cases involving Jehovah's Witness patients will need to update their conversations. The reflexive assumption that a Jehovah's Witness patient will decline all banked blood products is no longer accurate. These patients may now arrive at the OR with autologous units available — but only if someone asked, offered, and made the referral in time. The window for PAD is finite. A patient referred for major elective surgery with a two-week lead time cannot take advantage of this option.

And that's before we get to the institutional side. Not every hospital has an autologous donation program. Not every blood bank has the capacity or infrastructure. The patients most likely to benefit are those undergoing planned, elective procedures at well-resourced academic medical centers — which is not the only place Jehovah's Witness patients receive surgical care.

The Practical Limits of Personal Conscience

This is where I want to pump the brakes on the more celebratory takes I've seen. The framing of this update — each Christian must decide for themselves — positions the change as an expansion of individual autonomy. And in a doctrinal sense, it is. But autonomy without access isn't really autonomy.

Jehovah's Witnesses number approximately 9.2 million worldwide, across more than 200 countries. The infrastructure to support preoperative autologous donation does not exist uniformly across those settings. In much of the world, the option the Governing Body has now made permissible is simply not available. The theological door has opened, but the operational corridor behind it is narrow and unevenly distributed.
There's also the question of social pressure, which former members have been vocal about. The update frames this as conscience — but conscience operates inside a community. The Watch Tower Society has a long history of framing individual decisions within a framework of spiritual accountability. Moving something to the "personal decision" category is not the same as removing the social weight attached to that decision. A patient who now technically may accept PAD is making that choice in a social and ecclesiastical context that still shapes what choices feel available.

That's not a reason to dismiss the update. It matters that the prohibition has been lifted. But clinical teams working with Jehovah's Witness patients should not assume that "it's now allowed" translates automatically into "patients will feel free to accept it." The conversation still requires care, privacy, and time.

Where This Leaves Us

The transfusion medicine community has spent decades developing expertise in bloodless surgical programs, autologous techniques, and the clinical and ethical navigation of Jehovah's Witness patient care. That expertise doesn't become less relevant now — if anything, it becomes more so. What this update requires from us is updated fluency: knowing what changed, understanding the practical and doctrinal distinctions that remain, and meeting patients where they actually are rather than where the policy says they could be.

The conscience zone just got wider. Our job is to help patients navigate it — without assuming the map is simpler than it is.
- A Primer on Hereditary Hemochromatosis for the Overworked Fellow
I was reviewing charts on the hemochromatosis protocol during my transfusion medicine fellowship when I came across a patient with iron overload severe enough to require ongoing therapeutic phlebotomy — and a completely wild-type HFE panel. No C282Y. No H63D. No S65C. Just normal.

I had just finished writing the service guide, which included a brief section on HFE alleles and genotypes. I had written a sentence about this exact scenario: “Occasionally you will see patients with iron overload and a WT HFE locus. This probably means they have another type of HH.” I had written that sentence and moved on. I had no idea what it actually meant.

So I went down the rabbit hole. What I found reframed everything I thought I knew about hemochromatosis — and I think it’ll do the same for you.

Hemochromatosis Is a Hepcidin Story

Here is the reframe: hereditary hemochromatosis is not, at its core, a story about HFE. It’s a story about hepcidin.

Hepcidin is a small peptide produced by hepatocytes, and it is the master regulator of iron homeostasis. The mechanism is elegant. Hepcidin binds to ferroportin — the only known iron exporter in the human body — and tags it for internalization and degradation. When hepcidin is high, ferroportin disappears from the cell surface. Iron stays trapped inside enterocytes, macrophages, and hepatocytes. When hepcidin is low, ferroportin is abundant. The gut absorbs iron without restraint.

In hereditary hemochromatosis, regardless of the gene involved, the unifying pathophysiology is hepcidin deficiency relative to iron burden. The iron accumulates because the hormone that should be putting the brakes on iron absorption isn’t doing its job.

HFE is not hepcidin. HFE is one of several upstream signals that tell the liver to make hepcidin in the first place. And that distinction explains everything.

The Sensing Circuit

Think of hepcidin production as the output of a sensing circuit. The liver is constantly asking: how much iron is out there?
The answer comes from multiple inputs, and several proteins are involved in integrating those signals. HFE, transferrin receptor 2 (TFR2), and hemojuvelin (HJV) all participate in sensing transferrin saturation and stimulating hepcidin expression. HJV acts as a BMP co-receptor, and both HFE and TFR2 modulate downstream BMP/SMAD signaling. Mutations in any of them produce the same functional consequence: the liver underestimates iron burden, hepcidin production is insufficient, and ferroportin runs unchecked.

HAMP is the gene that encodes hepcidin itself. Mutations here skip the sensing problem entirely — you’re not impairing the signal circuit, you’re eliminating the signal.

SLC40A1 encodes ferroportin. Mutations here operate at the other end of the pathway entirely, at the effector rather than the sensor. And as we’ll get to, ferroportin disease is its own special category.

The Four Types, and Why They’re Not All the Same

Type 1 — HFE

This is the one we learn in medical school and then assume is the whole story. HFE mutations are the most common cause of HH, with C282Y homozygosity the genotype most strongly associated with clinical disease. Onset is typically in late adulthood, often amplified by additional iron-loading exposures like alcohol use or chronic ineffective erythropoiesis. Menstruating individuals are partially protected by blood losses until menopause. Penetrance is lower than we historically believed — many C282Y homozygotes never develop symptomatic disease.

Compound heterozygosity (C282Y/H63D) causes milder disease. H63D homozygosity milder still. S65C, the least common of the HFE alleles, is associated with mild to moderate iron overload when homozygous, and a single copy is generally not enough on its own to cause clinically significant disease. A single copy of any HFE allele typically isn’t sufficient.

Type 2 — HJV or HAMP

Here is where things escalate.
Type 2, also called juvenile hemochromatosis, presents in the first or second decade of life. Type 2A involves HJV, Type 2B involves HAMP. Both are autosomal recessive, both are rare, and both are aggressive. Because iron accumulation begins in childhood, end-organ damage — particularly cardiac and endocrine — accumulates early. Without treatment, fatal cardiomyopathy by the third decade of life is not a hypothetical.

This is not a disease you find incidentally on routine iron studies in a 50-year-old. A fellow who has only ever managed Type 1 may not be thinking about HH in a young patient with unexplained iron overload, elevated transferrin saturation, and a normal HFE panel. That blind spot can have real consequences.

Type 3 — TFR2

Type 3 HH is caused by mutations in TFR2 — one of those upstream sensors feeding into the hepcidin circuit — and is intermediate in severity and onset, typically presenting in early adulthood. It is autosomal recessive and rare, with most reported cases from Mediterranean populations. Clinically it resembles Type 1 more than Type 2, though it tends to present earlier. If Type 1 is the late-night slow burn, Type 3 is the same fire with an earlier start time.

Type 4 — SLC40A1 (Ferroportin Disease)

Type 4 is the most mechanistically interesting, and the one most likely to trip you up.

Type 4A is a loss-of-function mutation in ferroportin. Iron accumulates preferentially in macrophages rather than parenchymal cells, because ferroportin is how macrophages export the iron they’ve scavenged from senescent red blood cells. When ferroportin doesn’t work, that iron is trapped. Serum ferritin can be markedly elevated — because ferritin leaks from iron-laden macrophages — while serum iron and transferrin saturation are low. This is the opposite pattern from classic HH. Patients may also become anemic with phlebotomy more quickly than expected, because their macrophages can’t release stored iron to support erythropoiesis.
Type 4B is a gain-of-function mutation that makes ferroportin resistant to hepcidin. The brake exists; the car just doesn’t respond to it. This behaves more like classic HH: elevated transferrin saturation, parenchymal iron loading, and good response to phlebotomy. Both subtypes are autosomal dominant — which means a family history may be easier to elicit than in the recessive types, and a single pathogenic allele is enough.

Back to the Wild-Type

When you encounter iron overload with a normal HFE panel, the differential isn’t just “secondary causes.” Depending on the clinical picture — especially the patient’s age, the pattern of iron deposition, and family history — it’s worth asking whether you’re looking at Type 2, 3, or 4. Extended genetic testing panels exist. A hematologist or geneticist may be a useful colleague.

And then there’s the patient I encountered who had wild-type results across the full panel — not just HFE, but HJV, HAMP, TFR2, and SLC40A1 as well. No known pathogenic variant anywhere in the circuit. Just iron overload that didn’t have a name we could give it yet. The most likely explanation is a mutation in a gene we haven’t characterized — which is to say, the circuit we’ve described is probably not complete.

The bigger takeaway, though, is the same one that started this post. Hemochromatosis is a disease of hepcidin deficiency. Once you see it that way, the genetics stop feeling like rote memorization and start feeling like variations on a theme. HFE, HJV, HAMP, TFR2, SLC40A1 — they’re all part of the same story. Some are upstream sensors, one is the signal itself, one is the effector. The iron accumulates because somewhere in the circuit, the brake is broken.

A wild-type HFE result doesn’t mean there’s no hemochromatosis. It means you need to look upstream, downstream — or possibly somewhere we haven’t mapped yet.
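If it helps to see the whole circuit in one place, the taxonomy above collapses into a small lookup table. This is an illustrative Python sketch, not clinical guidance: the genes, inheritance patterns, and circuit roles are as described in this post, and the onset labels are approximate shorthand rather than diagnostic criteria.

```python
# Illustrative summary of the HH taxonomy described in this post.
# Reference sketch only; onset labels are approximate shorthand.
HH_TYPES = {
    "1":  {"gene": "HFE",     "role": "sensor",   "inheritance": "autosomal recessive", "typical_onset": "late adulthood"},
    "2A": {"gene": "HJV",     "role": "sensor",   "inheritance": "autosomal recessive", "typical_onset": "first/second decade"},
    "2B": {"gene": "HAMP",    "role": "signal",   "inheritance": "autosomal recessive", "typical_onset": "first/second decade"},
    "3":  {"gene": "TFR2",    "role": "sensor",   "inheritance": "autosomal recessive", "typical_onset": "early adulthood"},
    "4A": {"gene": "SLC40A1", "role": "effector", "inheritance": "autosomal dominant",  "typical_onset": "adulthood"},
    "4B": {"gene": "SLC40A1", "role": "effector", "inheritance": "autosomal dominant",  "typical_onset": "adulthood"},
}

def non_hfe_differential():
    """Types still on the table when the HFE panel is wild-type."""
    return sorted(t for t, info in HH_TYPES.items() if info["gene"] != "HFE")
```

Laying it out this way makes the post's central point visible at a glance: every row is a break somewhere in the same sensor-signal-effector circuit, which is why a wild-type HFE result narrows the differential rather than closing it.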
- More Is Not More: Hepcidin and the Counterintuitive Science of Iron Dosing
In my last post on donor iron deficiency, I buried the most interesting part. Most of the piece covered what the field has established: donation depletes iron, ferritin screening is underutilized, the HEIRS and STRIDE trials make a reasonable case for supplementation, and the AABB has recommendations in place. All of that holds.

But near the end, almost as a footnote, I mentioned that the original HEIRS trial used daily iron dosing — and that subsequent evidence suggests daily dosing may actually inhibit absorption by triggering the release of hepcidin. I've been thinking about that footnote ever since. It deserves more than a footnote.

A Brief Introduction to Your Iron Gatekeeper

Hepcidin is a small peptide hormone made by the liver, and its job is to regulate how much iron enters circulation. When iron stores are adequate, hepcidin is secreted, binds to ferroportin — the channel that exports iron from cells into the bloodstream — and shuts the door. When stores are low, hepcidin falls and the door opens. It is an elegant feedback loop, and under normal circumstances it works well.

What makes hepcidin relevant to donor supplementation is a less intuitive property: it also responds acutely to oral iron ingestion. A single dose of 60 mg or more of elemental iron — roughly what you find in a standard over-the-counter supplement — triggers a hepcidin spike that sets in within hours and persists for approximately 24 hours before returning to baseline. While hepcidin is elevated, absorption from any subsequent dose is meaningfully suppressed. The implication for how we advise donors to supplement follows directly from this.

The Problem With 'Take Iron Daily'

The instinct to recommend daily iron supplementation is understandable. More doses, more iron in, faster repletion. It is the same logic that leads to split dosing — take it twice a day to maximize the total amount ingested. Both approaches are intuitive. Both are, at least partially, counterproductive.
A 2015 study by Moretti and colleagues, published in Blood, was among the first to characterize this effect in humans. They showed that a morning iron supplement triggers sufficient hepcidin elevation to reduce absorption from a dose given later the same day — and that the response persists into the following morning. Split dosing compounded the problem: dividing the daily dose produced higher hepcidin and lower fractional absorption per dose, not better total uptake.

The 2017 Stoffel et al. trial in Lancet Haematology tested the logical alternative prospectively. Women randomized to alternate-day supplementation absorbed significantly more iron — both in fractional terms (21.8% vs. 16.3%) and in total — compared to those taking supplements daily. Allowing hepcidin to return to baseline between doses improved the efficiency of each one. Subsequent work confirmed that morning timing matters too: hepcidin follows a circadian pattern and is lower in the morning, making that the optimal window before the post-dose spike closes the door again.

The practical upshot is that a donor who dutifully takes iron every morning may be absorbing less per dose than one who takes the same dose every other morning. The body's own regulatory response is working against the intervention.

What the Data Don't Yet Tell Us

The alternate-day evidence is compelling, but almost none of it was generated in blood donor populations specifically. Most studies enrolled iron-depleted or iron-deficient women — a related but not identical context. Donors vary considerably in baseline iron status, sex, age, donation frequency, and the degree of deficiency at the time of supplementation. Whether the absorption advantage of alternate-day dosing holds consistently across this range is not yet established.

The 2024 meta-analysis of daily versus alternate-day iron dosing added a useful wrinkle: baseline inflammation appears to modulate the benefit.
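The trial's fractional absorption figures make for a quick back-of-envelope comparison. The sketch below is illustrative only: it applies the Stoffel et al. percentages (measured in iron-depleted women, not donors) to a hypothetical 60 mg elemental iron dose over a four-week window, and the doubled alternate-day dose is an extrapolation rather than a validated schedule.

```python
# Back-of-envelope: total elemental iron absorbed over a 28-day window.
# Fractional absorption values are from Stoffel et al. 2017
# (iron-depleted women, not blood donors). The doubled alternate-day
# dose is a hypothetical extrapolation, not a validated donor regimen.
DAYS = 28
DOSE_MG = 60  # elemental iron per dose

daily = DAYS * DOSE_MG * 0.163                       # 60 mg every day, 16.3% absorbed
alt_day = (DAYS // 2) * DOSE_MG * 0.218              # 60 mg every other day, 21.8% absorbed
alt_day_double = (DAYS // 2) * 2 * DOSE_MG * 0.218   # 120 mg every other day (extrapolated)

print(f"daily 60 mg:          {daily:.0f} mg absorbed")
print(f"alternate-day 60 mg:  {alt_day:.0f} mg absorbed")
print(f"alternate-day 120 mg: {alt_day_double:.0f} mg absorbed")
```

One caveat the arithmetic surfaces: over a fixed calendar window, same-dose alternate-day supplementation delivers half as many doses, so the per-dose efficiency gain does not by itself guarantee more total iron absorbed. That is why the per-dose amount becomes part of the question whenever the schedule is spaced out.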
Elevated hepcidin from inflammatory states may blunt the absorption advantage of spacing doses, since the favorable window is already partially closed before the first pill is swallowed. This is not a fringe concern in a donor population that includes people with subclinical inflammatory conditions.

The dose question is also genuinely unresolved. HEIRS used daily supplementation; we now know that was suboptimal from an absorption standpoint. The alternate-day studies suggest that doubling the per-dose amount on an alternate schedule can achieve comparable or greater total iron uptake — but this has not been validated prospectively in donors. We are, in effect, extrapolating from better-designed absorption studies to a population that hasn't been directly studied under the revised paradigm.

And then there is the infection question I raised in the earlier post. Oral iron has been shown to acutely elevate bacterial growth in human serum in iron-sufficient subjects. Whether this translates to iron-deficient donors — in whom the physiologic context is substantially different — remains unknown. No donor supplementation trial to date has tracked infection as an outcome. That gap is worth sitting with.

Where This Leaves Us

The case for addressing donor iron deficiency is solid. The case for doing it thoughtfully — rather than defaulting to daily supplementation because it seems like the obvious approach — is getting stronger.

Hepcidin is not a curiosity. It is a central regulator of iron homeostasis, and it does not stop working just because we want our donors to replete faster. Any supplementation strategy that ignores it is, at minimum, less efficient than it could be, and possibly counterproductive at the margins.

The AABB recommendations provide a reasonable framework. What they do not yet specify — with good evidence behind them — is the optimal schedule. Alternate-day morning dosing is the best current answer from the absorption literature.
Whether that translates directly to the donor context, and at what dose, is work that still needs doing. In the meantime, it seems worth updating the footnote.
- Flying Blind: TPE for Acute Kernicterus in Crigler-Najjar Syndrome
Introduction

One of the most humbling experiences in medicine is when a consult comes in and you realize the textbook has nothing for you. I had one of those recently — a 21-year-old with Crigler-Najjar syndrome type 1 and chronic kernicterus, averbal at baseline, who presented to an outside hospital with an infection and altered mental status. Her at-home bili lights were unavailable, and her bilirubin climbed from a baseline of around 24 to 32 mg/dL. She was transferred for ICU-level care and started on continuous phototherapy, which brought her bilirubin down from 32 to 29 — but her mental status didn’t budge. The concern was acute-on-chronic kernicterus, and now she was being transferred to us for therapeutic plasma exchange. Lord almighty, did I have a hard time coming up with a game plan.

Crigler-Najjar Syndrome: A Primer

For the uninitiated, Crigler-Najjar syndrome type 1 is a rare genetic disorder in which the enzyme responsible for conjugating bilirubin in the liver — UGT1A1 — is absent or nonfunctional. Without conjugation, unconjugated bilirubin accumulates in the blood. Unlike the common, transient jaundice seen in newborns, this is a lifelong condition. The mainstay of treatment is phototherapy, often for 10 to 16 hours daily, which isomerizes bilirubin into a water-soluble form that can be excreted without conjugation. The only definitive cure is liver transplantation, though gene therapy trials are underway. When bilirubin rises above a patient’s baseline — due to infection, fasting, or loss of access to phototherapy — the risk of acute bilirubin encephalopathy, or kernicterus, becomes very real.

What the Literature Says (and Doesn’t Say)

So, does therapeutic plasma exchange (TPE) have a role in acute kernicterus for Crigler-Najjar patients? I went to the literature to find out. What I found was… underwhelming. TPE is not listed as a primary indication in the ASFA guidelines for Crigler-Najjar syndrome.
The evidence that does exist consists of scattered case reports and case series, and in every single one, plasmapheresis is treated as an afterthought — mentioned almost in passing as something that was done during a crisis, without rigorous evaluation of its contribution to the outcome.

- A 10-year-old with CN1 who developed kernicterus during streptococcal pharyngitis was treated with plasmapheresis, intensive phototherapy, and antibiotics, and recovered without neurologic sequelae.
- A 23-year-old man with CN1 who developed acute hepatitis from infectious mononucleosis received plasmapheresis to prevent neurological decline.
- A 2-month-old with a bilirubin of 30 mg/dL and signs of encephalopathy underwent plasmapheresis and urgent liver transplantation.
- Two 17-year-old boys with bilirubins in the 30s received intermittent plasmapheresis over a prolonged hospitalization.

In none of these reports is there a standardized protocol. In none of them is TPE the focus of the study. It’s always a side note.

Borrowing from the Acute Liver Failure Literature

That left me with some very practical questions and very few answers. What exchange volume should I use? What replacement fluid? How often? The Crigler-Najjar literature doesn’t say. So I looked to the closest analogy I could find: the acute liver failure literature.

In acute liver failure (ALF), high-volume plasma exchange (HVPE) has become a first-line therapy, based on a landmark 2016 randomized trial by Larsen and colleagues showing improved survival. HVPE in that context is defined as 8 to 12 liters of exchange, or about 15% of ideal body weight, which works out to roughly 2.5 to 3 plasma volumes. The replacement fluid is fresh frozen plasma, because ALF patients have severe coagulopathy and need factor replacement. A subsequent 2022 trial showed that even standard-volume plasma exchange — 1.5 to 2 plasma volumes — was effective and potentially safer with respect to cerebral edema.
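To make those volume targets concrete, here is a sketch of the arithmetic an apheresis service runs before any exchange: estimated total blood volume via Nadler's equations, plasma volume from hematocrit, and a single-compartment estimate of intravascular removal. The patient parameters are hypothetical, and real unconjugated bilirubin kinetics are less favorable than this idealized model, since extravascular, albumin-bound bilirubin re-equilibrates after the exchange.

```python
import math

def nadler_tbv(sex: str, height_m: float, weight_kg: float) -> float:
    """Estimated total blood volume (L) via Nadler's equations."""
    if sex == "M":
        return 0.3669 * height_m**3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m**3 + 0.03308 * weight_kg + 0.1833

def plasma_volume(tbv_l: float, hct: float) -> float:
    """Plasma volume (L) = total blood volume x (1 - hematocrit)."""
    return tbv_l * (1 - hct)

def removal_fraction(exchange_volumes: float) -> float:
    """Single-compartment estimate: fraction of an intravascular solute
    removed by an exchange of the given number of plasma volumes."""
    return 1 - math.exp(-exchange_volumes)

# Hypothetical patient: young adult woman, 1.60 m, 55 kg, Hct 0.36
tbv = nadler_tbv("F", 1.60, 55)
pv = plasma_volume(tbv, 0.36)

# Standard-volume (1.5 PV) vs. high-volume (2.5-3 PV, per the ALF trials)
for ev in (1.0, 1.5, 2.5):
    print(f"{ev:.1f} PV exchange ~ {ev * pv:.1f} L, removes ~{removal_fraction(ev):.0%} (intravascular)")
```

In this idealized model a 1.0 plasma-volume exchange removes about 63% of an intravascular solute; redistribution from the extravascular pool and ongoing bilirubin production are exactly why the rebound described in the case reports follows so quickly.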
But here’s the critical difference: in ALF, the liver can potentially recover. In Crigler-Najjar, the enzyme deficiency is permanent. Bilirubin production continues at roughly 4 to 5 mg/dL per day, and studies have shown that bilirubin rebounds within 24 hours after plasma exchange. TPE in this context is a temporizing measure at best — buying time while you maximize phototherapy and, if indicated, arrange for transplant evaluation. My Approach I also had to consider the replacement fluid question carefully. The ALF literature uses FFP because those patients need clotting factors. My patient didn’t have liver synthetic dysfunction — her liver makes everything except functional UGT1A1. What she needed was bilirubin removal, and albumin is the primary carrier of unconjugated bilirubin in the blood. On the other hand, some FFP in the replacement fluid provides additional albumin and maintains oncotic pressure. I ultimately decided on a one-time TPE with a 50/50 mix of albumin and plasma — a pragmatic decision born more from first principles than from evidence, because the evidence simply doesn’t exist for this specific scenario. I also recommended maximizing phototherapy — exposing as much skin surface area as possible and using as many bili light devices as they could get their hands on. Phototherapy remains the workhorse of bilirubin management in CN1, and TPE without concurrent aggressive phototherapy is unlikely to make a meaningful dent. When Evidence Runs Out The broader point here is one that I think resonates with anyone who practices in a niche or rare-disease space: sometimes the literature leaves you on your own. You can search every database, pull every case report, and still end up making decisions based on pathophysiology, first principles, and clinical judgment rather than evidence-based protocols. That’s not a comfortable place to be, but it’s an honest one. 
A Call for Better Evidence

For Crigler-Najjar patients in acute crisis, I think there’s a real need for better evidence on the role of TPE. What volume? What fluid? What schedule? Does it actually change outcomes, or does it just change numbers on a screen? These are questions that case reports can’t answer. Given the rarity of the condition, a multi-center registry or collaborative case series with standardized TPE protocols would be a reasonable starting point.

In the meantime, if you get this consult, know that you’re not going to find a protocol waiting for you. You’re going to have to reason through it. And if you come up with something better than what I did, I’d love to hear about it.