According to Fast Company, a bill introduced in the U.S. House of Representatives in early 2025, known as H.R. 238, would permit AI systems to prescribe medications without direct human oversight. This legislative push has sparked intense debate among health researchers and lawmakers about whether such autonomous prescribing is even feasible or advisable. The core tension is that while people often tolerate AI mistakes in exchange for efficiency gains, the cost of an error in healthcare can be catastrophic, up to and including a patient’s death. The article highlights the perspective of a complex systems researcher who studies the inherent limits of AI and argues that unpredictable outcomes are baked into these systems. How this would work in practice if the bill passes remains entirely unclear, but it fundamentally raises the stakes of how much error we can tolerate from these systems and their developers.
The Inevitability of Error
Here’s the thing that the tech boosters often gloss over: AI errors aren’t just bugs to be fixed. They’re a structural feature of how these systems work. The researcher in the article frames it as exploring “the limits of science.” These systems are built on statistical approximations trained on historical data that is often messy, biased, or incomplete, and they interact with a world (human biology, in this case) that is wildly complex and non-deterministic. So the idea that we can engineer all the risk out is a fantasy. It’s not a question of if an AI prescribing tool will make a lethal mistake, but when. And when that happens, who’s liable? The developer? The hospital that deployed it? The law, as proposed, seems to be putting the cart miles before the horse.
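To put rough numbers on that “when, not if” point, here is a hypothetical back-of-the-envelope calculation in Python. Both figures are illustrative assumptions of mine, not data from the article, the bill, or any study:

```python
# Hypothetical back-of-the-envelope estimate: even a very accurate
# autonomous prescribing system produces a large absolute number of
# errors at population scale. Both numbers below are illustrative
# assumptions, not real-world figures.

annual_prescriptions = 4_000_000_000   # assumed yearly prescription volume
per_rx_error_rate = 0.001              # assumed 99.9% per-prescription accuracy

expected_errors = annual_prescriptions * per_rx_error_rate
print(f"Expected erroneous prescriptions per year: {expected_errors:,.0f}")
# Prints: Expected erroneous prescriptions per year: 4,000,000
```

Even if you quibble with these assumptions by an order of magnitude in either direction, the count never reaches zero, which is the whole point.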
A Dangerous Precedent
Now, look at what’s being proposed. Letting an algorithm autonomously prescribe medication isn’t like using AI to schedule appointments or transcribe notes. This is the core, irreversible act of medical treatment. A mis-prescribed dose, a dangerous drug interaction missed, an allergy overlooked—these aren’t minor inconveniences. Proponents might argue it could increase access in underserved areas, and that’s a noble goal. But is the solution really to deploy a black-box system with “limited human supervision” into the most high-stakes scenarios first? It feels like we’re using the most vulnerable patients as a testing ground. There’s a reason the medical field has rigorous, human-centric protocols. Throwing them out for the sake of “efficiency” seems reckless.
The Accountability Black Hole
So what happens after the inevitable error? This is where it gets legally and ethically murky. The text of H.R. 238 would require an entirely new accountability framework, one that simply doesn’t exist today. Can an algorithm be negligent? Current malpractice law revolves around the “standard of care” provided by a human professional. An AI doesn’t have a license to lose. The developers will point to their terms of service and disclaimers. The hospital will point to the software’s certification. And the patient’s family will be left in a labyrinth of buck-passing. Research in journals like npj Digital Medicine often focuses on accuracy benchmarks, but it rarely grapples with the societal and legal aftermath of a failure. We’re building a system primed for catastrophic failure with no clear plan for the fallout.
A Smarter Path Forward
I’m not saying AI has no place in healthcare. It absolutely does. But its role should be augmentation, not autonomy. Think of it as the most powerful assistant a doctor has ever had, one that can scan thousands of studies in seconds or flag a potential anomaly in a medical image. The human remains in the loop, making the final, accountable decision. That’s a sustainable path. Rushing to full autonomy, especially in a field as critical as pharmaceuticals, ignores the fundamental nature of the technology. It ignores the complex systems research that tells us these interactions are unpredictable. We need to slow down and get the guardrails right. Because in healthcare, you don’t get to move fast and break things. The thing you break is a person.
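For readers who think in code, here is a minimal sketch of what “augmentation, not autonomy” might look like at the workflow level. Every name in it is hypothetical and invented for illustration; nothing here comes from H.R. 238 or any real clinical system:

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop prescribing flow: the model drafts a
# suggestion, and only a licensed clinician's sign-off creates an order.

@dataclass
class Suggestion:
    drug: str
    dose_mg: float
    rationale: str  # evidence the model cites, so the clinician can audit it

def ai_suggest(patient_record: dict) -> Suggestion:
    """Stand-in for a model call: returns a draft recommendation, never an order."""
    return Suggestion(drug="example-drug", dose_mg=10.0,
                      rationale="placeholder: flagged from a literature scan")

def clinician_decide(s: Suggestion, approve: bool) -> str | None:
    """The accountable human decision; rejection means no prescription exists."""
    return f"Rx: {s.drug} {s.dose_mg} mg" if approve else None

draft = ai_suggest({"age": 54, "allergies": ["penicillin"]})
order = clinician_decide(draft, approve=False)  # the human can always say no
print(order)  # Prints: None -- no prescription without human sign-off
```

The design choice that matters is structural: the system’s output is a draft, not an action, so accountability stays with the person who signs.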
