How a slimmed-down FDA regulates AI
Over the weekend, the federal government fired thousands of employees in key jobs across health care agencies, including the FDA's head of medical device safety. Because AI is so new, many of the FDA employees working on it were recent hires and thus vulnerable to the firings: STAT has learned so far that the cuts include 10 people on a 40-person team that reviews imaging devices, as well as 40 people on a research team that helps regulatory staff understand AI and other research.
STAT's FDA reporter Lizzy Lawrence has an excellent piece on those layoffs. The FDA has been struggling to keep up with AI-related applications, and these terminations will only compound the problem. An official from the agency warned Lizzy that the cuts mean more of the responsibility for medical device and AI safety will fall to hospitals.
"I fear if there's going to be even less rigor because we can't keep up with the bandwidth, we can't do important research. That burden is going to go on the hospitals. It's going to go on the patients," said the official. "At a time where we're at maximum AI hype…companies are always going to oversell the performance and safety."
How exactly does the FDA regulate AI, anyway? Well, it's narrower than you might think.
The FDA, generally speaking, has authority over food, medical devices, and pharmaceuticals (as well as cosmetics, tobacco, and electronics that give off radiation). Where does AI fit into these categories? In medical devices.
This October JAMA article from three FDA officials lays it out nicely: On one end of the spectrum are AI programs embedded in devices clearly under FDA purview, like cardiac defibrillators. On the other end are AI programs that help with back-office administration, which aren't FDA regulated. In the middle are things like AI models used for clinical decision support. Exactly where the regulatory line is drawn in that muddled middle is getting more complicated.
A couple of recent examples highlight the problems with that muddled middle. In a JAMA Health Forum piece earlier this month, former FDA commissioner Scott Gottlieb outlined some side effects of the FDA's authority extending only to medical devices: In 2022, the FDA issued guidance redefining what kinds of clinical decision support software count as medical devices. "The FDA would consider software that integrates data from multiple sources (such as imaging, consultation reports, and clinical laboratory results) as medical devices because of its ability to synthesize diverse data and formulate insights when the chain of reasoning behind the tool's verdict can remain murky (leaving the clinician uncertain about exactly how the final judgment was reached)," wrote Gottlieb.
This means, he said, that electronic health record systems with AI tools that synthesize these data would be considered medical devices and thus subject to FDA review. EHRs are one of the most useful places for AI tools to live, but this guidance places an artificial cap on how useful EHR vendors can make their tools, he argued.
Another complication? Generative AI. In the fall, the FDA's Digital Health Advisory Committee met for the first time to discuss how the agency should approach regulating (or not regulating) generative AI tools. As STAT's Katie Palmer and Casey Ross noted in their review of the adcomm's documents, it's really hard for the FDA to evaluate tools built on top of proprietary AI models like OpenAI's GPT. The committee also noted that popular products like generative AI medical scribes, though treated as administrative software, carry a risk of medical harm.
There are many other reasons why AI — especially generative AI — is difficult for the agency to regulate: The tools aren't always specific to a certain indication (which is the way the FDA normally regulates devices and drugs), don't easily fall into definable risk categories (another way the FDA typically reduces regulatory burden), and would need heavy post-market monitoring, which neither the FDA nor the health care system is really equipped for.
While pruning FDA staff might theoretically advance a goal of deregulating AI, the agency's approach to regulation has been high-touch: The FDA is often in contact with companies, asking how it can be more efficient in the data it requests and in how it evaluates products, and giving feedback on study designs and applications. With fewer staff members, applications will take longer to review, and the agency may miss deadlines set out by MDUFA, the user fee agreement between device makers and the FDA.
If the agency is going to be understaffed, one FDA official told STAT, "We're going to lose our future-proofing edge...we're not going to be able to keep up with the new advances in the way we currently are."
From the STAT archives:
- For an example of AI tech that falls through the cracks at the FDA, look no further than this Lizzy Lawrence article, which set off a STAT-wide debate over whether we could use "dick pics" in a headline. Though the FDA didn't have jurisdiction over the supposedly STI-detecting AI photo app, the FTC later shut it down.
- In 2021, when the FDA had cleared only around 160 devices that used AI, STAT's Casey Ross found that the amount and quality of testing for FDA-cleared AI products varied wildly, calling into question how well they work and how they will affect care.
- In September, Biden FDA commissioner Robert Califf issued the same warning to hospitals as the FDA official above: "I think there's a lot of good reason for health systems to be concerned that if they don't step up, they're going to end up holding the bag on liability when these algorithms go wrong." Read more from me.