Breaking News

How the FDA regulates AI (and how recent firings will affect it)

February 19, 2025
Health Tech Reporter

✨ You're receiving this email because you're a Health Tech subscriber and we thought you might be interested in AI Prognosis. If you'd like to opt out of AI Prognosis, click here.

I've never read a more bewildering corporate blog than this post by Oracle EVP Ken Glueck accusing competitor EHR vendor Epic Systems of sabotaging Oracle Health head Seema Verma via a shadow campaign of edits to her Wikipedia page. In addition to its tenuous logic, the post includes many references to Wicked and Harry Potter, nodding to Epic CEO Judy Faulkner's fascination with stories and fantasy, which she's integrated into Epic's Disneyland-like campus.

(By the way, I'm still looking to hear whether there's an Epic employee Dungeons & Dragons group — it's gotta exist. If you're in Judy's adventuring party, or just have questions/comments about AI, send me an email: aiprognosis@statnews.com)

How a slimmed-down FDA regulates AI

Over the weekend, the federal government eliminated thousands of employees in key jobs at health care agencies, including the FDA's head of medical device safety. Because AI is so new, many of the FDA employees working on AI were recent hires and thus vulnerable to the firings — 10 people from a 40-person team that reviews imaging devices, as well as 40 people on a research team that helps regulatory staff understand AI and other emerging science, STAT has learned so far.

STAT's FDA reporter Lizzy Lawrence has an excellent piece on those layoffs. The FDA has been struggling to keep up with AI-related applications, and these terminations will only compound those issues. An official from the agency warned Lizzy that these cuts mean more of the responsibility for medical device and AI safety will fall to hospitals.

"I fear if there's going to be even less rigor because we can't keep up with the bandwidth, we can't do important research. That burden is going to go on the hospitals. It's going to go on the patients," said the official. "At a time where we're at maximum AI hype…companies are always going to oversell the performance and safety." 

How exactly does the FDA regulate AI, anyway? Well, it's narrower than you might think.

The FDA, generally speaking, has authority over food, medical devices, and pharmaceuticals (as well as cosmetics, tobacco, and electronics that give off radiation). Where does AI fit into these categories? In medical devices.

This October JAMA article from three FDA officials lays it out nicely: On one end of the spectrum are AI programs that are embedded in devices that are clearly under FDA purview, like cardiac defibrillators. On the other end are AI programs that help with back-office administration, which aren't FDA regulated. In the middle are things like AI models used for clinical decision support. Exactly where the regulation line is drawn in that muddled middle is getting more complicated.

A couple of recent examples highlight the problems with that muddled middle. Former FDA commissioner Scott Gottlieb outlined in a JAMA Health Forum piece earlier this month some side effects of the FDA only being able to regulate medical devices: In 2022, the FDA issued guidance redefining what kinds of clinical decision support software count as medical devices. "The FDA would consider software that integrates data from multiple sources (such as imaging, consultation reports, and clinical laboratory results) as medical devices because of its ability to synthesize diverse data and formulate insights when the chain of reasoning behind the tool's verdict can remain murky (leaving the clinician uncertain about exactly how the final judgment was reached)," wrote Gottlieb.

This means, he said, that electronic health record systems that include an AI tool that synthesizes these data would be considered medical devices and thus subject to FDA review. EHRs are one of the most useful places for AI tools to live, but this guidance places an artificial cap on how useful EHR vendors can make their tools, he argued.

Another complication? Generative AI. In the fall, the FDA's Digital Health Advisory Committee met for the first time to discuss how the agency should approach regulating (or not regulating) generative AI tools. As STAT's Katie Palmer and Casey Ross noted in their review of the adcomm's documents, it's really hard for the FDA to evaluate tools built on top of proprietary AI models like OpenAI's GPT. The committee also noted that popular products like generative AI medical scribes, though treated as administrative software, carry a risk for medical harm.

There are many other reasons why AI — especially generative AI — is difficult for the agency to regulate: The tools aren't always specific to a certain indication (which is the way the FDA normally regulates devices and drugs), don't easily fall into definable risk categories (another way the FDA typically reduces regulatory burden), and would need heavy post-market monitoring, which neither the FDA nor the health care system is really equipped for.

While pruning FDA staff might theoretically help advance a goal of de-regulating AI, the agency's approach to regulation has been high-touch — the FDA is often in contact with companies, asking how it can be more efficient in the data it requests and how it evaluates products, and giving feedback on study designs and applications. With fewer staff members, applications will take longer, and the agency may miss deadlines set out by MDUFA, the user-fee agreement between device makers and the FDA.

If the agency is going to be understaffed, one FDA official told STAT, "We're going to lose our future-proofing edge...we're not going to be able to keep up with the new advances in the way we currently are."


From the STAT archives:

  • For an example of AI tech that falls through the cracks at the FDA, look no further than this Lizzy Lawrence article that led to a STAT-wide debate as to whether we could use "dick pics" in a headline. Though the FDA didn't have jurisdiction over the supposedly STI-detecting AI photo app, the FTC later shut it down.
  • In 2021, when the FDA had only cleared around 160 devices that used AI, STAT's Casey Ross found that the amount and quality of testing for FDA-cleared AI products varied wildly, bringing into question how well they work and how they will affect care.
  • Biden FDA commissioner Robert Califf issued the same warning in September to hospitals as the FDA official above: "I think there's a lot of good reason for health systems to be concerned that if they don't step up, they're going to end up holding the bag on liability when these algorithms go wrong." Read more from me.


WATCH: Veritasium's 'The Most Useful Thing AI Has Ever Done'

If you're not from the biology or drug discovery corner of the STAT universe, figuring out what people are talking about with AI in drug discovery may be overwhelming. Last week, educational YouTube channel Veritasium released an extremely good, understandable video on AI in the protein structure prediction and design space. It is 24 minutes long, but well worth it. If you're short on time, play it — like I did — while cooking and washing the dishes at your sister's house because your neighbor clogged your kitchen plumbing by throwing shrimp tails down the garbage disposal.

Veritasium talked to all three 2024 Nobel Prize in Chemistry winners, including somewhat rare interviews with AlphaFold developers Demis Hassabis, John Jumper, and Kathryn Tunyasuvunakool, as well as fellow laureate David Baker (in a wicked pair of red glasses!). There are some great details within: Did you know that Demis Hassabis was inspired to pursue protein folding because of a David Baker protein video game?


Conspiracy behind the conspiracy

So that bewildering Oracle post I pointed out in the blue box above? Brendan Keeler, known online as Health API Guy, has a take: It was targeted at Trump, Elon, and others in power in the federal government, in hopes of getting a federal contract through regulatory capture, he posits.

Oracle CTO Larry Ellison was on the record as recently as last week saying that for AI to actually help improve a nation's population health, you have to put all of that nation's health records in one place for the AI. What's one major obstacle to Oracle/Cerner building a big database of all of America's health records? Brendan's answer: the nation's most popular EHR system, Epic. As of 2023, Epic serviced 52% of hospital beds in the U.S., versus Oracle/Cerner's 24%. (Note that Oracle has the contract for the government's VA hospitals, which has gone…not well.)

Discrediting Epic in front of the president and his team makes sense for this goal, especially as Ellison seems to have the ear of the president when it comes to tech, having stood side by side with him to introduce the AI infrastructure project Stargate. I'll go one step further and point out that last week's Make America Healthy Again executive order included a line about empowering Americans by making all data from federally funded health research "transparent" and "open-source," potentially indicating a federal appetite for big databases of health data.

One more thing to keep in mind: Ellison, in his recent chat with former British prime minister Tony Blair linked above, mentioned that the NHS has big databases of health data on U.K. citizens. But as Johns Hopkins bioethicist Marielle Gross told me, a centralized database of health information is a fundamentally different proposition when the government is paying for the health care, as opposed to the U.S. system where the government has no reason to collect health data on its citizens without their permission. "If it doesn't go hand in hand with access to health care, then forget it," she said.



Song of the Week: "hourglass" by A Place For Owls

One thing you're going to learn about me is that I love a lyrically perfect song — one that evokes images and emotions perfectly without having to tell the listener what the song is about. 

I didn't expect to find a lyrically perfect song on a Denver emo band's record, but on "hourglass" (best listened to by starting with the preceding track, record opener "go on," which flows into it), A Place For Owls details the narrator's experience of their partner's miscarriage in heartrending detail that puts you in the scene without ever explicitly saying that's what the song is about.

There's so much joy as we tell your folks / But this bleeding changes everything / The hospital in your hometown is slow on New Year's Day / And as I'm trying to calm us down, they're sending me away / I could not wait with you…


Do you have questions about what's in this week's newsletter, or just questions about AI in health in general? Suggestions? Story tips? Ideas for song of the week? Simply reply to this email or contact me at AIPrognosis@statnews.com.


Thanks for reading! More next week — Brittany


STAT, 1 Exchange Place, Boston, MA
©2025, All Rights Reserved.
