regulation
Could FDA use AI for drug safety surveillance? This research team thinks so.
There's no shortage of pitches for using large language models to make the actual provision of health care easier: churning through medical literature and health records, offering patients and clinicians easily digestible answers to complex questions, and even, potentially one day, offering diagnoses. But what about using them to aid federal regulators?
This week a team of researchers, including two from the Food and Drug Administration, offered a provocative suggestion in JAMA Network Open: Why not use LLMs to mine unstructured text in EHRs, social media posts, insurance claims, and other sources for data on the safety of specific drugs once they've hit the market?
I spoke with lead author Michael Matheny, a Vanderbilt biomedical informatics professor and the lead investigator for one of a handful of FDA innovation centers for Sentinel, the federal agency's electronic system for product monitoring and evaluation. Among the tidbits he dropped in our interview: While there's concern about hallucination, the risk of error and how detrimental it could be to patient safety depends on the context.
"You could ask, "was someone diabetic," of a large language model, but maybe diabetes doesn't play into the risk of an adverse event. If someone is on a given medication, and it doesn't really matter whether or not they're diabetic, you could imagine that if there was a 10% error rate in the detection of diabetes, it may not have a strong impact [on safety analyses.]," he told me. "Whereas if the adverse event is bleeding after a medication, and the only way that you can detect that bleeding is through natural language processing, and only 10 or 20% of nosebleeds are going to be bad enough to go to the emergency room, and maybe the rest of them are reported just to the doctor…Then that's really important to that detection. Because the entire analysis hinges on detection of that adverse event." Read more.
Another suit filed against FDA over lab-developed test rule
Also on FDA, a trade group representing molecular pathologists has filed a lawsuit against the agency claiming it overstepped its authority in attempting to regulate lab-developed tests, Lizzy Lawrence writes. This is the second legal challenge to that effort: the American Clinical Laboratory Association filed a similar suit in May.
"We filed this lawsuit to ask the Court to vacate the FDA rule given the agency's lack of authority to regulate LDTs and to avert the significant and harmful disruption to laboratory medicine," Association for Molecular Pathology president Maria Arcila in a statement.
As Lizzy writes, FDA hasn't historically regulated lab-developed tests, but their potential to harm patients has risen as the products, such as misleading prenatal genetic tests or Theranos' fraudulent blood panels, get more complex. Read more from Lizzy.
biotech
AI drug firm Recursion's path to industry domination
My biotech colleague Allison DeAngelis has a deep dive into a company charging to the forefront of the increasingly crowded field of AI-guided drug development: Recursion Pharmaceuticals, an Nvidia partner and recent planned acquirer of Exscientia. Founded by a medical school dropout, an e-commerce entrepreneur and a University of Utah scientist, Recursion hasn't always enjoyed success, struggling initially to capture the interest of biotech VCs. "We have been laughed out of a reasonably large number of rooms," co-founder and CEO Chris Gibson told Allison, who spoke to more than 30 Recursion leaders, investors, former employees and analysts.
"Recursion is a startup planning for both world domination and survival," said Viswa Colluru, a former employee who now leads his own learning-based biotech company, Enveda Biosciences. Read more.
testing
Cancer detection firm Grail announces layoffs
Another industry darling continues to face challenges: Grail, best known for its blood-based detection test Galleri (and a partner for celebrity-backed consumer lab screening company Function Health, whose fundraise I mentioned in the newsletter earlier this summer) has announced widespread workforce cuts affecting about a third of its current and planned hires, my colleague Jonathan Wosen reports. That amounts to about 350 existing roles and 150 open ones. The company announced the cuts the same day as its second-quarter earnings, the first such readout since it was divested by DNA sequencing company Illumina in June.
As Jonathan writes, Grail's test is on the market for almost $1,000 out of pocket, but it doesn't have regulatory approval and is rarely covered by insurance. Clinical trials could change that, but the U.K.'s National Health Service has already announced that preliminary data from a recent study wasn't significant enough to launch a large-scale pilot testing Galleri's potential use in routine health care. Read more from Jonathan.
ai scribes
Abridge rolls out AI tech at 40 Kaiser hospitals
Competition is heating up in medical scribe technology, with Nuance, Abridge, Augmedix, Suki, Nabla and others vying for hospitals' business. In a major win for Abridge, the startup's ambient documentation tool is now available to clinicians across 40 Kaiser Permanente hospitals and more than 600 medical offices in multiple states across the country. "By reducing administrative tasks, we're making it easier for our physicians to focus on patients and foster an environment where they can provide effective communication and transparency," Ramin Davidoff, who leads the Southern California Permanente Medical Group, said in a release.
My colleague Katie Palmer wrote about how health systems are testing these AI scribes in a recent update to STAT's Generative AI Tracker.
Separately, Kaiser has published an intriguing set of principles for responsible AI use: read them and let me know what you think.