artificial intelligence
Machine learning gaps in medicine
Ashley Beecly, a cardiologist at New York Presbyterian Hospital, started her talk at the Machine Learning for Health Care conference on Saturday with a revealing question: "How many people here have implemented an AI-based model that's patient-facing?" Only a smattering of hands went up in an audience that included many of the most advanced practitioners in the field.
I attended the two-day conference at Columbia University, which dove deeply into that implementation gap, exploring the problems that must be overcome for AI to begin to make a real difference in clinical care. Participants discussed ways to make data flow better and improve the fairness of AI models. They also zeroed in on the need to focus less on an AI model's technical performance and more on whether it can be combined with a set of interventions that actually improve patient outcomes.
"I love AUCs just like I'm sure everyone here does," said Beecly, referring to a commonly referenced measure of AI model performance. But for models to reach the bedside, she said, they must get buy-in from clinicians who can help build the right user interface, craft interventions, and test for safety, effectiveness, and fairness.
big tech
YouTube's next move against misinformation
YouTube's health division is stepping up its effort to combat medical misinformation, with a particular focus on cancer treatment and prevention. The Google-owned video-sharing company unveiled a new plan Tuesday to remove content that promotes harmful or ineffective cancer treatments, or discourages viewers from getting professional medical treatment.
Garth Graham, the head of YouTube Health, told STAT the company is focusing on cancer because it's an area where misinformation is prevalent and particularly damaging. "If a video was to claim garlic cured cancer, or you should drink this special water or take vitamin C instead of radiation therapy — that's the kind of content we will be removing," he said. On a broader level, Graham added, YouTube is trying to stake out a policy that is clear to viewers and content creators, so that it can extend it to other medical domains where the scientific evidence is stable and clear, and misinformation is especially harmful.
pharma
BMS turns to a Viz.ai algorithm to detect disease
As Bristol Myers Squibb works to build its drug Camzyos into a blockbuster, it has backed a Viz.ai algorithm designed to help find more people who are affected by the heart condition it treats, called hypertrophic cardiomyopathy, or HCM. As part of a multi-year agreement between the companies, BMS provided funding and scientific input for the development of the algorithm. The algorithm, which received FDA clearance this month, looks at 12-lead electrocardiograms collected during routine care and flags suspected cases of HCM for further assessment.
"BMS has a vested interest in finding more patients with HCM," said Matthew Martinez, a cardiologist who leads Viz.ai's HCM medical advisory board. "Why? Because they're so philanthropic? No, they want to sell more drugs. They want to help more people by identifying more people." STAT's Mario Aguilar has more on the algorithm and how BMS plans to use it.
artificial intelligence
How to boost local oversight of AI
AI researchers at Duke and the University of Michigan published new recommendations for improving local oversight of AI tools. Their paper in Nature Machine Intelligence outlines four interventions:
- Create standards for implementation of AI models to monitor performance over time and collect input from clinicians and patients. The researchers suggested the Centers for Medicare and Medicaid Services could require adherence to the standards as a condition of payment, as it does in other domains such as antimicrobial stewardship.
- Invest in IT systems and data infrastructure to enable pre-implementation testing of AI models, and provide training for caregivers using or overseeing them.
- Require public reporting of AI model performance so their safety, effectiveness, and fairness can be assessed across sites.
- Implement centralized evaluation of AI models beyond the FDA approval process. Many models currently in use are never reviewed by the FDA, creating a large gap in oversight of AI tools that work within electronic health records.