regulation
FDA delivers on promise to deregulate AI
The Food and Drug Administration this week delivered on its promise to remove regulatory barriers for health tech products coming to market, updating guidance documents to reflect more lenient interpretations of statute.
In one update, the agency said that products supplying a single output, such as a diagnostic recommendation, are not necessarily regulated medical devices. The agency also updated a guidance to make clear that products delivering readings for blood pressure, blood glucose, and more do not require FDA review if they are intended for wellness purposes. The latter is an apparent victory for wearable maker Whoop, which FDA warned last summer over a blood pressure feature that had not been cleared by the agency.
Read more here
Personnel file
Rune appoints Tempus exec CEO
Rune Labs, known for its Apple Watch-based platform for tracking Parkinson's disease symptoms, announced it has hired Tempus executive Amy Gordon Franzen as its new CEO. Co-founder and previous CEO Brian Pepin will stay on as president, overseeing the development of the company's technology platform and its biopharma partnerships. Rune raised $11 million from existing investors last year and is plotting an expansion into Alzheimer's disease and other neurologic conditions.
Read more from me here
artificial intelligence
ChatGPT gets a Health mode — what's new?
Millions of people, including me and possibly you, are already asking ChatGPT questions about health. Still others are dumping otherwise inscrutable medical records downloaded from patient portals into the generative AI bot, hoping to glean new insights.
Now, OpenAI will encourage this behavior with a new health-specific tab in the service that the company says has better security and privacy protections, so users feel safe pouring sensitive medical data into the bot. In addition to uploading files, users can connect data from products like Apple Health and Weight Watchers or obtain medical records from providers through b.well's network. OpenAI promises that it won't train its models on the data you put into ChatGPT Health. (Reminder: Data that you upload to a consumer service is not covered by HIPAA.)
"Health and wellness questions are some of the most meaningful ways people use ChatGPT, and we believe we have an obligation to ensure these conversations are safe and protected," said Karan Singhal, one of OpenAI's health leads, during a media briefing that my colleague Brittany Trang attended.
Nate Gross, OpenAI's recently hired VP of health, described this as one of the three phases OpenAI is pursuing in health:
- "Raising the floor for everyone globally to receive the benefits of AI." That's the bucket ChatGPT Health falls into
- "Sweeping the floor" by reducing the repetitive tasks physicians have to do so they can practice at the top of their license. "We're actually going to have more to share about our work with health systems coming real soon," said Gross.
- "Raising the ceiling of what's possible, advancing scientific frontiers, expanding the frontier of the care that we can deliver, making sure that new benefits are created from what is possible with AI that can then reach everyone."
OpenAI says that it worked with hundreds of physicians "to understand what makes an answer to a health question helpful or potentially harmful," but it's not clear how, or whether, the model underlying ChatGPT Health differs from standard GPT-5, which OpenAI has already given some health-specific tuning and testing. When Brittany asked how ChatGPT Health had been improved compared to regular ChatGPT, an OpenAI spokesperson said that the model uses tools and guardrails to deliver a more helpful and personalized health experience for users, and that it has additional safety systems that isolate conversations, memory, and files.
The risks of this new feature are clear. Chatbots based on large language models make stuff up all the time (a failure known as hallucination), and there's no way to ensure that the information ChatGPT provides in response to your potentially serious medical questions will always be accurate or appropriate. That may be why the company offers all the usual disclaimers: the new health feature "is designed to support, not replace, medical care" and "is not intended for diagnosis or treatment."