Medical devices
Insulet sues a rival for allegedly stealing diabetes tech secrets
In her latest dispatch, Lizzy Lawrence gives us a window into a heated legal battle that could have implications for the diabetes technology industry as well as a planned merger.
This month, insulin pump manufacturer Insulet filed a lawsuit against South Korean company EOFlow, the target of Medtronic's planned $738 million acquisition, alleging that EOFlow stole the design of Insulet's only product, a patch pump.
Experts tell Lizzy the lawsuit reflects just how much of a threat the planned acquisition poses to Insulet and its Omnipod product. What happens next could shape Insulet's trajectory: If the merger goes through and regulators approve EOFlow's pump for patients in the U.S., Insulet may see its position threatened. But the lawsuit could also force Medtronic to postpone or rethink the acquisition.
Some background: Insulet, now worth $13 billion, released its own tubeless, adhesive insulin pump in 2005. In 2016, EOFlow hired several of Insulet's senior executives, and Insulet alleges that's when EOFlow started copying its pump design.
"If Medtronic actually had a good patch pump, a good tubeless option, that would be tough to fight against," Debbie Wang, a Morningstar medical device analyst, told Lizzy. Read more from Lizzy here.
Artificial intelligence
More bad news for Epic's sepsis model
Electronic health record vendor Epic has for years touted the ability of its AI model to give clinicians early warning of the life-threatening condition known as sepsis. But a new study by researchers at North Carolina-based Atrium Health found it to be less useful than other commonly used models, such as SIRS (Systemic Inflammatory Response Syndrome), Casey Ross tells us.
The researchers reported that Epic's product was more accurate at the highest scoring thresholds, when it was most confident that a patient had sepsis. But they also found that those thresholds were only reached after clinicians had already taken steps to treat the condition. The new findings confirm earlier research by the University of Michigan — and reporting by STAT — that first flagged the model's questionable performance. Perhaps more troubling than news of the model's weaknesses is that it took about six years for researchers to smoke them out.
From the field
Inside NYU Langone's ChatGPT experiments
On the ground in New York, Mario Aguilar reports on how one health system is putting generative AI and large language models into practice today. At a recent "prompt-a-thon" at NYU Langone, a small group that included a music therapist and a medical student was tasked with creatively analyzing medical records, with an eye toward equity, using the health system's HIPAA-compliant version of ChatGPT, which generates text in response to queries. (Microsoft makes the tool NYU Langone uses.)
Among them was Christine Gonzalez, who studies implicit bias in medicine. She noticed the system couldn't identify any instances of bias in the text of patients' records, which, she said, underscored the importance of keeping humans in the loop.
"The ideas aren't going to come from me, they're going to come from everyday folks who are thinking about their own problems, who are doing things for themselves," Yindalon Aphinyanaphongs, an assistant professor who leads the predictive analytics unit in NYU Langone's department of informatics. "And one advantage to GPT is that it's incredibly democratizing with a low barrier to entry."