medical devices
Why medical devices don't get the placebo treatment

It's the gold standard in medicine: taking a treatment and putting it head-to-head against a placebo to confidently declare whether it actually works. But for most medical devices, placebo trials have never been done.
Medical device makers and regulators often argue that mimicking an invasive procedure is far more difficult than handing patients a sugar pill. But a growing, vocal contingent of doctors claims this lets device makers off the hook and pushes devices that haven't been thoroughly tested into the bodies of unsuspecting patients. Their goal is to convince the Food and Drug Administration to require more device studies to have placebo controls.
"If you don't do the placebo control, that means that you are using a device or a risky procedure without having any idea if it actually works," said Rita Redberg, a cardiologist at University of California, San Francisco. Read more on why doctors think devices tested without shams could be shams themselves here.
home health
Study: Remote blood pressure monitoring works
Right before the pandemic, Mass General Brigham set up a system to remotely monitor patients' blood pressure. A study published Monday in the Journal of the American Heart Association shows that the program succeeded in helping more patients reach their blood pressure goals.
The study included 1,256 participants in the health system, about half of whom enrolled in the six months before the March 2020 shutdown and half after. All participants received a home blood pressure monitor, and health providers used the monitor's data along with a clinical algorithm to treat each patient. Only half of participants completed the program, but among that group, the rate of achieving goal blood pressure improved significantly, hitting 95% during the pandemic, up from 75% pre-pandemic.
"Such programs have the potential to transform hypertension management and care delivery," the researchers concluded.
first opinion
Opinion: AI chatbots in health need help from humans
ChatGPT might be able to bang out essays or compose poems, but using AI chatbots to inform, support, diagnose, and offer therapy requires human help and oversight. That's what STAT First Opinion authors Smisha Agarwal and Rose Weeks of Johns Hopkins have found in their research on people's attitudes toward chatbots, and in their own work developing chatbots that give information on Covid-19 and vaccines. The tech has benefits: in surveys, young people found chatbots offering Covid-19 information to be faster, easier to use, and better for anonymity than web searches. Chatbots in the cognitive behavioral therapy space, along with Planned Parenthood and other health care providers, have also offered vetted, confidential information.
But with health care chatbots like these, or the Vaccine Information Resource Assistant (VIRA) that Agarwal and Weeks developed, it's vital that developers review the programming regularly to ensure it stays updated with accurate health information. Otherwise, the authors say, the potential harm to patients from AI health chatbots might outweigh the benefits they offer in handling labor-intensive, high-volume tasks.