Which tools are clinicians using

Productivity tools and secure access to large language models are among the most useful applications of AI at Mass General Brigham, said Rebecca Mishuris, the health system's chief health information officer and vice president of digital.

For instance, she noted Mass General has seen strong uptake of Microsoft Copilot, which helps clinicians draft emails, summarize information and generate presentations.

She also pointed out that the health system has built secure internal access to large language models, which allows clinicians and researchers to safely experiment with AI while using protected health information. That access has already enabled researchers to build an AI agent that can summarize a new patient's decades of medical records for clinicians before a visit, Mishuris said.

Measuring success

For an AI deployment to be successful, Mishuris noted that health systems need to align people, processes and technology. Technology alone isn't enough — she said Mass General invests heavily in AI education for staff, helping employees understand what generative AI can and cannot do, how to use it safely, and how it fits into workflows.

Once an AI solution is launched, it must be monitored in multiple layers, Mishuris stated. She described three types of monitoring at Mass General: real-time monitoring during patient care to catch potential hallucinations immediately, short-term retrospective monitoring days or weeks later to review model outputs at scale and identify potential issues, and ongoing performance monitoring to ensure tools continue delivering their intended outcomes.

Reality check

Mishuris pointed out that AI should be judged against real-world performance, not perfection. When evaluating AI tools, the comparison should be how they perform relative to current workflows. In some cases, humans already make similar errors, so the key question is whether AI performs as well as or better than the status quo.
"There was actually a study out of California that showed that humans hallucinate just as much as the computer does when doing a discharge summary for a patient in the hospital. And so if you get a result like that, if it's the same, if the humans are hallucinating and the computers are hallucinating, then what is the risk of moving to the computer?" Mishuris remarked.

Ultimately, she said, the value of any AI tool comes down to whether it meaningfully improves workflows or patient care compared with the reality clinicians face today.

— By Katie Adams