Guardrails needed

When it comes to a patient's healthcare decisions, ultimately a human bears responsibility for AI's decisions, not a computer. AI is a powerful tool, and powerful tools need guardrails. But regulation won't be as simple as applying a Band-Aid and forgetting about future problems. To regulate appropriately, we need to learn from the past and act quickly as AI is rapidly incorporated into healthcare.

E-prescribing as a model

Electronically transmitting prescription information seemed like a foreign concept in the 1990s; it is now commonplace and successful because we have standards. The e-prescribing experience should be a lesson in how healthcare adopts and regulates AI. The goal is a standard, uniform set of guidelines that ensures clarity across the industry as AI technology is quickly adopted. AI-powered tools are already improving the accuracy of care by giving physicians more time and helping them catch potential errors. With standardization comes greater usage and a healthier population, and new issues can be addressed as they arise. Like e-prescribing, any regulation of healthcare AI must be treated as an ongoing process.

Focus on the "gray area"

The gray area occurs when AI output sounds reasonable but the clinician is unfamiliar with the underlying information. Clinicians already must participate in continuing medical education (CME) to maintain licensure, and AI training should be considered part of the CME process; a similar approach is used for controlled-substance prescribing education. To use AI as a tool, clinicians must understand how to use that tool. Regulations should also require that any AI output be cited and that the citation be easily accessible to clinicians. If a physician questions an AI response, the system must provide a clear path for clinicians to fact-check the output.

A trusted tool

Let's make sure AI can be as powerful and as trusted a tool as possible. That means implementing regulations smartly.

By MedCity Influencer Michael Blackman