Police called in for AI in healthcare showdown
An eyebrow-raising incident in Melbourne, Australia, puts a spotlight on a simple question: How should artificial intelligence (AI) be used during a medical visit — and do patients have a say in it? When artist Caerwin Martin declined her periodontist’s request to use an AI tool to take notes and draft referrals, she found herself facing the police.
While no charges were filed, and the matter is now before the medical board, the episode raises serious questions about consent, data protection and the quiet spread of AI in clinical settings. In the United States, there is still no universal rule requiring disclosure when AI is used in a medical appointment.
Under the U.S. Health Insurance Portability and Accountability Act (HIPAA), patient health information must be protected. But experts warn that many AI tools lack the business associate agreements (BAAs) and encryption safeguards needed for compliance. “Common consumer AI platforms rarely offer BAAs or proper privacy protections to safely be used by healthcare providers for patient health information,” says Ron Harman King, CEO of Vanguard Communications.