
FDA Urged to Set AI Medical Device Label Standards
Medical devices that harness the power of artificial intelligence or machine learning algorithms are rapidly transforming health care in the U.S.: the Food and Drug Administration has already authorized the marketing of more than 1,000 such devices, and many more are in the development pipeline. A new paper from a University of Illinois Urbana-Champaign expert in the ethical and legal challenges of AI and big data for health care argues that the regulatory framework for AI-based medical devices needs to be improved to ensure transparency and protect patients’ health.
Sara Gerke, the Richard W. & Marie L. Corman Scholar at the College of Law, says the FDA must prioritize the development of labeling standards for AI-powered medical devices, much as packaged food carries nutrition facts labels.
“The current lack of labeling standards for AI- or machine learning-based medical devices is an obstacle to transparency in that it prevents users from receiving essential information about the devices and their safe use, such as the race, ethnicity and gender breakdowns of the training data that was used,” she said. “One potential remedy is that the FDA can learn a valuable lesson from food nutrition labeling and apply it to the development of labeling standards for medical devices augmented by AI.”
The push for increased transparency around AI-based medical devices is complicated not only by different regulatory issues surrounding AI but also by what constitutes a medical device in the eyes of the U.S. government.
If something is considered a medical device, “then the FDA has the power to regulate that tool,” Gerke said.
“The FDA has the authority from Congress to regulate medical products such as drugs, biologics and medical devices,” she said. “With some exceptions, a product powered by AI or machine learning and intended for use in the diagnosis of disease – or in the cure, mitigation, treatment or prevention of disease – is classified as a medical device under the Federal Food, Drug, and Cosmetic Act. That way, the FDA can assess the safety and effectiveness of the device.”
If you tested a drug in a clinical trial, “you would have a high degree of confidence that it is safe and effective,” she said.
But there are almost no clinical trials for AI tools in the U.S., Gerke noted.
“Many AI-powered medical devices are based on deep learning, a subset of machine learning, and are essentially ‘black boxes.’ Why such a tool made a particular recommendation, prediction or decision is hard, if not impossible, for humans to understand,” she said. “The algorithms can be adaptive if they are not locked and can thus be much more unpredictable in practice than a drug that’s been put through rigorous tests and clinical trials.”
It’s also difficult to assess a new technology’s reliability and efficacy once it’s been implemented in a hospital, Gerke said.
“Normally, you would need to revalidate the tool before deploying it in a hospital because it also depends on the patient population and other factors. So it’s much more complex than just plugging it in and using it on patients,” she said.
Although the FDA has yet to permit the marketing of a generative AI model that’s similar to ChatGPT, it’s almost certain that such a device will eventually be released, and there will need to be disclosures to both health care practitioners and patients that such outputs are AI-generated, said Gerke, also a professor at the European Union Center at Illinois.
“It needs to be clear to practitioners and patients that the results generated from these devices were AI-generated simply because we’re still in the infancy stage of the technology, and it’s well-documented that large language models occasionally ‘hallucinate’ and give users false information,” she said.
According to Gerke, the paper’s big takeaway is that it’s the first to argue that regulators like the FDA need to develop not only “AI Facts labels” but also a “front-of-package” AI labeling system.
“The use of front-of-package AI labels as a complement to AI Facts labels can further users’ literacy by providing at-a-glance, easy-to-understand information about the medical device and enable them to make better-informed decisions about its use,” she said.
In particular, Gerke argues for two AI Facts labels – one addressed primarily to health care practitioners and the other geared to consumers.
“To summarize, a comprehensive labeling framework for AI-powered medical devices should consist of four components: two AI Facts labels, one front-of-package AI labeling system, the use of modern technology like a smartphone app and additional labeling,” she said. “Such a framework includes everything from something as simple as a ‘trustworthy AI’ symbol to instructions for use, fact sheets for patients and labeling for AI-generated content, all of which will enhance user literacy about the benefits and pitfalls of the AI, in much the same way that food labeling provides information to consumers about the nutritional content of their food.”
The paper’s recommendations aren’t exhaustive but should help regulators start to think about “the challenging but necessary task” of developing labeling standards for AI-powered medical devices, Gerke said.
“This paper is the first to establish a connection between front-of-package nutrition labeling systems and their promise for AI, as well as to make concrete policy suggestions for a comprehensive labeling framework for AI-based medical devices,” she said.
The paper was published in the Emory Law Journal.
The research was funded by the European Union.
https://news.illinois.edu/paper-fda-needs-to-develop-labeling-standards-for-ai-powered-medical-devices/