
AI in Health Care: Lifesaving, Cost-Cutting, Slow Shift

Imagine walking into your doctor’s office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what’s wrong.

This kind of rapid diagnosis is one of the big promises of artificial intelligence in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives.

What’s more, a 2023 study found that if the health care industry significantly increased its use of AI, it could save up to US$360 billion annually.

Yet although artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively modest.

A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of that use was for administrative or low-risk support tasks. And although 43% of U.S. health care organizations had added or expanded their use of AI in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.

I’m a professor and researcher who studies AI and health care analytics. I’ll try to explain why AI’s growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI’s widespread adoption by the medical industry.

Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this could lead to faster, more accurate diagnoses and more personalized care.
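To make that pattern-finding step concrete, here is a minimal sketch of the kind of model involved: a classifier trained on structured patient records that ranks patients by risk. Everything in it is synthetic and illustrative – the feature meanings, the model choice and the task framing are assumptions, not a description of any deployed system.

```python
# A minimal sketch of pattern-finding for diagnosis support.
# All data below is randomly generated; a real system would use
# vetted clinical records and a far more careful validation process.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for structured patient features (labs, vitals, history flags)
# and a binary outcome (disease present or absent).
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Emit a risk score per patient rather than a hard diagnosis,
# so a clinician stays in the loop for the final call.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, risk_scores):.3f}")
```

The design choice of outputting a risk score rather than a verdict reflects how such tools are typically positioned: as decision support, not as a replacement for physician judgment.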

AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.
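The scheduling side is often classic optimization rather than machine learning. As a toy illustration, the sketch below assigns surgeries to the fewest possible operating rooms using greedy interval partitioning; the case times are invented, and real AI-driven schedulers would layer predicted durations and staffing constraints on top of a core like this.

```python
# Toy sketch: pack surgeries into the fewest operating rooms with no
# overlaps, using a min-heap of room free-up times (interval partitioning).
import heapq

def assign_rooms(surgeries):
    """surgeries: list of (start_hour, end_hour). Returns (schedule, room count)."""
    rooms = []          # heap of (time the room frees up, room_id)
    next_room = 0
    schedule = []
    for start, end in sorted(surgeries):
        if rooms and rooms[0][0] <= start:
            _, room_id = heapq.heappop(rooms)   # reuse a room that has freed up
        else:
            room_id = next_room                 # otherwise open another room
            next_room += 1
        heapq.heappush(rooms, (end, room_id))
        schedule.append((start, end, room_id))
    return schedule, next_room

cases = [(8, 10), (9, 11), (10, 12), (11, 13)]
schedule, n_rooms = assign_rooms(cases)
print(f"{n_rooms} rooms needed")
for start, end, room in schedule:
    print(f"  {start:02d}:00-{end:02d}:00 -> room {room}")
```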

But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn’t perfectly match the patient in front of them.

As a result, AI doesn’t always give an accurate diagnosis. This problem is called algorithmic drift, in which AI systems that perform well in controlled settings lose accuracy in real-world situations.
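The sketch below illustrates the mechanism with entirely synthetic cohorts: a model is fit under one feature-outcome relationship, and its accuracy falls as that relationship drifts in deployment. The cohorts and the drift mechanism are assumptions chosen for clarity, not clinical data.

```python
# Toy illustration of drift: a model trained under one outcome rule
# loses accuracy as the real-world rule moves away from it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, w):
    # Features stay stable; the outcome rule (weights w) is what drifts.
    X = rng.normal(size=(n, 2))
    y = (X @ np.asarray(w) > 0).astype(int)
    return X, y

w_train = np.array([1.0, 0.5])     # relationship in the "controlled" setting
X_train, y_train = make_cohort(5000, w_train)
model = LogisticRegression().fit(X_train, y_train)

w_drifted = np.array([-0.5, 1.0])  # relationship the real world drifts toward
for drift in [0.0, 0.3, 0.6, 1.0]:
    w = (1 - drift) * w_train + drift * w_drifted
    X_new, y_new = make_cohort(2000, w)
    acc = accuracy_score(y_new, model.predict(X_new))
    print(f"drift={drift:.1f}  accuracy={acc:.3f}")
```

Monitoring exactly this kind of accuracy decay over time is one practical safeguard hospitals can deploy alongside a model.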

Racial and ethnic bias is another issue. If training data doesn’t include enough patients from certain racial or ethnic groups, AI may give inaccurate recommendations for those patients, leading to misdiagnoses. Some evidence suggests this has already happened.
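One reason such failures go unnoticed is that aggregate metrics can hide them. The sketch below, with invented groups and an invented performance gap, shows how a per-group audit surfaces a disparity that an overall accuracy number would mask.

```python
# Toy subgroup audit: one group dominates the training data, so the model
# learns that group's feature-outcome rule and underperforms on the other.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_group(n, offset):
    # Same features, but the feature-outcome relationship differs by group.
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + offset * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa, ya = make_group(9000, offset=0.0)
Xb, yb = make_group(300, offset=1.5)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Audit: evaluate each group separately instead of pooling them.
for name, (Xg, yg) in {"group A": make_group(2000, 0.0),
                       "group B": make_group(2000, 1.5)}.items():
    print(f"{name}: accuracy={accuracy_score(yg, model.predict(Xg)):.3f}")
```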

Health care systems are labyrinthine in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting; introducing a new technology like AI disrupts daily routines. Staff will need extra training to use AI tools effectively. Many hospitals, clinics and doctor’s offices simply don’t have the time, personnel, money or will to implement AI.

Also, many cutting-edge AI systems operate as opaque “black boxes.” They churn out recommendations, but even their developers might struggle to fully explain how. This opacity clashes with the needs of medicine, where decisions demand justification.

But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill. The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.

There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If that data isn’t handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records.

For instance, a clinician using a cloud-based AI assistant to draft a note must ensure no unauthorized party can access that patient’s data. U.S. regulations such as HIPAA impose strict rules on health data sharing, which means AI developers need robust safeguards.
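One common safeguard is to scrub obvious identifiers from text before it leaves the local environment. The sketch below is a deliberately simplified illustration: real HIPAA de-identification (the Safe Harbor method’s 18 identifier categories, or expert determination) is far more involved, and the regex patterns and sample note here are hypothetical.

```python
# Simplified sketch of scrubbing obvious identifiers from a clinical note
# before sending it to an external service. Illustrative only; real
# de-identification requires much more than a few regexes.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US Social Security numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),    # simple date formats
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),   # US phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
]

def scrub(note: str) -> str:
    # Apply each redaction pattern in turn, replacing matches with placeholders.
    for pattern, placeholder in REDACTIONS:
        note = pattern.sub(placeholder, note)
    return note

raw = "Pt seen 3/14/2024, SSN 123-45-6789, call 404-555-0100 with results."
print(scrub(raw))
# -> "Pt seen [DATE], SSN [SSN], call [PHONE] with results."
```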

Privacy concerns also extend to patients’ trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.

The grand promise of AI is a formidable barrier in itself. Expectations are tremendous: AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Unrealistic assumptions like these often lead to disappointment, because AI may not immediately deliver on its promises.

Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they’re safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.

Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries. AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.

Clinical uses of AI exist but are more limited. At some hospitals, AI serves as a second set of eyes for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.

Suffice it to say that health care’s transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI’s potential to treat millions and save trillions awaits.

Turgay Ayer owns shares in Value Analytics Labs, a healthcare technology company. He received funding from government agencies, including NSF, NIH, and CDC.
