Beyond the Algorithm: Who Should Be Accountable When AI Misfires in Healthcare?

 

Artificial Intelligence (AI) has made significant inroads into healthcare, promising earlier and more accurate diagnoses, improved risk stratification, and personalised treatment plans. However, when these AI tools produce incorrect diagnoses or suggest harmful treatments, an urgent and often unresolved question emerges: who ultimately bears responsibility? The treating physician, or the AI and its developers? This article examines the complex ethical, legal, and clinical dimensions of accountability in an age of increasingly autonomous AI systems.

The AI Revolution in Healthcare

From Diagnostic Aid to Clinical Workhorse

Initially developed as diagnostic support tools, AI algorithms have evolved into sophisticated, autonomous decision-makers that interpret imaging, predict disease trajectories, and guide treatment choices. For instance, IBM’s Watson for Oncology was introduced to revolutionise cancer care through advanced AI-driven recommendations (1). While some successes have been recorded, the system’s occasional inaccuracies illustrate the challenges of matching AI outputs with the complexities of real-world medicine. Meanwhile, recent deep-learning algorithms have shown high accuracy in detecting skin cancer, diabetic retinopathy, and even certain cardiac conditions from imaging (2). These successes underscore the technology’s promise and emphasize the need to address accountability when misfires inevitably occur.

Sources of AI Errors

AI errors arise from multiple sources. One is dataset bias, where training data lacks sufficient diversity or fails to represent specific population subgroups (3). Another is the opaque “black-box” nature of advanced neural networks, which complicates the physician’s ability to scrutinize the AI’s rationale (the so-called “explainability” challenge) (4). Consider an AI trained primarily on specific imaging protocols: if it encounters a new scanner or an unfamiliar patient demographic, its performance can degrade dramatically.
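To make that degradation concrete, the following is a minimal Python sketch of a subgroup audit that reports a model’s discrimination (AUC) separately for each scanner type or demographic group, so under-represented subgroups with weaker performance are surfaced before deployment. It assumes a scikit-learn-style classifier and a pandas DataFrame of held-out cases; the column names (e.g. "scanner_type", "label") are purely illustrative, not drawn from any particular product.

    # Minimal sketch: audit a trained classifier's performance by subgroup.
    # All column names are illustrative assumptions for this example.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def audit_by_subgroup(model, df, feature_cols,
                          group_col="scanner_type", label_col="label"):
        """Compute AUC separately for each subgroup so that under-represented
        groups or unfamiliar scanners with degraded performance are surfaced."""
        results = {}
        for group, subset in df.groupby(group_col):
            if subset[label_col].nunique() < 2:
                continue  # AUC is undefined when only one class is present
            scores = model.predict_proba(subset[feature_cols])[:, 1]
            results[group] = roc_auc_score(subset[label_col], scores)
        return results

    # Example use: flag any subgroup whose AUC falls well below the overall figure.
    # overall = roc_auc_score(df["label"], model.predict_proba(df[feature_cols])[:, 1])
    # for group, auc in audit_by_subgroup(model, df, feature_cols).items():
    #     if auc < overall - 0.05:
    #         print(f"Performance drop for subgroup {group}: {auc:.2f} vs {overall:.2f}")

A report like this does not fix dataset bias, but it makes the gap visible to the clinicians and institutions who must decide whether the tool is safe for their population.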

Current Accountability Frameworks

Physician’s Ethical and Legal Duties

Under most legal and professional frameworks, physicians are responsible for their clinical decisions. A doctor’s obligation to exercise reasonable care cannot be offloaded onto a machine, no matter how advanced (5). In the United States, for example, malpractice laws hinge on whether a physician meets the expected standard of care. If an AI recommendation is clinically unreasonable or inconsistent with established guidelines, the physician is still expected to question or override it. Accordingly, courts have historically assigned liability to the physician in AI-facilitated errors, reasoning that the AI is merely a sophisticated tool (6).

Liability of AI Developers and Vendors

The degree to which AI developers can be held liable depends on the classification of the AI system:

  • Medical Device Status: Under U.S. Food and Drug Administration (FDA) regulations, certain AI tools are classified as “Software as a Medical Device (SaMD).” If an AI system is cleared or approved as a medical device and subsequently causes harm due to design flaws or insufficient testing, the developer could face product liability claims (7). Similar regulations exist in the European Union (EU) under its Medical Device Regulation (MDR).
  • Failure to Update or Monitor: Many AI models undergo “continuous learning” after deployment, retraining on new patient data to improve their performance. If a developer fails to properly monitor these updates or validate the algorithm’s evolving outputs, it could result in liability for not adequately ensuring safety and efficacy post-deployment (8).
  • Inadequate Warnings or Instructions: Developers must also provide accurate instructions and warn potential users about the algorithm’s limitations. Injured parties might have grounds for a legal claim against the developer if the instructions or disclaimers are insufficient or misleading (9).

Institutional Responsibility

Hospitals and healthcare systems can also share liability if they mandate the use of AI tools without proper oversight. For instance, if a hospital’s policy requires physicians to follow AI recommendations blindly (or strongly discourages deviation), the institution could be held responsible when things go wrong. Organizational responsibility includes ensuring that staff receive training on interpreting AI outputs and that robust fail-safe protocols exist for uncertain or anomalous AI recommendations.

A Shared Responsibility Paradigm

Given these overlapping obligations, legal scholars suggest a “shared responsibility” model, in which liability is proportionally distributed among physicians, AI developers, and healthcare institutions (10). The nature of the error (whether it originated from a physician’s misuse of the AI tool, a fundamental design flaw in the algorithm, or an institutional policy that constrained physician discretion) often dictates the degree to which each party may be held accountable.

Ethical and Regulatory Landscape

Ethical Directives

Professional bodies such as the American Medical Association (AMA) and the World Health Organization (WHO) emphasize the principles of beneficence and non-maleficence, even in the context of AI (11). They underscore the physician’s obligation to prioritize patient welfare and to employ AI with a critical eye, questioning outputs that conflict with clinical judgment or established guidelines. Ethical guidelines also advocate transparency and explainability in AI, urging developers to strive for interpretable systems so physicians can meaningfully vet the recommendations.

Regulatory Developments

  • United States: The FDA is increasingly involved in creating frameworks that clarify how AI-driven medical devices should be tested, monitored, and updated post-approval. A key focus is real-time monitoring (“Real-World Performance” monitoring) to detect performance drift in AI systems (12).
  • European Union: The proposed AI Act aims to categorize AI applications by their level of risk, imposing stricter requirements for “high-risk” tools used in healthcare settings. This regulation may help define explicit obligations for developers regarding data quality, transparency, and oversight (13).
  • Global Landscape: The WHO calls for international collaboration on AI governance, emphasizing data privacy and accountability mechanisms in cross-border healthcare contexts, where AI might be deployed in multiple countries with varied legal frameworks (14).

Practical Strategies to Mitigate Risk

Human-in-the-Loop Systems

Many healthcare institutions adopt “human-in-the-loop” or “hybrid intelligence” models, ensuring that clinicians review and validate AI outputs before final decisions are made (15). This arrangement preserves the physician’s critical oversight role while harnessing AI's speed and analytical power.
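As a rough illustration of what such a gate can look like in software, the sketch below (in Python, with all names invented for the example) holds every AI recommendation for clinician sign-off, flags low-confidence outputs for extra scrutiny, and records the final human decision. The confidence threshold is an assumed policy value, not a published standard.

    # Minimal sketch of a human-in-the-loop gate: the AI never acts on its own;
    # the clinician's recorded decision is what enters the chart.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    CONFIDENCE_FLOOR = 0.80  # assumed policy threshold for extra scrutiny

    @dataclass
    class Recommendation:
        patient_id: str
        suggestion: str      # e.g. "start anticoagulation"
        confidence: float    # model-reported probability or calibration score

    def review_and_decide(rec, clinician_id, clinician_accepts, final_plan):
        """Record the clinician's final decision alongside the AI output."""
        return {
            "patient_id": rec.patient_id,
            "ai_suggestion": rec.suggestion,
            "ai_confidence": rec.confidence,
            "needs_extra_review": rec.confidence < CONFIDENCE_FLOOR,
            "clinician_id": clinician_id,
            "clinician_accepted_ai": clinician_accepts,
            "final_plan": final_plan,  # may differ from the AI suggestion
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    # Usage: the clinician overrides a low-confidence suggestion.
    rec = Recommendation("pt-001", "start anticoagulation", confidence=0.62)
    record = review_and_decide(rec, clinician_id="dr-smith",
                               clinician_accepts=False,
                               final_plan="repeat imaging before deciding")

Beyond preserving oversight, the resulting record also creates an audit trail that can later clarify who decided what, and on what basis, if an error is litigated.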

Comprehensive Training and Auditing

Hospitals increasingly invest in staff training to familiarize clinicians with AI’s capabilities and limitations. Continuous auditing of AI performance, especially through prospective and retrospective outcomes analyses, can help identify shortcomings before they cause widespread harm (16).
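A minimal sketch of one such retrospective audit is shown below: it recomputes the model’s AUC month by month against a validation-time baseline and raises a drift alert when performance falls beyond a tolerance. The baseline value, the tolerance, and the column names are assumptions made for the example, not parameters from any specific system.

    # Minimal sketch of retrospective performance auditing: compare rolling
    # monthly AUC against the validation baseline and flag drift.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    BASELINE_AUC = 0.90   # AUC recorded at validation time (illustrative)
    TOLERANCE = 0.05      # acceptable drop before an alert (assumed policy value)

    def monthly_drift_report(outcomes):
        """outcomes: one row per prediction, with columns
        "date", "pred_score", and the eventual ground-truth "outcome"."""
        outcomes = outcomes.assign(
            month=pd.to_datetime(outcomes["date"]).dt.to_period("M"))
        rows = []
        for month, grp in outcomes.groupby("month"):
            if grp["outcome"].nunique() < 2:
                continue  # skip months where AUC is undefined
            auc = roc_auc_score(grp["outcome"], grp["pred_score"])
            rows.append({"month": str(month), "auc": auc,
                         "drift_alert": auc < BASELINE_AUC - TOLERANCE})
        return pd.DataFrame(rows)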

Explainability and Transparency Measures

The concept of Explainable AI (XAI) is gaining traction. In this approach, algorithms are designed to provide interpretable outputs or highlight key factors that led to a certain prediction (17). Such transparency aids clinicians in understanding whether the recommendation aligns with known pathophysiology or patient-specific clinical data, thus reducing blind reliance on AI.
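As one concrete, model-agnostic illustration of the idea (not the method used by any particular clinical product), the sketch below applies scikit-learn’s permutation importance: each input feature is shuffled in turn, and the resulting drop in AUC indicates how strongly the model relies on it. The variables model, X_test (assumed to be a pandas DataFrame), and y_test are placeholders for this example.

    # Minimal sketch of permutation importance as one explainability technique:
    # rank features by how much shuffling them degrades discrimination, so a
    # clinician can check whether the drivers are clinically plausible.
    from sklearn.inspection import permutation_importance

    result = permutation_importance(model, X_test, y_test,
                                    scoring="roc_auc", n_repeats=20,
                                    random_state=0)

    ranked = sorted(zip(X_test.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for feature, importance in ranked[:5]:
        print(f"{feature}: mean AUC drop {importance:.3f}")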


Conclusion

Addressing accountability in AI-driven healthcare is not a zero-sum game of assigning blame to the physician or the machine. Instead, it is a multi-stakeholder responsibility encompassing clinicians, developers, healthcare institutions, and regulators. The physician’s ethical and legal duty to protect patients remains paramount, but AI developers must also be prepared to shoulder liability if their products prove defective or misleading. In parallel, hospitals need to implement robust governance frameworks to ensure safe and responsible AI deployment.

As regulatory bodies refine laws to address AI’s unique challenges—such as continuously learning algorithms and black-box decision-making—these frameworks will serve as guardrails for all parties involved. Ultimately, the goal is to build an ecosystem where cutting-edge AI enhances patient outcomes without diluting clinical judgment, and where accountability mechanisms are transparent, fair, and focused on patient safety above all.


References

  1. Ferrucci, D. (2012). Introduction to “This is Watson.” IBM Journal of Research and Development, 56(3.4), 1–15.
  2. Esteva, A., Kuprel, B., Novoa, R. A., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.
  3. Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future—big data, machine learning, and clinical medicine. The New England Journal of Medicine, 375(13), 1216–1219.
  4. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206–215.
  5. AMA. (2019). Augmented Intelligence in Health Care. Retrieved from https://www.ama-assn.org/
  6. Price, W. N., Gerke, S., & Cohen, I. G. (2019). How should we judge the safety and effectiveness of AI applications in health care? AMA Journal of Ethics, 21(2), E125–E130.
  7. Food and Drug Administration (FDA). (2019). Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD).
  8. Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare (pp. 295–336). Academic Press.
  9. Mello, M. M., & Messing, N. A. (2020). Restrictions on the use of AI in healthcare. Nature Medicine, 26, 1327–1330.
  10. Sartor, G. (2020). Liability for AI decisions: Broadening the perspectives. European Journal of Risk Regulation, 11(1), 61–80.
  11. World Health Organization (WHO). (2021). Ethics and governance of artificial intelligence for health.
  12. Food and Drug Administration (FDA). (2021). Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.
  13. European Commission. (2021). Proposal for a Regulation on a European approach for Artificial Intelligence.
  14. World Health Organization (WHO). (2021). Global strategy on digital health 2020–2025.
  15. Holzinger, A., Dehmer, M., & Jurisica, I. (2014). Knowledge discovery and data mining in biomedical informatics: The future is in integrative, interactive machine learning solutions. In Interactive knowledge discovery and data mining in biomedical informatics (pp. 1–18). Springer.
  16. Kelly, C. J., Karthikesalingam, A., Suleyman, M., et al. (2019). Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17(1), 195.
  17. Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58.
