The rapid advancement and integration of Artificial Intelligence (AI) across diverse sectors, such as healthcare, have undeniably revolutionized capabilities and efficiencies. However, this technological leap also presents profound legal and ethical dilemmas, particularly concerning the "standard of care." Traditionally, the standard of care dictates the level of caution and competence expected from a reasonable person or professional in a given situation. AI's unique characteristics present a challenge to existing legal frameworks and necessitate a re-evaluation of how culpability and responsibility are assigned when AI-driven systems cause harm.
The Traditional Standard of Care and AI's Disruption
In law, the standard of care is the cornerstone of negligence claims. The question is, "what would a reasonably prudent person have done under similar circumstances?" If an individual's actions fall below this standard, and that failure causes harm, they can be held liable. This human-centric approach is ill-equipped for AI for several reasons.
As Professor Samir Rawashdeh explains, many advanced AI systems, particularly those relying on deep learning, operate as "black boxes": their decision-making processes are so complex that even their creators may struggle to fully explain why a particular output or recommendation was generated. This inherent opacity makes it exceedingly difficult to assess whether an AI system "acted" reasonably, or to pinpoint where a "defect" in its logic might lie.[1]
Unlike static software, AI systems can learn and evolve post-deployment based on new data. This dynamic nature means that the system that caused harm might not be identical to the one initially designed or even the one tested. This continuous evolution complicates the idea of a fixed "standard" against which the AI's behavior can be measured at the time of an incident.[2]
AI development and deployment involve a complex ecosystem of actors, including data providers, algorithm developers, system integrators, deployers, and end-users. When an AI system causes harm, attributing fault to a single entity becomes a formidable challenge. Is it the developer for flawed code, the data provider for biased training data, the deployer for improper implementation, or the user for overriding a correct AI recommendation?[3]
The existing legal principles of negligence and product liability are now being stretched to accommodate AI-related harms. Negligence, while still applicable to human actors overseeing AI, is difficult to apply directly to AI itself: courts may struggle to define a "reasonable AI" or to ascertain a duty of care for a non-human entity. However, human operators (such as a doctor using an AI diagnostic tool) will continue to be judged against the standard of care expected of their profession, including the responsible use and oversight of AI tools. If a doctor blindly follows a flawed AI recommendation without exercising their professional judgment, they could still be liable for medical malpractice.
The doctrine of product liability holds manufacturers strictly liable for injuries caused by defective products, regardless of fault. The key question here is whether an AI system, particularly software or an AI-driven component, can be classified as a "product." If so, is the defect in its design, manufacture (i.e., its coding or training data), or a failure to warn? The evolving nature of AI makes proving a "design defect" particularly challenging, as the "design" might be constantly changing through learning. Some legal scholars advocate for stricter product liability for AI, especially for high-risk autonomous systems, to incentivize developers to prioritize safety from the outset.[4]
If an AI system provides a service (e.g., a legal research tool offering advice), traditional service liability principles might apply, focusing on the professional standard of care in delivering that service. However, this still circles back to the difficulty of assessing the AI's own "competence."
Emerging Laws and Regulatory Approaches
Recognizing the inadequacy of existing laws, jurisdictions worldwide are actively developing new legal frameworks and regulatory approaches to address AI. The European Union's AI Act[5] adopts a risk-based approach, categorizing AI systems according to their potential to cause harm (unacceptable risk, high risk, limited risk, and minimal risk). High-risk AI systems (e.g., those used in critical infrastructure, medical devices, or law enforcement) face stringent requirements, including mandatory risk assessments, human oversight, robust data governance, transparency obligations, and conformity assessments before they can be placed on the market. This aims to ensure safety and accountability by imposing greater burdens on systems with higher potential for harm.[6]
Beyond the EU AI Act, efforts are underway to specifically address civil liability for AI. The proposed EU AI Liability Directive, though facing legislative hurdles, aims to ease the burden of proof for victims of AI-related harm, for instance by introducing rebuttable presumptions of causation for certain high-risk AI systems. Such initiatives seek to balance fostering innovation with ensuring adequate consumer protection.
Despite these efforts, significant challenges remain. Establishing causation can be complex when AI is involved in a chain of events leading to harm, and the foreseeability of an AI system's behavior, particularly for autonomous learning systems, is also difficult to prove. Furthermore, the global nature of AI development and deployment means that a patchwork of national and regional regulations could result in regulatory fragmentation, hindering innovation and creating enforcement challenges. The evolution of AI and the standard of care will likely involve a continuous interplay between technological advancements, legal innovation, and societal expectations.
In conclusion, Artificial Intelligence demands a fundamental re-evaluation of legal principles, especially the standard of care. The journey to effectively regulate AI and establish clear liability rules is complex and ongoing. It requires a multidisciplinary approach, blending legal expertise with technological understanding and ethical foresight, to ensure that AI's immense potential is harnessed responsibly and equitably within the bounds of justice and accountability.
For more information and/or assistance in matters relating to Artificial Intelligence & Standard of Care in Law, contact us via info@gmorinadvocates.org /godfrey@gmorinaadvocates.org and/or +254786437754.
[1] University of Michigan-Dearborn, 'AI’s Mysterious “Black Box” Problem Explained' (University of Michigan-Dearborn, 7 June 2023) <https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained> accessed 19 June 2025.
[2] Ashraf MH Taha, Zakaria KD Alkayyali, Qasem MM Zarandah, Bassem S Abu-Nasser and Samy S Abu-Naser, 'The Evolution of AI in Autonomous Systems: Innovations, Challenges, and Future Prospects' (2024) 8(10) International Journal of Academic Engineering Research (IJAER) 1 <http://www.ijeais.org/ijaer> accessed 19 June 2025.
[3] Trilateral Research, ‘Who Is Responsible for Artificial Intelligence Governance?’ (Trilateral Research, 3 November 2020) <https://trilateralresearch.com/artificial-intelligence/who-is-responsible-for-artificial-intelligence-governance> accessed 19 June 2025.
[4] BIICL, ‘Overview of the Legal and Institutional Framework in the United Kingdom’ (August 2004) <https://www.biicl.org/documents/267_overview_uk_-_aug_2004.pdf> accessed 19 June 2025.
[5] European Commission, ‘AI Act Explorer’ (Artificial Intelligence Act, 2024) <https://artificialintelligenceact.eu/ai-act-explorer/> accessed 19 June 2025.
[6] Trail ML, ‘EU AI Act: How Risk Is Classified’ (Trail ML, 2024) <https://www.trail-ml.com/blog/eu-ai-act-how-risk-is-classified> accessed 19 June 2025.