Theories of AI liability: It’s still about the human element

September 20, 2022 – Artificial Intelligence (AI) is a transformative technology changing nearly all sectors of commerce, from analytical modeling and e-commerce to health care. AI systems are capable of perceiving, learning, and forecasting outputs with minimal human intervention. They can store and analyze data to inform their decision making through Machine Learning (ML), a subset of AI that relies on unconventional computer algorithms. ML “teaches” these algorithms, and their functionality increases as a result.

Although AI systems offer potentially significant benefits to society, they also present new risks and legal challenges for liability. Without regulatory standards applicable specifically to AI systems, the theories of liability currently available still hinge on holding liable the human behind an AI system’s development or application rather than the AI itself.

This article explores the various theories of liability applicable to AI systems and their current limitations.


I. Contractual liability

Many AI companies use AI-developer-favored allocations of risk in their contracts, but these contractual provisions have not been tested in a court of law. Other contractual liabilities could also arise for the AI system and the AI developer: liabilities arising from the breach of a condition or of certain warranties in the contract, such as the implied warranty of fitness or quality of the AI system.

Contractual liabilities of an AI system and its developer(s) depend on whether the AI system involved is a “good” or a “service” under the applicable jurisdiction’s law. In the United States, for example, a contract for an AI system will typically be governed by the Uniform Commercial Code (UCC), which applies to contracts for the sale of goods. Traditionally, U.S. courts viewed all software as a good because generic software comes in tangible form, although there is a recent trend to the contrary.

Generic off-the-shelf software that includes an AI component would be considered a “good” under the UCC. If the software is customized for a particular user’s specific use such that the contract includes certain services (e.g., technical support), most courts will apply the Predominant Factor test to determine whether the software contract is for a good or a service: if the transaction is predominated by the development of the software rather than the ancillary services, courts will consider the software a good and apply the UCC to the contract.

The UCC imposes express warranties, the implied warranties of “merchantability” and “fitness for a particular purpose,” and the warranty of good title. Some AI system developers may disclaim these warranties through formal UCC disclaimers or with less formal language such as “goods sold as is” to limit their contractual liabilities. Without such disclaimer language, however, U.S. courts may look to the extent to which the software was customized for the buyer’s specific purpose in purchasing the AI system. Where the AI developer understands the purpose of the AI system, courts will likely reject any disclaimer because they will look to words or conduct relevant to the creation or limitation of a warranty.

Current U.S. case law treats software as both a good and a service. It remains to be seen whether U.S. courts will consider the AI systems in software products to be goods, and what threshold of customization will trigger the implied warranty of fitness for a particular purpose.

II. Tort liability theories

As with contractual liability, whether AI systems are products or services will also affect the applicability of traditional tort liability theories. Negligence applies to services such as data analysis or the use of AI/ML-enabled medical devices; product liability and strict liability would apply to flaws in product design or manufacture, or to a lack of warnings, that cause personal injury or property damage.

AI systems used in health care are good examples of recent AI tort liability. As of 2021, the U.S. Food and Drug Administration had approved nearly 350 AI/ML-enabled medical devices, the majority of which involve imaging/diagnostic technologies. Accordingly, health care providers and the developers of AI-enabled devices are subject to different theories of tort liability.

Health care providers may be subject to malpractice and other negligence liabilities, but not to product liability. Medical malpractice (Medmal) applies to physicians who deviate from the profession’s standard of patient care. If a physician uses an AI-enabled medical device for diagnosis or treatment of a patient and the use deviates from an established standard of care, the physician could be liable for improper use of that AI medical device.

Also, Medmal will likely attach when a physician fails to critically evaluate the recommendations of an AI-enabled medical device. Typically, physicians rely on AI systems in good faith to provide diagnostic recommendations. However, physicians must independently review these recommendations and apply the standard of patient care in treating the patient regardless of the AI output.

As with contractual liability of AI, U.S. case law on Medmal of physicians for failure to independently review an AI system’s recommendation is mixed and limited. Some U.S. courts have allowed Medmal claims to proceed against medical professionals where the professional relied on an intake form that did not completely reflect the patient’s medical history.

Other U.S. courts have held a physician liable for Medmal where the malpractice was based on errors by a system technician or nurse. Relatedly, a health care system employing a physician subject to Medmal would face vicarious liability. Currently, however, there is no established standard of patient care with regard to specific AI-enabled medical devices.

Products liability (PL) is often based on injuries caused by defective design, failure to adequately warn about risks, or manufacturing defects. While case law on PL is well defined in the U.S., its application in the context of AI systems is unclear. It is imaginable that PL will attach to an AI system and its developer if the AI system is used by health care professionals and results in a patient injury that raises issues of data transparency and accuracy, errors in the AI system’s software coding, or errors in AI outputs.

Strict liability (SL) is an alternative cause of action that would require the AI system user to show the product was inherently defective.

However, an injured patient faces challenges in establishing a prima facie case of PL for AI systems. First, the legal issue for PL or SL of AI systems is whether the AI’s defect existed when it left the control of the manufacturer or developer. The technical issue lies in the inherently adaptive nature of AI: an AI system continuously evolves in its analytical capacity by amassing more data through its use, which it analyzes to build its predictive model.

Whether the “defect” in an AI system existed at the time of its manufacture or arose in the course of its operation by the user remains a highly technical question and requires an industry consensus to help shape the appropriate standard.

Even if PL applies to AI systems, traditional upstream or downstream supply chain liability is another complicating factor. Traditional PL would apply to any product supplier in the commercial supply chain, but product liability up or down the supply chain may be severed if the defect existed when the product came into the retailer’s control and the retailer had no way to discover the defect while the product was in its control.

Another challenge injured patients may face with PL is demonstrating that there is a viable alternative design for the AI system. The “viable alternative” approach to AI design also remains a technical question, and there is no real consensus on appropriate AI design because the industry is still nascent despite its increasing prevalence. An AI design defect may include an insufficiently diverse data set, yet there is no industry consensus on how to “properly” design an AI system, including the threshold question of how to adequately diversify the data set fed into an AI system for its outcome predictions.

III. Conclusion

Notwithstanding the traditional liabilities explored above, the nascent nature of AI applications across various sectors limits the application of traditional liability theories to AI systems. This gap may be addressed when authorities implement a regulatory framework for AI liabilities (e.g., the European Commission’s proposed rules to regulate AI and the U.S. Consumer Product Safety Commission’s AI regulatory initiatives).

For now, businesses in the AI space should consider reducing liability uncertainty through contractual provisions until a clearer standard of care relating to AI systems is established by either industry professionals or the courts. This includes expressly stating that ML systems are designed to operate with direct human involvement.

Contractual warranties, indemnities, and limitations in each contract for an AI product can allocate liability in a way that businesses can anticipate despite the lack of a clear legal standard for AI liability. Thus, interested industry members should start liability mitigation efforts such as reviewing their policies and procedures on documentation of AI coding, and documenting how AI decisions are made and their risk profiles.

Linda A. Malek is a regular contributing columnist on AI and health care for Reuters Legal News and Westlaw Today.


Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

William A. Tanenbaum is a partner at Moses & Singer LLP and leader of the Data Law practice. His practice focuses on technology, outsourcing, IP, data, and AI, with particular application to the health care industry. His clients include technology companies and companies that acquire technology, as well as data providers. He can be reached at wtanenbaum@mosessinger.com.

Kiyong Song is an associate in Moses & Singer’s Healthcare and Privacy & Cybersecurity practice groups. He counsels clients in the fintech, health care, and health tech spaces on regulatory and compliance issues relating to the privacy and security of data under U.S. and European laws, clinical research, and medical devices. He can be reached at ksong@mosessinger.com.

Linda A. Malek is a partner at Moses & Singer LLP and chair of the firm’s Healthcare and Privacy & Cybersecurity practices. Her practice concentrates on regulatory, technology and business matters in the health care industry. She can be reached at LMalek@mosessinger.com.

