
Artificial intelligence (AI) is attracting widespread attention due to its significant potential to transform how businesses operate. However, new technology also brings the risk of harm from unforeseen defects, errors or misuse, which may lead to legal claims and financial liabilities for companies that opt to use AI products. Without the proper insurance policy or limits in place, a company could find itself exposed to significant AI-related liabilities that might otherwise have been covered by insurance. Finding the right insurance policy for these new risks can be challenging, however, since AI spans such a broad and growing area, including machine learning, deep learning, robotics, natural language processing and expert systems. Each type of AI brings its own set of considerations when determining the risks and the potential insurance coverage for those risks.
First, there are data privacy risks. AI technologies may allow cyber-criminals the ability to generate more sophisticated deepfakes to steal sensitive information. Other types of AI may automatically pull sensitive information about individuals across various data sources, increasing the risk of privacy invasion. Additionally, if those technologies are not sufficiently secure, any unauthorized access or malfunction of the AI that exposes the sensitive information could result in significant legal ramifications and penalties. Class action lawsuits have already been brought against tech giants like Alphabet and OpenAI accusing them of such privacy violations.
Second, there are intellectual property infringement risks. Particularly pertinent to generative AI products such as ChatGPT, liability may arise if the system scrapes data such as artwork, trade secrets or writings from the internet and that data is then used without permission.
Third, and more relevant to fans of the movie “The Terminator,” AI may inadvertently cause physical harm to users or their property if it malfunctions while users are handling a product that incorporates AI.
Finally, there is the risk of impermissible bias and professional error. In 2023, the Equal Employment Opportunity Commission settled its first AI hiring discrimination lawsuit, in which the AI hiring program automatically rejected applicants above a certain age. Similarly, companies and individuals relying on AI to make business decisions or provide professional advice face a heightened risk of legal liability if those decisions or advice are subsequently found to be in error.
There are a few different insurance policies to consider for some of these risks. For example, a company’s cyber insurance policy may cover a broad range of cyber incidents in which AI is a factor, such as data breaches, ransomware attacks and regulatory and media liability. For potential intellectual property infringement issues, specific intellectual property policies are available. Additionally, errors and omissions (E&O) insurance may offer coverage for negligence, errors or omissions made while delivering professional services and advice that relied on answers from generative AI. Moreover, if a company is planning to release a robotic product that interacts with consumers, commercial general liability (CGL) insurance and product liability insurance should be at the forefront to help mitigate claims alleging bodily injury or property damage. Furthermore, directors and officers (D&O) liability insurance or employment practices liability (EPL) insurance may, in certain instances, protect company executives from liabilities stemming from AI-related decisions.
Due to the novel and rapidly evolving nature of AI technology, most existing insurance policies do not explicitly address AI-related issues, leaving some uncertainty about whether these risks are covered. So far, however, courts have not necessarily treated claims that involve AI differently than any other claims. For example, in Citizens Ins. Co. of Am. v. Wynndalco Enterprises, LLC, the United States Court of Appeals for the Seventh Circuit analyzed whether conduct by a firm that sold access to a facial recognition database was covered by the firm’s business owner’s insurance policy. The court explained that ordinary contract interpretation still applies to determine the scope of coverage. In other words, even if an insurance policy does not discuss AI, a court will likely still look to the express language of the policy to determine whether a given act falls within coverage. If courts interpret insurance policies broadly enough to encompass AI-related claims, insurance companies may respond by excluding AI-related incidents, inserting lower sub-limits for AI-related losses or charging higher premiums to cover the additional risks. Furthermore, much as occurred when cyber policies were first introduced to the market, many experts forecast demand for separate, AI-specific policies. To date, however, such policies are not the norm.
Given the uncertainty, companies employing any type of AI should address two fronts: First, companies should consider the potential risks posed by their use of AI. Second, companies should review their insurance policies to identify whether, and to what extent, those risks may be covered.
If you have any questions about whether your existing insurance policies have the potential to provide coverage for AI-related matters, please do not hesitate to reach out to Carly Zagaroli or a member of Warner Norcross + Judd’s Insurance Industry Group.
Article courtesy of Warner Norcross+Judd.