In the 1990s, cyber insurance was a niche add-on to general business insurance policies, adopted mostly by tech-savvy or highly regulated companies. It wasn’t until 2016, when numerous high-profile data breaches underscored the need for dedicated cyber insurance across most industries, that the product gained broad traction. Growing threat exposure, along with data regulations such as GDPR, pushed cyber insurance into the mainstream as an essential safeguard for most business operations (Source: Marsh Commercial).
Now, in 2025, we are seeing a similar trend, this time with AI. Amid the recent AI boom and its rapid adoption in enterprise products, many companies mistakenly assume that general cyber insurance policies will also cover AI-related liabilities. However, AI introduces a unique set of risks, such as privacy breaches, IP infringement, and hallucinations, which stem from its complexity, unpredictability, and an evolving legal landscape.
In response, a small group of forward-thinking insurance providers now offer AI liability insurance as a standalone product, positioning it as the next essential coverage for companies that want to leverage AI. In this article, we explore emerging AI-specific risks, real-world examples of legal exposure, and how specialized insurance can offer both protection and strategic value.
Emerging AI-Specific Risks
Performance failures
AI systems can produce inaccurate outputs, such as financial forecasting errors or manufacturing defects, which may lead to operational disruptions or financial losses. For example, an AI quality control system in automotive manufacturing might overlook defects, resulting in costly recalls. In our recent article on the ongoing lawsuits in the healthcare space, we outlined several major cases against insurers accused of using faulty AI systems to deny legitimate claims. In one case, over 90% of appeals were successful, revealing systemic issues with the accuracy of the AI model.
Legal liabilities
- Intellectual property infringement: Using copyrighted or unlicensed data to train AI models can expose companies to lawsuits. Meta, for instance, is facing legal action for allegedly using pirated digital books to train its language models. Likewise, Thomson Reuters has sued the AI startup Ross Intelligence for training its legal AI on proprietary Westlaw data without authorization (Source: Costero Brokers).
- Bias and discrimination: Algorithmic bias in hiring or lending decisions can trigger regulatory penalties and reputational damage. UNT Dallas has outlined how bias can arise when AI lending models are trained on historical data that reflects decades of systemic discrimination, reproducing patterns of racial discrimination in mortgage lending. In 2022, Wells Fargo was accused of discriminatory lending practices over its use of an algorithm designed to assess applicants’ creditworthiness. The algorithm was found to assign higher risk scores to Latino and Black applicants, leading to loan denials at significantly higher rates than for white applicants with similar qualifications (Source: UNT Dallas).
- Hallucinations: Large language models are also known to generate convincingly false, or even dangerous, content. One example is the lawsuit against Character Technologies, in which the plaintiffs claim that the company’s AI product increases the risk of depression, suicide, and isolation among American youth. Another widely reported example of AI-generated false information is the case of a lawyer who used ChatGPT for legal research and unknowingly cited fabricated cases in court (Sources: Hunton & Forbes).
Moreover, the data used to train AI models can be deliberately manipulated by bad actors, a technique known as data poisoning. Such attacks can happen within active AI chatbot applications or can enter through the LLM supply chain, e.g., via pre-trained models, open-source datasets, and crowdsourced data.
Cybersecurity threats
The use of AI has introduced new cybersecurity threats, such as:
- Voice and video impersonation (deepfakes)
- Synthetic identity creation
- AI-enhanced phishing attacks
- Unauthorized access to proprietary AI tools
We are also seeing the rise of AI denial-of-service attacks, which, much like classic DoS attacks, aim to overwhelm AI models. Such attacks can consume excessive resources and impose large costs on model providers, especially those that rely on pay-as-you-go billing.
Moreover, AI models, particularly proprietary LLMs, are at risk of model theft, which could lead to intellectual property loss, reputational harm, or even data leaks (Sources: TrendMicro & IBM).
Regulatory non-compliance
New AI governance frameworks, such as the EU AI Act, impose strict requirements for transparency and accountability. As the act’s obligations phase in from August 2025, non-compliance with certain AI practices could result in massive fines of up to 7% of a company’s annual turnover (Source: EU Artificial Intelligence Act).
AI Liability Insurance: Mitigating the risks
As more companies integrate AI systems into their core operations, the range of legal, operational, and reputational risks is widening. Recognizing this shift, insurance providers are rolling out AI-specific liability coverage to protect businesses from financial losses associated with AI failures.
This new type of insurance can help businesses manage costs related to:
- Legal defense and settlements for claims involving algorithmic discrimination or IP violations
- Regulatory fines for non-compliance with AI governance laws
- Operational losses due to AI malfunctions or misuse
- Data breach or cyberattack recovery, particularly when AI systems are the attack target
For instance, if a company’s AI tool is found to have denied insurance claims unfairly or to have used copyrighted material in training, AI liability insurance may provide financial protection against legal claims and settlements. Additional business benefits of AI-specific coverage include:
Risk transfer: Even with rigorous testing, AI systems can fail unpredictably. Insurance can absorb costs, such as compensation for defective products due to faulty AI quality control, or legal defense costs in intellectual property disputes.
Regulatory alignment: AI coverage demonstrates proactive risk management and can help companies align with emerging AI governance frameworks like the EU AI Act. This can be especially valuable in industries where compliance is closely monitored and penalized.
Trust acceleration: Insured performance guarantees (e.g. “Our AI detects fraud with 99.9% accuracy or we compensate you”) can reduce customer skepticism and shorten the sales cycle.
“As AI becomes deeply integrated into core business operations, companies must begin treating AI risk with the same rigor as cybersecurity or financial risk. Unmanaged AI systems can lead to serious consequences including reputational damage, legal liability, and operational failures, especially when used in high-stakes areas like hiring, credit scoring, and customer engagement. Regulatory pressure is also growing, with new laws like the EU AI Act requiring strict compliance. Similar to how cybersecurity evolved into a board-level concern, AI risk must be embedded into enterprise risk management, with governance structures, monitoring, and response strategies in place. Risk transfer through insurance solutions can act as a critical tool for loss mitigation.”
Claire Davey, PhD ACII CISSP
Head of Product Innovation and Emerging Risk at Relm
Conclusion
As AI becomes more central to business operations, the associated risks are becoming very real. Waiting for regulations to settle or for the industry to catch up could leave companies exposed at the worst possible time. By getting ahead of the curve with AI liability insurance, companies can show customers, partners, and regulators that they take AI risks seriously. It’s a smart, proactive step that can build trust, support innovation, and keep your business resilient in the evolving AI landscape.
References
- How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care
- aiSure™ More AI Opportunity. Less AI Risk | Munich Re
- AI Gone Wrong? Now There's Insurance For That
- Artificial Intelligence, Real Risks: Insurance Coverage That Can Respond to AI-Related Loss and Liability
- AI and Insurance: Managing Risks in the Business World of Tomorrow | Herbert Smith Freehills Kramer | Global law firm
- Unveiling the Quasi-Regulatory Landscape: Empirical Insights into AI Liability Policies | Armilla
- Insurers Explore New AI Coverage Options, Potentially Filling Coverage Gaps for Policyholders Developing Generative AI