Regulation
The Role for Insurance within AI Regulation
Dr. Anat Lior, George Lewin-Smith, Mark Titmarsh
Jul 17, 2024
Bring The Change You Want to See: Creating the Regulatory Landscape for AI with Insurance
Introduction
Insurance is largely absent from the current discourse on AI regulation and policy. This brief proposes a closer look at the insurance instrument as a soft regulation approach to enable the safer adoption of AI technology, especially generative AI, into our commercial market. The insurance tool should be especially beneficial for corporations and enterprises adopting this technology and for AI providers and vendors supplying their products and services.
This brief presents five possible paths regulators could take to ensure that the insurance market plays a more proactive role within the AI regulatory landscape. A lighter-touch regulatory approach to AI could be more efficient, beneficial, and flexible, enabling the market to reach its own equilibrium on risk without overburdening corporations adopting or providing AI with ambiguous compliance requirements. Overall, this approach could better encourage innovation and ensure that the short- and long-term benefits of AI technologies are enjoyed by providers and consumers alike.
Throughout history, from the first industrial revolution through the invention of the automobile to the ubiquitous presence of the internet today, the insurance industry has acted as a backbone for innovators in their endeavors to develop emerging technologies for the benefit of society. As a risk-hedging and risk-managing mechanism, insurance can be a valuable asset to policymakers considering the path AI regulation should take while attempting to maintain a flexible framework.
Paths of Action for Legislators
In crafting a flexible AI policy for the inevitable integration of AI technology into our commercial market, legislators should utilize insurance policies to manage risks, ensure consumer safety and protection, and encourage innovation, alongside other regulatory approaches that lie beyond this brief's scope. The regulator's involvement should be clear, and a roadmap should be provided to institutions seeking to advance their business in emerging industries (similar to the regulatory insurance requirements regarding custodians of digital assets). To do so, legislators have multiple legal methods in their toolkit. This brief focuses on five possible approaches regulators could take to leverage the insurance industry in the context of AI development and distribution.
These five approaches sit on a spectrum from a light touch to a heavier one, in which legislators intervene more deeply and enforce more strictly to integrate insurance into the AI development sphere.
First, setting up a general AI insurance risk framework that provides overall guidelines for insurance companies when offering policies covering risks associated with AI.
Second, establishing a compensation fund, whether general to all AI or dedicated to particular AI activities, such as the use of generative AI in certain industries.
Third, offering limited liability to AI providers or users that have obtained specific coverage and implemented specific safety measures.
Fourth, setting up a reimbursement cap so that the government is responsible for compensation if a correlated event occurs and multiple policies covering AI activities are triggered at once. This is similar in essence to syndicated markets, such as Lloyd's, whose capital structure includes central funds in case one syndicate cannot pay.
Fifth, and most intrusive, setting up a mandatory insurance scheme whenever AI is involved, similar to how, today, those who own a car must carry an active insurance policy to drive.
Before elaborating on each of these approaches, this brief will first offer a short review of lessons that can be learned from previous policies the insurance industry has offered in the context of emerging technologies, focusing on cyber insurance.
Learning from History
The insurance industry has adjusted and evolved in the past as emerging technologies entered our commercial market. The most prevalent example is the creation of cyber insurance products as the risks associated with cyberspace became clearer over time. There are clear similarities between cyber insurance's evolution and the projected development of AI policies. Munich Re, which offers AI-specific products, has recently weighed in on this issue, predicting that AI-related losses will increase much as cyber losses and risks did, requiring risk managers, brokers, and insurance carriers to think about AI risks more systemically and strategically. If rising AI-related losses translate into insurance claims, traditional policies are likely to respond with exclusions, especially where these risks are correlated or covered silently ("silent AI" exposure). This mirrors the cyber context: cyber losses hit insurers, which led to exclusions in existing policies. Insurers' capacity then dried up for specific lines of insurance until new entrants arrived with an understanding of how to write this risk. A similar market cycle could occur in the AI context, leading to a similar outcome: a designated AI market with dedicated policies.
Cyber insurance policies are a complex and challenging product the insurance industry is still grappling with. Corporations developing AI, insurance carriers aiming to offer coverage, and regulators can avoid past mistakes made in the cyber context to mitigate risks to AI users. This can manifest in establishing in-house compliance standards to ensure that corporations develop and deploy AI safely and that consumers use it as intended and with proper safety measures. These precautions could be evidenced by ISO, NIST, and SOC attestations, as well as by private companies operating in this field. The latter may offer basic education as a loss-prevention service, leading to a better understanding of this technology and its potential risks and to a better approach to its development and usage.
Another interesting example is a sub-evolution of the traditional errors and omissions policy in the form of a Tech E&O policy. While cyber insurance protects the policyholder from data leaks, data breaches, and cyberattacks, Tech E&O covers businesses in cases of product failures, negligence, and mistakes related to the technology being used. Some insurers have been offering this coverage in the context of AI when third parties allege a failure (i.e., an error or omission) of the technology provided by AI developers. It is unclear whether this type of policy will be sufficient to cover AI-associated risks, as some aspects might be excluded given the inherently opaque nature of AI and its lack of predictability, unlike the other emerging technologies that were in view when this policy was first introduced.
Another interesting example is the history of mandatory insurance in the nuclear and space contexts. Nuclear insurance is mandatory, highly regulated, and backed by federal funds. The Price-Anderson Act created a three-tier insurance mechanism to handle claims involving harm from nuclear power plant accidents. Given the wide scope of damages involved in a nuclear accident, this industry cannot rely solely on insurance companies, leading the government to act as a reinsurer when needed. In the space travel industry, the shift from solely governmental space travel by NASA to privately operated space tourism has raised questions about the risks associated with space travel and who is liable when they materialize. In the US, an insurance policy must be in place to receive a license from the Federal Aviation Administration (FAA). The FAA has the discretion to decide how much coverage the licensee must carry against third-party claims for bodily injury or property damage, capped at $500 million. Both the nuclear and space industries present systemic risks similar to those of AI. Historically, however, it was only the government and its federal agencies conducting these activities, unlike AI, which is democratized in nature and is being developed by both the public and private sectors. Thus, drawing similarities among the three might be misleading. Nonetheless, the cases of nuclear technology and space exploration demonstrate the importance of insurance in enabling the development of these technologies and the important role the government repeatedly takes upon itself to enable emerging technologies' growth.
Five Possible Paths
The five approaches detailed below are in no way an exhaustive list of potential paths regulators could and should take to ensure insurance has an active and useful role in AI policymaking. They are offered mostly as a starting point for those interested in this line of thought, and they aim to present the underappreciated role insurance can play in mitigating risks associated with emerging technologies. The paths below are presented on a spectrum from a light-touch to a heavier-touch approach, in which the legislator is significantly more actively involved in shaping contemporary insurance law.
First, the regulator can set up a general AI insurance risk framework providing general principles and guidelines for insurance carriers as they offer policies covering AI-associated risks. A good example of this approach is the 2021 New York Cyber Insurance Risk Framework, which aims to “foster the growth of a robust cyber insurance market that maintains the financial stability of insurers and protects insureds.” The Framework identifies six best-practice priorities, including establishing cybersecurity expertise, educating policyholders, and evaluating systemic risk, all of which are also highly relevant in the AI context. This approach does not offer clear regulatory instructions, as it only suggests general guidelines to insurance companies working in this field. Nonetheless, it provides flexibility and a soft regulatory nudge in the form of standards and principles, enabling insurers to better understand what is expected of them and how they can offer better services this early in the process of AI's implementation into our commercial market. As such, it is a welcome starting point in a novel field where regulatory sources are lacking.
Second, the regulator could establish a compensation fund, either general to all AI activities or specific to AI sub-fields. A similar proposition was made in 2017 by the European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics. Section 59(d) of the resolution considers the creation of “a general fund for all smart autonomous robots” or “an individual fund for each and every robot category and whether a contribution should be paid as a one-off fee when placing the robot on the market or whether periodic contributions should be paid during the lifetime of the robot.” This discussion did not consider generative AI and mostly focused on the physical manifestations of AI technologies. Nonetheless, the notion of setting up a fund and splitting the contribution between AI developers and users has been repeatedly brought up, especially in the context of autonomous vehicles. This approach seems premature at the moment, given the state of the technology and the high burden it puts on developers and users of AI. It does provide a safety net in a field that currently alarms society, but it is likely to do so in an unstructured way, as no clear categorization of AI usage currently exists, especially given the ubiquity of generative AI. Thus, it seems like a problematic measure to pursue.
A related path is that of a group captive founded by large corporations such as Microsoft and Goldman Sachs. These large publicly traded companies could set up an inclusive “industry group captive” to manage AI-associated risks instead of purchasing an insurance policy from a traditional insurance provider. This industry group captive could benefit the entire industry, giving it a comprehensive and stable safety net and thus signaling to regulators, users, and developers of AI the strength and potential growth of the field. We hope to elaborate on this innovative tool in a follow-up blog post.
Third, regulators could grant limited liability where AI providers or users have obtained specific coverage and consulting from risk management companies, including but not limited to insurers, attestation firms, and audit firms. Combined with the previous path, section 59(c) of the resolution offers a similar line of thought: “The manufacturer, the programmer, the owner or the user to benefit from limited liability if they contribute to a compensation fund.” This approach incentivizes AI developers and users to purchase policies in order to benefit from limited liability under certain circumstances, thereby strengthening the safety net insurance products offer. On the other hand, it is still difficult to decide which specific coverage or services should be mandated, given the novelty of AI and the lack of information surrounding its risk mitigation processes. As a result, this seems like an appropriate method to implement once more information is gathered and it becomes clearer which safety measures could and should be mandated to benefit from limited liability.
Fourth, the regulator could set up a reimbursement cap whereby the government compensates policyholders if a correlated event involving AI occurs and multiple policies are triggered simultaneously. This is similar to the structure set out in the Terrorism Risk Insurance Act (TRIA). The TRIA mechanism is triggered once insurers pay losses of $200 million following an act certified as terrorism by the Secretary of the Treasury. In this case, eligible insurers can “recoup reinsurance for 80 percent of their payments beyond their deductible, which is calculated as 20 percent of the insurer’s previous year’s direct earned premiums.” The aggregate cap on combined government and private insurer payouts is $100 billion annually. (A simplified numerical sketch of this mechanism appears after the fifth path below.) In the AI context, this type of mechanism can help alleviate insurers’ concerns about correlated or aggregated risks and diluted revenue, and it can encourage them to minimize exclusions, knowing this backstop exists. Conversely, AI risks, even existential ones, may not be similar enough to terrorism and other wide-scope risks generated by malicious third parties, against which the government is responsible for protecting its citizens. Thus, this path might face political and social resistance, with critics preferring to invest the funds in other, more pressing matters. Still, given the national security significance of AI and the high cap that would be set, implementing such a method seems feasible as long as some bipartisan support is established.
Lastly, the regulator could mandate a first- and third-party liability policy scheme when AI is involved, whether across the board or in specific, riskier situations or sectors. The most intuitive example of this approach is how insurance is mandated in the automobile industry. Of the paths listed above, this is potentially the most burdensome way the regulator could get involved at the intersection of insurance policies and AI, but it sets an important baseline for consumer protection. The social utility of AI must be extremely clear, similar to how we view automobiles as an essential aspect of our lives; otherwise, it will be difficult to implement this path successfully without social and political resistance. It will also require a rapid expansion of insurance markets dedicated to AI and commitments from insurers to provide this cover. Nonetheless, if such utility is established, this mandatory scheme will benefit all sides, focusing on specific sectors of AI development and usage that present more risks (e.g., medicine and banking). By mandating coverage in specific instances of AI usage that present broader risks but are still considered essential, the regulator will ensure that AI activities can continue with an added layer of protection.
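To make the fourth path's TRIA-style arithmetic concrete, the sketch below works through a recoupment calculation. It is a minimal illustration, assuming only the figures cited in this brief (the $200 million program trigger, the 80 percent federal share, the 20 percent deductible rate, and the $100 billion aggregate cap); the function name and example numbers are hypothetical, and the statute's many qualifications are omitted.

```python
# A minimal sketch of the TRIA-style recoupment arithmetic described above.
# Percentages and dollar figures come from this brief's summary of TRIA;
# the function name and example numbers are hypothetical, and the statute's
# qualifications are omitted.

PROGRAM_TRIGGER = 200_000_000    # industry-wide insured losses needed to trigger the program
FEDERAL_SHARE = 0.80             # government covers 80% of losses above an insurer's deductible
DEDUCTIBLE_RATE = 0.20           # deductible = 20% of prior-year direct earned premiums
PROGRAM_CAP = 100_000_000_000    # aggregate annual cap on combined payouts (not modeled below)


def federal_reimbursement(insurer_losses: float,
                          prior_year_premiums: float,
                          industry_losses: float) -> float:
    """Government share of a single insurer's losses from a certified event."""
    if industry_losses < PROGRAM_TRIGGER:
        return 0.0  # program not triggered; the insurer bears its losses alone
    deductible = DEDUCTIBLE_RATE * prior_year_premiums
    excess = max(0.0, insurer_losses - deductible)
    return FEDERAL_SHARE * excess


# Hypothetical example: an insurer with $2B in prior-year premiums pays $1B
# in losses from a certified event causing $5B in industry-wide losses.
print(federal_reimbursement(1e9, 2e9, 5e9))  # 480000000.0
```

With these example figures, the insurer's deductible is $400 million, so the government reimburses 80 percent of the remaining $600 million, or $480 million; the insurer retains the rest, subject in practice to the aggregate annual cap.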
Given the current trajectory of AI development and usage, the lighter-touch paths described above seem better suited to combining flexibility with innovation for AI developers, corporations using AI, and insurance carriers operating in this field. Insurance companies will be better positioned to leverage their data and risk-management capabilities if regulation provides them with a clearer starting point.
Conclusion
Four possible scenarios might lead to the creation of a new insurance market. First, unexpected losses resulting from AI might lead to the repricing of traditional policies and eventually to exclusions of risks associated with AI, or at least parts of them, producing a specialized market similar to the path cyber insurance took. Second, given the opaque and unpredictable nature of AI, new and emerging first-party losses may arise. These will require their own dedicated insurance market and capacity, which will form organically as new approaches to underwriting these risks are developed. Third, specialist coverage mandated by regulation for AI risks would create a specialized market. Lastly, in the future, when humans are potentially taken ‘out of the loop’ given the agentic capabilities of AI, traditional policies will fall short, and insurance carriers will need to develop a new concept of AI coverage, one that answers to algorithmic decision-making.
Whichever scenario materializes, regulators have a role in ensuring that the benefits of purchasing an insurance policy are available to both developers and users of AI. Setting up a high-level global standard is important and might be inevitable, given how this technology spreads globally. Nonetheless, given the fast-paced growth of AI and the lack of a clear, unified regulatory response, especially in the US, individual jurisdictions should actively pursue ways in which insurance can help maintain the safety of AI while encouraging AI developers to continue to innovate. This will help both sides hedge and manage their risks, an essential aspect of adopting emerging technologies.
Sources and Further Readings
Insurance Circular Letter No. 2 (2021), New York State Department of Financial Services (Feb. 4, 2021), https://www.dfs.ny.gov/industry_guidance/circular_letters/cl2021_02 [https://perma.cc/NA5P-KGE7].
European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html [https://perma.cc/5272-8MFL].
Kenneth S. Abraham & Robert L. Rabin, Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era, 105 Va. L. Rev. 127 (2019).
Carrie Schroll, Splitting the Bill: Creating a National Car Insurance Fund to Pay for Accidents in Autonomous Vehicles, 109 Nw. U. L. Rev. 803 (2015).
Jin Yoshikawa, Sharing the Costs of Artificial Intelligence: Universal No-Fault Social Insurance for Personal Injuries, 21 Vand. J. Ent. & Tech. L. 1155 (2019).
Russ Banham, AI Insurance Takes a Step toward Becoming a Market, Carrier Management (Nov. 28, 2022), https://www.carriermanagement.com/features/2022/11/28/242708.htm?bypass=e3a499e2a59c457bbaeb794a2ae30c46.
Anat Lior, Innovating Liability: The Virtuous Cycle of Torts, Technology and Liability Insurance, 25 Yale J.L. & Tech. 448 (2023).