
A recent paper by Guy Carpenter underscores the increasing risks associated with the widespread adoption of artificial intelligence (AI), particularly in terms of cyber event aggregation. As AI technology rapidly evolves and becomes more integral to business operations, the potential for large-scale cyber incidents—stemming from both intentional attacks and accidental failures—has significantly increased.
The report identifies four primary areas where AI contributes to these risks: software supply chain vulnerabilities, the expansion of attack surfaces, increased data exposure, and the growing integration of AI in cybersecurity operations. Each of these factors heightens the likelihood of cyber events that could impact multiple entities simultaneously, making it a critical area of concern for businesses and insurers alike.
When companies deploy AI, whether within their own networks or via third-party providers like ChatGPT or Claude, they open themselves up to new vulnerabilities. For instance, if a third-party AI model is compromised, all businesses relying on that model are at risk. Additionally, AI models process vast amounts of data, which can be manipulated either maliciously or accidentally, leading to severe consequences such as data breaches, loss of service availability, or even network-wide breaches.
A specific concern highlighted is the possibility of "jailbreaking," where an AI model is tricked into bypassing its intended restrictions, exposing sensitive information or causing unintended actions within a network. The report also notes the risks associated with AI’s reliance on large, often sensitive datasets for training. The centralization of this data increases the potential for catastrophic breaches if security measures fail.
While AI is celebrated for its potential to enhance cybersecurity—automating complex tasks and responding to threats at machine speed—these benefits come with significant risks. Automated systems, if not carefully managed, could exacerbate issues by taking inappropriate actions without human oversight, such as wrongly quarantining systems or disrupting network operations.
Despite these risks, Guy Carpenter encourages re/insurers to view AI as a significant growth opportunity. The paper suggests that with careful risk management, insurers can profitably underwrite AI exposures while mitigating the aggregation risks. To do so, it is essential for insurers to ask detailed questions and collect robust data on how AI models are developed, deployed, and tested. Understanding the safeguards in place to protect data integrity, confidentiality, and availability is key to managing these risks effectively as AI continues to scale with technological advancements.