The question for our industry is no longer whether AI will be utilized by adversaries, but whether the insurance model can evolve quickly enough to remain a viable backstop for the digital economy, writes Guy Simkin, co-founder and CEO at Cyber Insurance Academy


Cyber insurance was built on the assumption that cyber risk, while complex, was ultimately human in scale.

We operated on the premise that attacks required manual effort, followed observable patterns, and developed at a pace that insurers could model and price with reasonable confidence.

That landscape is now fundamentally changing. The rise of AI-enabled cyberattacks represents a structural shift in how risk must be assessed and absorbed.

When threats can be launched at machine speed, the traditional foundations of cyber underwriting (historical data and human-centric response times) become increasingly strained.

The question for our industry is no longer whether AI will be utilized by adversaries, but whether the insurance model can evolve quickly enough to remain a viable backstop for the digital economy.

This is not about the emergence of a new category of “AI insurance”. The issue is more fundamental.

AI is rapidly becoming a new set of tools in the hands of attackers, tools that radically change how cyberattacks are executed, how quickly they spread, and how difficult they are to contain. What was once human paced is now machine driven.

An Anthropic Inflection Point

In September 2025, we reached a significant milestone in the threat landscape.

A highly sophisticated AI-orchestrated espionage campaign was uncovered, marking one of the first documented cases where an AI model executed the vast majority of an intrusion lifecycle autonomously.

This was no longer a case of a human hacker using AI to write a better phishing email.

According to Anthropic, threat actors leveraged its Claude Code tool to handle reconnaissance, vulnerability discovery, exploitation, and data exfiltration with minimal human intervention. In key phases, the AI operated at a velocity that exceeded human capability.

This transition from “AI-assisted” tools to “AI-autonomous” agents is the proof of concept the industry has feared. It confirms that the frequency and efficiency of breaches can now outpace traditional actuarial expectations. What was once a roadmap item in a threat report is now a live reality on the claims desk.

The Great Compression

The traditional cyber kill chain - once a slow, painstaking and resource-intensive process that required significant human capital - is now being dramatically compressed into minutes or hours.

The shift is most visible in the transition from targeted effort to autonomous efficiency, with the “reconnaissance” phase becoming the first casualty.

Where human actors once spent days mapping a network, AI can now “see” an entire architecture almost instantaneously. It autonomously scans vast environments, identifying “crown jewels” and prioritizing vulnerabilities with surgical precision before a human defender even receives an initial alert.

Once inside, the friction of lateral movement evaporates. Rather than analysts poring over logs and access paths to find their next move, AI agents navigate networks as a continuous, automated process. They identify key data repositories and bypass internal barriers at machine speed, operating with a persistence that human actors simply cannot replicate.

The implications for defenders are profound and unsettling. Unlike a human adversary, an AI agent does not suffer from fatigue, does not overlook misconfigured ports, and does not require weekends off. It represents a permanent, high-velocity presence that simply outpaces the human-centric responses designed to stop it.

As PwC recently warned, we are witnessing a transition beyond discrete exploit generation toward multi-stage campaigns executed with total autonomy. These “algorithmic hacker armies” dwarf the productivity of traditional threat actors, allowing for complex breaches to be deployed at a fraction of their historical cost.

Cyber Insurance: A Business Model Under Siege

For the insurance sector, the Great Compression strikes at the very heart of the cyber insurance contract.

Since its inception, cyber insurance has been built to cover a human-led risk. Underwriting frameworks, pricing models, policy terms, and incident response assumptions all reflect the limitations of human adversaries: finite time, finite resources, and observable patterns of behavior. Those assumptions are now eroding.

As we move into the era of the autonomous agentic threat actor, three core pillars of the insurance model are beginning to crack:

The Erosion of Underwriting Cycles

Traditional cyber underwriting operates on a snapshot in time. We assess a company’s posture, analyze historical breach data, and price a policy intended to remain valid for the year ahead. That approach only works when risk evolves slowly enough for the snapshot to remain meaningful.

Recent trends in the use and abuse of AI break this premise. We now know that an adversary can discover, prioritize, and exploit vulnerabilities in minutes, rendering yesterday’s assessment obsolete.

Historical loss data (already an imperfect proxy) becomes a weak predictor of an attacker that adapts continuously, learns autonomously, and operates at machine speed. The temporal “buffer” that once allowed underwriters to price risk with confidence is rapidly disappearing.

Unstable Loss Profiles & Claims Overload

This compression has direct consequences for claims and response.

Faster, cheaper, and more scalable attacks translate into higher incident frequency and greater claims volatility. Policies calibrated for human-scale ransomware events now face the prospect of dozens or hundreds of AI-driven intrusion attempts within a single coverage period.

More troubling still, the nature of loss is changing. AI-enabled espionage does not announce itself with encrypted files and ransom notes.

Instead, it enables silent, persistent exfiltration of intellectual property, customer data, or strategic plans, resulting in losses that are harder to detect, harder to attribute, and far harder to quantify. For insurers, that ambiguity complicates both claims handling and capital planning.

Destabilized Feedback Loop

Cyber insurance did not start from a position of strength. For much of its existence, it struggled for legitimacy - dismissed as opaque, volatile, and reactive. Only after a painful hard-market reset did the industry begin to rebuild trust through discipline and restraint.

AI-enabled attacks now threaten to push the market back into survival mode. As losses become faster, cheaper to execute, and harder to predict, insurers are once again forced to retreat behind stricter terms and higher prices.

Policyholders, faced with escalating costs, may under-insure or exit the market entirely. Years of progress in rehabilitating cyber insurance’s image risk being undone not by poor underwriting, but by a threat environment that has outpaced the model itself.

Can We Hope to Beat Them?

A common rebuttal is that defenders can deploy AI too - building AI-powered detection, response, and remediation tools capable of matching attackers at machine speed. There is truth in that argument. But it obscures a more uncomfortable reality: cybersecurity remains fundamentally asymmetric.

Attackers must only succeed once. Defenders must succeed every time.

AI compresses that imbalance. When reconnaissance, credential harvesting, and lateral movement are executed at machine speed, even minor oversights - a stale credential, an unpatched service, a forgotten access path - become immediate points of exploitation. The margin for error, already thin, is narrowing further.

This is no longer a hypothetical concern. Anthropic’s recent disclosure suggests defenders were fortunate.

The campaign was detected, and the underlying “jailbreak” techniques identified, before irreversible damage occurred. But there is no guarantee future iterations will be interrupted as early. As adversaries refine their methods, automated actions will increasingly blend into normal network behavior, making detection slower and attribution harder.

Crucially, parity in tools does not translate to parity in outcomes. Attackers scale with near-zero marginal cost. Defenders remain constrained by budgets, operational complexity, and the necessity of human oversight.

AI may enhance cybersecurity, but it does not eliminate its structural disadvantages. If defensive AI is treated as optional or incremental, it risks accelerating the failure of defenses designed for a slower, more forgiving threat environment.

Toward a New Defense Doctrine

Responding to this shift requires more than incremental upgrades. It demands a rethinking of how digital systems are secured and governed.

For enterprises, bolting AI onto existing detection stacks will not be enough.

Identity architectures must be redesigned, access controls made granular and policy-driven, and continuous verification treated as a baseline assumption. AI agents, both internal and external, must be recognized as first-class security subjects, governed by enforceable constraints rather than implicit trust.

Governments and technology providers face their own reckoning. The unrestricted release of powerful general-purpose AI tools carries systemic risk that extends beyond individual breaches.

Guardrails, transparency obligations, and responsible deployment standards are no longer optional ethical considerations; they are prerequisites for economic stability.

Insurers, too, cannot remain on the sidelines. Static underwriting models must evolve into dynamic frameworks that incorporate real-time telemetry, AI-driven threat intelligence, and continuous risk evaluation. Stronger baseline controls, live visibility, and collaborative incident response will increasingly become conditions of coverage, not competitive differentiators.

And yet, even if all of this is achieved, a harder question lingers. If AI can already orchestrate complex, largely autonomous intrusions, the cyber domain is no longer a contest of humans versus humans. Have we lost the cyber war? Not yet.

But if the industry continues to underestimate the offensive power of AI, or relies on models built for a slower and more predictable adversary, the answer may arrive sooner than expected. In a world where attacks unfold faster than we can detect, respond, or insure against them, standing still is not neutrality. It is surrender.

About Cyber Insurance Academy

The Cyber Insurance Academy was cultivated by the leading minds in cybersecurity and insurance, with a mission to help cyber insurance professionals stay ahead of the curve. We aim to address the industry’s educational gap and technical challenges, while fostering a vibrant community of like-minded professionals.

Our first-of-its-kind online campus blends a Gold-Standard CII-CPD accredited course, expert-led certification courses, industry-leading events, a top-tier content library, and a supportive, diverse and professional network that equips you with the confidence and expertise to lead in cyber insurance and make an impact.