For re/insurers, AI governance should be considered a cyber control, not a separate compliance initiative, writes Matthew Geyman, managing director, Intersys
The conversation around shadow AI has moved on. Most re/insurance organisations now recognise the risk of unapproved AI use within their business. The challenge for 2025 is not awareness but control.

The recent IBM Cost of a Data Breach Report quantifies the scale of the issue: shadow AI was implicated in one in five breaches, adding an average of $670,000 to incident costs for organisations with high levels of unsanctioned AI use.
Despite this, 63% of companies surveyed had no AI governance framework, and only 34% had ever audited for unapproved AI tools.
These statistics illustrate a widening gap between rapid AI adoption and equally rapid risk accumulation. The next phase must be technical and procedural maturity: moving from informal experimentation to governed, observable AI operations.
Shadow IT, accelerated by AI
Shadow AI is the natural evolution of shadow IT.
The principle is the same: employees deploy unapproved software or services to increase productivity, bypassing formal IT controls. What has changed is the scope and potential impact.
AI platforms are deeply integrated into workflows, often granted access to company data stores, APIs and communication systems.
Uncontrolled AI usage introduces risk at multiple layers. There is data leakage when sensitive information is submitted to external models without encryption or anonymisation.
There is exfiltration risk where free or trial AI platforms monetise user data. There are also integrity and provenance risks, where outputs generated by unsanctioned tools find their way into underwriting models, claims systems or client-facing documentation.
In a sector that depends on confidentiality, data accuracy and regulatory compliance, these exposures are not theoretical. They are active vulnerabilities that can bypass even mature cyber defences.
Operational exposure in the re/insurance context
Re/insurance operations are data-intensive and interconnected, which amplifies the effect of governance gaps.
Claims automation, pricing optimisation, portfolio analytics and client servicing are all increasingly AI-enabled.
Each of these functions relies on controlled data flows and strong model assurance. Introducing unverified third-party models into that environment undermines both.
From a systems perspective, the key technical challenges include:
● Authentication drift: where generative AI plug-ins connect to corporate systems without central credential management.
● Data provenance loss: where AI outputs cannot be traced back to original, validated data sources.
● Cross-domain privilege escalation: where AI models integrated with productivity platforms such as Microsoft 365 or Slack access data beyond user clearance levels.
● Regulatory exposure: as AI-assisted decisions intersect with regulated processes such as underwriting or claims adjudication.
Yet a subtler but equally significant risk lies in the quality of AI-generated information itself. AI hallucinations, instances where large language models present false or misleading information as fact, introduce a new category of operational exposure.
One root cause is spurious correlation, the LLM equivalent of “correlation does not equal causation”. It arises when models infer false patterns from data and present coincidental relationships as truths.
In an underwriting or risk assessment context, such distortions could propagate into pricing models, claims evaluations or even regulatory submissions. This is where human oversight remains indispensable, mirroring the function of peer review or managerial sign-off in other critical decision processes.
For a light-hearted illustration, consider some of the more absurd but statistically “significant” spurious correlations that have been observed in real-world data.
For example, the number of insurance underwriters in Florida has been found to correlate with the average number of comments on the Technology Connections YouTube channel.
The number of associate degrees awarded in fire control and safety correlates with the number of insurance claims adjusters in Illinois.
The number of insurance claims adjusters in New York correlates with petroleum consumption in Venezuela. And, perhaps most bizarrely, bulk and canned skimmed evaporated milk consumption correlates with the number of insurance claims adjusters in Washington State.
These examples are entertaining, but they illustrate a serious point: AI models, left unchecked, can easily mistake coincidence for causation, turning noise into narrative. Without human validation, these errors can cascade into operational, reputational and financial risks.
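To see how little data it takes to manufacture such a “correlation”, the short sketch below is illustrative only: it uses randomly generated numbers and hypothetical variable names rather than any real insurance dataset, comparing one series against a thousand unrelated ones and counting how many pass the usual significance test by chance alone.

```python
# Illustrative sketch only: with enough unrelated series, some will correlate
# "significantly" by pure chance. Variable names are hypothetical stand-ins,
# not real metrics.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
years = 12                                          # e.g. one observation per year
claims_adjusters = rng.normal(size=years)           # stand-in for a real metric
candidate_series = rng.normal(size=(1000, years))   # 1,000 unrelated random series

spurious = [
    (i, r, p)
    for i, series in enumerate(candidate_series)
    for r, p in [pearsonr(claims_adjusters, series)]
    if p < 0.05                                     # the usual "significance" threshold
]
print(f"{len(spurious)} of 1000 unrelated series correlate 'significantly'")
# Expect roughly 50 hits (5%): coincidence dressed up as signal.
```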
Even where the intent is benign, the outcome can be severe. A misconfigured AI assistant drawing on unrestricted SharePoint data can inadvertently surface confidential information such as salary data, actuarial assumptions or client pricing models.
Governance as a cyber control
The regulatory environment is beginning to close in.
The EU AI Act and the UK’s AI Assurance Roadmap are explicit in placing responsibility on data controllers and business leaders for governance of AI use.
For re/insurers, AI governance should therefore be considered a cyber control, not a separate compliance initiative.
Pragmatically, that means integrating AI monitoring into existing cyber and IT service management tooling.
Security teams can use Microsoft Defender or Cloud App Security, for instance, to detect unapproved AI traffic patterns or API connections. Policy-based access control can limit which applications can invoke generative AI services.
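By way of illustration only, the sketch below shows the simplest form that detection logic could take. It assumes a hypothetical CSV export of proxy or firewall logs rather than any specific vendor API; the field names, domain lists and approved-tool list are placeholders to be adapted to an organisation’s own tooling.

```python
# A minimal sketch, not a Defender / Cloud App Security integration: it reads a
# hypothetical CSV export of proxy logs (user, destination_host, bytes_out) and
# flags traffic to known generative AI endpoints that are not on the approved list.
import csv
from collections import Counter

APPROVED_AI_HOSTS = {"copilot.contoso-tenant.example"}   # hypothetical sanctioned service
KNOWN_AI_HOSTS = {"api.openai.com", "chat.openai.com",
                  "claude.ai", "gemini.google.com"}       # extend from threat intel feeds

def flag_shadow_ai(log_path: str) -> Counter:
    """Sum outbound bytes to unsanctioned AI destinations, per user and host."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in KNOWN_AI_HOSTS and host not in APPROVED_AI_HOSTS:
                hits[(row["user"], host)] += int(row.get("bytes_out") or 0)
    return hits

if __name__ == "__main__":
    for (user, host), bytes_out in flag_shadow_ai("proxy_export.csv").most_common(10):
        print(f"{user} -> {host}: {bytes_out} bytes uploaded")
```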
But beyond technical monitoring, governance must account for the cognitive limitations of AI systems, ensuring validation processes detect both data leakage and hallucination-driven misinformation.
Formal AI governance policies should therefore include review mechanisms for factual integrity, and audit trails that confirm outputs are human-verified before business use. Automation is key, but oversight cannot be delegated to the machine. Governance should be enforced by architecture and verified by people.
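As a minimal sketch of what such an audit trail might record (the class and field names below are hypothetical, not a prescribed schema), each AI-generated output carries its provenance and is only cleared for business use once a named reviewer has signed it off.

```python
# A minimal sketch of an audit-trail record, assuming a hypothetical review
# workflow: outputs carry their source references and are blocked from business
# use until a named reviewer approves them.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputRecord:
    output_id: str
    model_name: str              # which sanctioned model produced the output
    prompt_summary: str
    source_documents: list[str]  # provenance: the validated data the output drew on
    generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None
    review_notes: str = ""

    def approve(self, reviewer: str, notes: str = "") -> None:
        """Record the human sign-off required before business use."""
        self.reviewed_by = reviewer
        self.review_notes = notes

    @property
    def cleared_for_business_use(self) -> bool:
        return self.reviewed_by is not None
```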
From detection to discipline
Insurers are increasingly recognising AI governance as part of enterprise risk management.
This shift needs to be mirrored in operational practice.
Organisations implementing AI at scale should consider designating a chief AI officer, or ensuring the CISO and/or CTO’s remit formally includes AI oversight, backed by budget and reporting lines.
Whatever the route, if this isn’t on the agenda at your next board meeting, it should be.
AI-related incidents are now part of insurers’ own cyber exposure profiles, affecting both underwriting models and regulatory resilience testing.
As Consilium’s Ethan Godlieb observed, existing cyber policy wordings may no longer be sufficient without affirmative AI coverage. Insurers cannot credibly underwrite what they do not themselves control.
Responsible AI use is an extension of digital hygiene. Technical controls, governance frameworks and employee awareness must evolve together. Shadow AI will not be eliminated by prohibition but by visibility and discipline.
The imperative is clear: organisations must define what “authorised AI use” looks like, detect deviations early, and defend data integrity through both policy and tooling.
AI can enhance resilience if managed well – but unmanaged, it multiplies risk faster than any human user ever could.
For re/insurers, the priority now is not simply to acknowledge the threat, but to proactively provide guidance that engineers it out of the shadows.
By Matthew Geyman, managing director, Intersys


