Just as motorists have come to rely on sat-navs to show them the best course to take, insurers depend heavily on catastrophe risk models. Such dependence can lead to disappointment

The advent of sat-nav has been a boon for motorists. No more getting lost, asking strangers for directions or wrestling with oversized maps: one press of a button and you know where you are and where to go.

Reinsurance underwriters experienced a similar epiphany when catastrophe risk models came into their own around 20 years ago. Models presented them not only with a widely accepted indication of potential catastrophe exposures, but also a benchmark against which to compare the risk of different portfolios or regions.

Neil Maidment, chairman of Lloyd’s insurer Beazley’s group underwriting committee, has been involved in the property-catastrophe reinsurance market for roughly as long as the models themselves. He likens the pre-model method of assessing catastrophe exposures to driving using only the rear-view mirror.

“Before Hurricane Andrew [in 1992], the industry would look back at a historic event and try to recalculate what the loss would be if it happened again, which set their parameters of what they thought could happen,” Maidment says. “The models look at a range of outcomes and therefore people have a more reasonable expectation of what an extreme event might cost.”

Despite her recent public criticisms of the efficacy of near-term models, Karen Clark, founder of risk modelling firm AIR Worldwide and now president and chief executive of catastrophe risk management consultancy Karen Clark & Company, adds: “I don’t think you can write a book of property-catastrophe reinsurance without the models.”

Misguided

Models are far from infallible, however. Just as the popular press is littered with tales of how sat-navs have misdirected drivers – sometimes almost to their deaths – insurance journals and industry forums are awash with criticisms of models’ prediction abilities.

“We don’t believe in any of the models. Whatever they call the risk, it is always wrong,” collateralised reinsurance fund CATCo’s chief executive, Tony Belisle, told Global Reinsurance last month when explaining his fund’s decision not to invest in catastrophe bonds.

The dissent was perhaps loudest in the aftermath of North Atlantic hurricanes Katrina, Rita and Wilma in 2005, all of which hit the US Gulf coast in the space of three months. Critics pilloried the commercial vendor risk models for failing to prepare the industry for the extent of the losses, which hardened the global reinsurance market and led to the formation of the class-of-2005 reinsurers. In response, modellers scrambled to update their assumptions.

The impact of catastrophe model updates on (re)insurers’ risk assumptions has retaken centre stage with the release of a US hurricane model update by Risk Management Solutions (RMS) at the end of February. The update was no minor bug fix. RMS said it expected to see wind risk increase for all hurricane states on an industry-wide basis. While warning that the effect would differ by portfolio, the risk modeller said increases in loss results on the market portfolios analysed ranged between 20% and 100%.

Model updates can pull the rug from under users’ feet. While not faulting RMS for wanting to include the latest knowledge, understanding and technology in its US hurricane model, Nicholas Line, chief actuary of Lloyd’s and London market insurer Markel International, says: “If you are the board of a company and you suddenly receive different numbers as a result of a model change, it is very disruptive.”

Such is the expected impact of the recent RMS release that Lloyd’s insurer Novae has been considering setting up a sidecar reinsurer to capitalise on the anticipated additional demand for reinsurance.

More harm than good?

Upheaval caused by model updates is only likely to get worse, according to Line. “With the advent of Solvency II, it is possible it will be even more disruptive because we have to explain what we are doing to the regulators, which links into enterprise risk management and risk appetite,” he says.

Modellers acknowledge that updates can be a challenge for the industry and try to cushion it against the impact. “We have been pre-communicating for over a year about what the drivers of the model change are in terms of more data and more computing power available to us, and over the past three to four months we have been communicating on expected changes and results,” says RMS vice-president of natural catastrophe and portfolio solutions Claire Souch.

This kind of pre-warning can only help so much. The full effects of the RMS model update are still being determined. “We won’t really know the impact until we get it running,” Line says. “That will go for every company in question.”

There is also a question about whether the updates really improve the models. Risk modellers’ announcements proudly list details such as the new technology employed and the amount of claims data analysed, but in some cases updates could do more harm than good.

“Modellers are always using actual events to calibrate their models,” Clark says. “If one or two of those events are relied upon too heavily for a model update, that can skew your results too much in one direction.”

A good example of this, Clark argues, is Hurricane Ike, which slammed into Texas in 2008. Ike was unusual in that its damage was felt further inland than is typical. “Some of the reasons for this are not even related to the storm,” Clark says. “If you calibrated your model to Ike, you would be over-estimating the inland damage for most storms.”

In addition, the fact that models need to be updated calls into question their reliability as a risk benchmark. “Assuming models are reasonably good at estimating losses, if you then have an update that increases the modelled loss output by 100% or 200%, are those numbers good now?” Clark asks.

Modelling firms themselves, perhaps not surprisingly, defend the updates. Souch argues, for example, that every area of assumption-based decision-making is subject to the same types of revisions to take account of new information and processes.

However, modellers acknowledge the criticism levelled at model accuracy in particular, and admit that at least some of the fault for this lies with them. “Models are not perfect,” AIR Worldwide senior vice-president of research and modelling Jayanta Guin says.

“We at AIR take our share of the blame for the shortcomings in models. Models have, in certain cases, shown weaknesses, so part of the criticism is fair.”

Souch adds that she is not surprised criticism still exists in spite of, but also because of, continual improvements.

“It is understandable that the market can see a sudden change or maybe a model prediction miss for an event and not understand immediately why that is,” she says.

Guin cites models’ loss estimates for commercial property as one of their weaknesses in recent years. “That is something we underestimated in the past,” he says.

A further example is water seeping into buildings via windows during high-speed winds: so-called wind-driven rain. “We have been surprised there, and we have taken some measures in our latest release to make sure we are adequately reflecting the risk.” AIR updated its hurricane model in 2010.

User error

However, part of the blame for the disappointment with model performance must also lie with the users. Some reinsurers simply expect the models to do too much. Anyone expecting accurate predictions of losses from a particular event is bound to be disappointed with catastrophe risk models, or indeed any models.

“They don’t give reinsurance companies or any insurance companies precise numbers. They just get companies in the ballpark,” Clark says. “They are very good tools, but they are blunt tools.”

Some believe those relying too heavily on models deserve all they get. “I tend to be unsympathetic towards people who complain,” Markel International’s Line says. “You can’t outsource your understanding of risk. If you have done that, you are always going to be disappointed.”

Reinsurers’ model usage has become much more sophisticated. Many companies now take the output of models from the three main vendors – RMS, AIR and EQECAT – and weight them according to their strengths in a particular region or risk. They will often also combine this with their own assumptions or modelling. Furthermore, few will rely solely on modelled output of any kind to make an underwriting or capital management decision.
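
In concrete terms, such a blend often reduces to a weighted average of the vendors’ loss estimates. The following minimal Python sketch is purely illustrative: the vendor figures, return periods and credibility weights are all invented, and real weightings would reflect each model’s judged strength for the peril and region in question.

```python
# A minimal, hypothetical sketch of multi-vendor model blending.
# All figures below are invented for illustration: real losses would
# come from the vendors' models, and real weights from each model's
# judged strength for the peril and region being priced.

# Modelled losses ($m) at selected return periods, one set per vendor.
vendor_losses = {
    "RMS":    {100: 250.0, 250: 410.0},
    "AIR":    {100: 220.0, 250: 380.0},
    "EQECAT": {100: 270.0, 250: 450.0},
}

# Illustrative credibility weights (must sum to 1) for a single peril.
weights = {"RMS": 0.5, "AIR": 0.3, "EQECAT": 0.2}

def blended_loss(return_period: int) -> float:
    """Weighted average of the vendors' losses at one return period."""
    return sum(weights[vendor] * losses[return_period]
               for vendor, losses in vendor_losses.items())

for rp in (100, 250):
    print(f"1-in-{rp}-year blended loss: ${blended_loss(rp):.0f}m")
```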

“We use catastrophe models in our pre-underwriting analytics, portfolio management and loss estimation processes. However, the volatility of risk means that models can never be wholly relied upon,” says Lloyd’s insurer Kiln Group head of insurance operations Rob Stevenson. “We would not depend solely on the outcomes of any model – they are just one step in our underwriting process.”

But, even when they are using models intelligently, reinsurers can find their efforts confounded. Rating agencies, for example, may place more weight on individual models’ outputs than the reinsurers themselves do, and, given the power of ratings, this can have a direct influence on reinsurers.

“It is a problem that rating agencies such as AM Best are asking companies for point estimates for their one-in-100-year and one-in-250-year probable maximum loss,” Clark says. “These numbers are highly uncertain. It is false precision to require a company to manage their business based on these highly volatile numbers. The most recent updates show how volatile the numbers can be.”
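
Clark’s point about false precision is easy to demonstrate with a toy simulation. In the hypothetical Python sketch below, the 1-in-N-year probable maximum loss is read off as the (1 - 1/N) quantile of a simulated annual-loss distribution; the frequency and severity parameters are invented, and re-running the same assumed risk shows how far a tail point estimate can move on sampling noise alone.

```python
import math
import random

def poisson(lam: float) -> int:
    """Draw a Poisson count via Knuth's method (fine for small lambda)."""
    threshold = math.exp(-lam)
    count, product = 0, 1.0
    while True:
        product *= random.random()
        if product <= threshold:
            return count
        count += 1

def annual_loss() -> float:
    """One simulated year: Poisson event count, lognormal severity ($m)."""
    return sum(random.lognormvariate(3.0, 1.2) for _ in range(poisson(0.8)))

def pml(return_period: int, n_years: int = 10_000) -> float:
    """The 1-in-RP-year PML as the (1 - 1/RP) quantile of annual losses."""
    losses = sorted(annual_loss() for _ in range(n_years))
    return losses[int((1 - 1 / return_period) * n_years) - 1]

# Re-estimating the same assumed "true" risk several times shows how
# much a tail point estimate can move on sampling noise alone.
for trial in range(5):
    print(f"trial {trial}: 1-in-250-year PML = ${pml(250):,.0f}m")
```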

Equally, while savvy reinsurers can adjust for what they perceive as unnecessary changes to models, regulators may be difficult to convince. Though acknowledging that reinsurers should use models to inform decisions rather than make them, Line asserts this is easier said than done.

“Catastrophe models feed into the capital and you may well have told your board and your regulator that you are going to link RMS into your capital model directly,” he says. “If RMS changes the number and your capital goes up, the regulators will expect you to follow suit. You can’t suddenly say: ‘RMS is 10% too heavy. We are going to knock a bit off’.”

There is clearly room for improvement in risk modelling practices, both on the part of the modellers and the users. Modellers argue that reinsurers need to improve the data they feed into the models. “The models can only do so much,” Guin says. “If the data being fed into the models is of inferior quality, the models cannot work magic.”

Knowing the limitations

A key to effective model use is understanding what is excluded. Even with the many advances in computer technology, engineering and data capture made over the 20 years since models started to be taken seriously by the reinsurance industry, there are certain risks that models cannot cope with. For example, some still struggle with business interruption, and few stray into the territory of loss-adjustment costs.
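
One common response, sketched below in hypothetical form, is to load the modelled loss for components the model is known to exclude. The figures are invented for illustration, not drawn from any vendor’s documentation; in practice each loading would be calibrated against claims experience for the non-modelled component concerned.

```python
# A hypothetical sketch of loading a modelled loss for known exclusions.
# The gross loss and loading factors below are invented assumptions.

modelled_gross_loss = 180.0  # $m, straight from the vendor model

loadings = {
    "loss_adjustment_expenses": 0.06,            # assumed 6% of gross loss
    "non_modelled_business_interruption": 0.10,  # assumed 10%
}

adjusted_loss = modelled_gross_loss * (1 + sum(loadings.values()))
print(f"Modelled ${modelled_gross_loss:.0f}m -> "
      f"loaded for exclusions: ${adjusted_loss:.0f}m")
```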

Modellers say they have taken steps to make any omissions or exclusions as clear as possible. “We have upped the ante on that aspect since Hurricane Katrina,” Guin says. “Katrina caused a lot of disappointment among the model users, and part of that disappointment was down to a lack of recognition of what the model covers and does not cover. Since then, for every model we have become more vocal and more explicit in stating what aspects of risk we account for and what we don’t, which could be additional sources of risk.”

But there is more to be done. As well as expressing what is excluded, some would prefer any weaknesses to be flagged up too. “It would be helpful if modellers could say where they have had to make big assumptions because of poor data, for example, and how the numbers would have changed if they made different assumptions,” Line says. “If that happens we can say: ‘You told us that part was uncertain, so we understand that is why the number has moved’.”

Others require more customisation. “Model vendors could improve their offering by providing greater flexibility in their software,” Kiln’s Stevenson says. “This would enable companies to manipulate the results generated by the system. Doing so could greatly enhance efficiency within the industry.”

Models may be imperfect, and work may still be needed to get the most from them, but they are clearly better than the old rear-view mirror technique of 20 years ago. Nonetheless, like the older techniques, they can be dangerous when used in isolation.

“Nobody drives staring at a sat-nav because they would hit something,” Line says. “If you are purely running your business based on a model, whether it is a cat model or a capital model, you will come unstuck.” GR