Users think catastrophe models are good value for money and increasingly important to their business. The challenge, it seems, is to feed them better data – and to use their output more effectively. Peter Joy reports
In June, Global Reinsurance ran an online survey on catastrophe risk models which attracted responses and commentary from seasoned professionals around the globe – 56% of them from reinsurance or insurance companies, the remainder split evenly between broking and support services. Respondents raised concerns about the data and assumptions used in modelling. But the biggest headache, it seems, is how to interpret the information the models put out – and then, how to integrate that into underwriting decisions.
In the wake of Hurricane Katrina, hard-hit sections of the insurance industry vented their wrath on the models. Nearly three-quarters of respondents said models had been held “heavily” or “moderately” responsible for the scale of losses. Today, however, few feel the models really deserved much blame at all. “The argument that cat models are somehow to blame for unexpected exposure is facetious,” said one New York-based reinsurance CEO. “The carriers that had the worst experience with cat models during Katrina were the ones who did the least to understand their exposures prior to Katrina.”
Paul VanderMarck, executive vice president of Risk Management Solutions (RMS), remembers models being “the scapegoat for the first 90 days”. But then the conversation “quickly grew more constructive”. Today, the shocks of 2004-2005 seem only to have enhanced the value insurers place on modelling: 39% of our respondents say they now see the models as having a lot more value than before (see figure 1), while 48% say their opinion of the value has remained about the same. Only 9% now see them as having less value. On overall value for money, 30% rate their models “good” and 10% “very good”, while the remainder rate them at least “acceptable” value. All agreed the models had improved since Katrina – 32% citing “considerable”, 42% “moderate” and 26% “slight” improvement.
In what ways, exactly, have models improved? Katrina’s storm surge, inland flood, looting and repair cost inflation provided a master-class on how urban losses can accumulate and be amplified. Today’s models better reflect that risk.
But the models’ weakest link, says VanderMarck, is the quality of the data fed into them: “If you feed garbage in, you’ll get garbage out”. Our survey respondents agreed – one, for instance, offers a shopping list of rickety data that includes “… undervalued exposures, miscoding of policy structures, incorrect assumptions for occupancy and construction, incorrect inputting of multi-location policies, omission of perils such as tornado and hail, outdated exposures and mis-estimation of portfolio growth.”
The industry has been working on sharpening its property location data since Hurricane Andrew – although information on precisely what’s being insured remains less than reliable. “Exposure data is still the largest source of error and uncertainty,” says one London broker. Modelling firms have been increasing the “granularity” of their mapping, however, and have been combing through past claims information for useful data.
The most switched-on insurers are taking a disciplined, proactive approach to data, says VanderMarck, incentivising front line staff to capture accurate information. “The ten percent who rated their models ‘very good value’ will be the ones that are doing what’s required to really maximise that value,” he says. “The rest are somewhere on the journey.”
Sound input data, of course, is just the first stride. The next is to build a genuine understanding of how a model operates and the assumptions it makes, rather than treating it as an oracular black box. “Uncertainty is an inherent part of modelling risk,” says VanderMarck. “For users, the fundamental challenge is to understand that uncertainty as best as possible.”
There’s a temptation, it seems, to take basic output data and then go for a rule-of-thumb approach. As one reinsurance treaty technician laments: “Some insurers and reinsurers seem content just to look at an RMS 1-in-200 result and purchase and rate on this number.”
A catastrophe model is a test, not an answer. “Many dismissals of the inaccuracy of catastrophe models are based on illogical deductions and the inability to move on from preconceived risk fictions,” said one respondent. “From the broking perspective, many end-users of catastrophe modelling output still focus on a point on the curve, rather than a loss distribution, and have not grasped the impact of non-modelled loss contributors.” When it comes to true comprehension of models, it seems there is still room for improvement.
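The distinction between “a point on the curve” and a full loss distribution can be sketched in a few lines. The figures below are entirely synthetic stand-ins for a model’s annual-loss output, not any vendor’s data; the point is that the 1-in-200 number is a single quantile, while the tail beyond it (here summarised as a tail mean, often called TVaR) carries information that a point estimate hides.

```python
import numpy as np

# Illustrative only: synthetic annual-loss samples stand in for a real
# catastrophe model's output. Parameters are arbitrary, not market figures.
rng = np.random.default_rng(42)
annual_losses = rng.lognormal(mean=15.0, sigma=1.2, size=100_000)

def point_on_curve(losses, return_period):
    """Loss at a single return period - e.g. the '1-in-200 result'."""
    exceed_prob = 1.0 / return_period
    return float(np.quantile(losses, 1.0 - exceed_prob))

def tail_mean(losses, return_period):
    """Average loss beyond that point - a distribution-aware view of the tail."""
    threshold = point_on_curve(losses, return_period)
    return float(losses[losses >= threshold].mean())

pml_200 = point_on_curve(annual_losses, 200)   # the single number some buyers rate on
tvar_200 = tail_mean(annual_losses, 200)       # what the tail beyond it actually averages
# tvar_200 always exceeds pml_200: the shape of the curve past the
# 1-in-200 point is exactly what a point-on-the-curve view discards.
```

Two portfolios can share the same 1-in-200 loss yet have very different tail means, which is why buying and rating on the point alone can mislead.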
Some companies are using two or even three independent models in the belief that averaging their outputs will make things more reliable. VanderMarck says they are missing the point. “That isn’t understanding uncertainty,” he says. “It would be far better to focus that time and resource on fully grasping a single model.”
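A toy calculation, with purely hypothetical figures, shows what the averaging habit throws away: blending collapses two divergent views into one number, while the disagreement between them is itself a measure of model uncertainty.

```python
# Hypothetical 1-in-200 loss estimates ($m) from two vendors' models.
# The numbers are invented for illustration only.
model_a_pml = 410.0
model_b_pml = 620.0

blended = (model_a_pml + model_b_pml) / 2          # one tidy number: 515.0
spread = abs(model_a_pml - model_b_pml) / blended  # ~41% divergence

# The blend hides the fact that the models disagree by roughly 40% -
# arguably the most important thing the second model told you.
```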
Armed with robust output that they understand, companies then have to apply it to their business decisions. The real differentiator is how effectively that output is used.
Most of our respondents seemed to share that vision. Eighty-three percent say they see modelling as a core business competence. “Getting it right gives you a huge market advantage both in investor confidence and compliance,” says one London-based specialist. Models are seen as increasingly influential drivers of pricing too: 73% say they are now a “significant” driver and 5% say they are driving pricing “completely” (see figure 2).
VanderMarck sees understanding of models expanding from a hard core of specialists into the executive mainstream. In fact, this is becoming essential for many companies, with ratings agencies and regulators increasingly treating insurers’ data and modelling as a core pillar of their enterprise risk management.
Old habits or new paradigm?
Some survey respondents remain unconvinced of fundamental change. “Underwriters have almost entirely forgotten Katrina,” says one Bermuda catastrophe analyst. “Some controls were put in place, but not enough time and resources have been dedicated to fully understanding the true nature of the problem.” Others were less pessimistic, but stressed the importance of continued investment in exposure data capture and modelling.
Although models seem to have gained in credibility since Katrina, some cynics suspect that the modelling firms’ upward revisions of hurricane risk and modelled losses have been motivated by a desire to armour-plate their backsides. “Post-Katrina, the model companies seem paranoid about under-estimating losses,” says one reinsurance treaty technician – demonstrated, he thought, by the “appalling over-statement of Kyrill losses in the UK.” Any further model revisions, he thinks, will produce higher numbers, pushing “cedants to purchase more reinsurance… to cover even more unlikely levels of loss.”
Don’t shoot the messenger, counters VanderMarck. “We’re trying to keep the debate constructive, but there is a strong scientific consensus that the frequency of Atlantic hurricanes has increased. We cannot ignore that science.”
And for all their limitations in design and use, models do seem to be helping to steady the insurance and reinsurance industry’s financial performance. “Compare the periods 1990-1995, with almost no modelling, and 2000-2005 with modelling a pre-requisite to a transaction,” says one London broker. “Business failures in the second period were dramatically lower.” VanderMarck also points to the distance the industry has travelled: “This is a vastly more technical industry than ten or 15 years ago and that’s a continuing trend.”
The insurance industry seems to agree. “Models will always be wrong,” said a survey respondent. “But they give us a starting point to consider, and from which to manage our exposure to catastrophe events. From a capital management perspective, they are essential.”
Peter Joy is head of research at Global Reinsurance.