Now in its third year, the GR survey reveals why readers trust catastrophe modelling more than they did 12 months ago.

Catastrophe modelling has come a long way since the late 1980s when it was first applied to underwriting. Each major event, starting with Hurricane Andrew in 1992 and ending with last year’s Hurricane Ike, improves the industry’s understanding of loss events. Advances in science, loss data and engineering have helped to increase the sophistication of modelling tools, but uncertainty remains a fact of modelling.

The discovery that models were imperfect came as a surprise to many. The shock of Hurricane Katrina, which caused insured losses of $45bn when it flooded New Orleans in 2005, led many to blame the models. Feelings remained raw when we conducted our first survey in 2007, with nearly three-quarters of respondents saying the models had been “heavily” or “moderately” responsible for the scale of the losses from Katrina.

Paradigm shift

Two years on, there has been a marked shift in perception. We asked respondents what they expect of catastrophe models when it comes to accurately estimating the insured loss from a large catastrophe. Seventy-five per cent believe that “models should be used as an input to loss estimates but other issues (like bad data, unusual event characteristics, non-modelled perils) mean modelled estimates sometimes vary significantly from actual losses”.

For Paul VanderMarck, executive vice president of Risk Management Solutions, this represents progress. “It’s not just that people’s expectations have evolved, but the sophistication of the market’s use and understanding has evolved to the point that there is recognition of inherent uncertainty.”

However, he remains concerned about the 12% that said the models should provide an accurate loss estimate (to within 10% of the final loss number) within 48 hours of an event. “That suggests a meaningful minority have a completely unrealistic expectation of the models in an immediate post-event environment,” he says.

The large footprint of Hurricane Ike, which made landfall at Galveston, Texas, on 13 September last year, is testament to the uncertainty in the characteristics of a large event. Many were surprised that a Category 2 hurricane caused more than $10bn in insured damage. This was partly down to Ike’s characteristics: it was an unusually large storm and was slow to dissipate, and, as it made landfall and moved inland, its track took it through parts of Texas with extremely high insured values.

While a minority still expects the models to get it right first time, most are realistic in their expectations of cat model performance. We asked if catastrophe modellers were doing enough to educate users on the inherent uncertainties in modelling. Fifty-five per cent of respondents thought they were providing some information but could do more. “I believe cat modelling companies … understate the uncertainties in the results of their models,” replied one. “Their job is difficult. By the same token, most purchasers of cat models are naive about the uncertainty and have very unrealistic expectations about the accuracy of a good model.”

VanderMarck says the feedback is consistent with the broader market feedback of the past couple of years. “We believe there’s a lot more we can do – and are doing – to give our clients more transparency into the models. For example, we implemented sensitivity tests in our new North America earthquake model to enable clients to quantify how uncertainty in science propagates through to uncertainty in their modelled results.

“We’re at the early stages of a paradigm shift in catastrophe modelling. Increasingly the focus is turning from the ability to quantify the risk to the ability to understand more robustly the uncertainty in that quantification of risk, and to use that explicitly in making decisions.”

Improving data

One source of uncertainty is the quality of the data going into the models, which varies considerably between re/insurers. Eighty-four per cent of respondents thought a company should be measured by the quality of its data. “Those companies that inspect their risks frequently, capture the latest risk data, and update their valuations with current market conditions, have the most accurate model output by far,” said one respondent.

This is consistent with a wider move in the industry to improve the quality of exposure data companies are using. Earlier this year, AM Best announced that it would look at companies’ data quality more closely to assess their risk management processes. Such a factor could affect future ratings.

But not all of those surveyed were convinced that better data was the key to more accurate results. One respondent argued that reinsurers have a tougher time improving quality because they are reliant on information provided by primary insurers.

Another questioned a direct correlation between improved data and better model output. “In some cases improving input data may not lead to model results which are significantly more accurate … and it may not be convenient for modelling companies to say as much.”

This is something VanderMarck disputes. “We’ve seen that improving data quality can have as much as a 30% impact on portfolio loss estimates for our clients – and can have a much bigger impact on individual accounts. We have a balanced view.

“Data quality is not the only source of uncertainty – there’s substantial uncertainty in the science and engineering of the models themselves. But data quality, as a key input to the models, is another material source of uncertainty and, most importantly, it is the one source that is controllable and can be reduced.”

Constantly improving data and scientific understanding give catastrophe modellers the chance to update their models. But respondents were torn between the prospect of having more frequent updates (and more accurate model output) and dealing with the upheaval of constant new releases.

“I realise that the cost of the models will keep rising if there are many updates but, as long as the alterations to the models – and therefore the results – can be easily identified, there should be value in having the most correct results possible,” said one respondent. Fifty-six per cent said they wanted the models to reflect the latest science, even where this meant frequent changes in results, while 31% would prefer major changes only every three to four years.

“We bear a responsibility to determine when we really think the science has improved to an extent that it gives a better answer, versus when it’s just changed but isn’t an improvement to the overall answer,” says VanderMarck. He sees a correlation between the growing number of respondents who say catastrophe models have a “significant” or “moderate” bearing on their underwriting decisions – 83%, compared with 71% in the 2008 survey – and the broad appreciation among those surveyed that models need to be improved and updated.