New quake models could lead to downward pressure on rates. But how shaky is the science on which the new models are based, asks Tim Evershed.

The promise of up to 35% reductions in loss estimates that accompanied new earthquake models this year has set alarm bells ringing, with some experts concerned about the ongoing unpredictability facing underwriters.

Both RMS and AIR Worldwide have updated US quake models this year, incorporating the 2008 US Geological Survey (USGS) national seismic hazard maps. According to RMS, the new models are likely to lead to a reduction of 10%-25% in US earthquake insured loss estimates for the average insurer across all lines of business, with more modest changes in loss estimates for commercial business lines and larger reductions for residential lines.

The most significant changes in North America will be in California, where modelled loss estimates will reduce by approximately 5%-15% for most commercial portfolios and 25%-35% for the majority of residential portfolios, RMS says.
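As a rough illustration of what those percentage ranges mean in practice, the sketch below applies the quoted reductions to hypothetical modelled loss figures; the portfolio names and dollar amounts are invented for illustration only, not taken from any real book of business.

```python
# Illustrative only: apply the quoted RMS reduction ranges to hypothetical
# modelled loss estimates (figures invented, not from any real portfolio).

def reduced_range(current_estimate, low_pct, high_pct):
    """Return the (low, high) loss estimate after a percentage reduction."""
    return current_estimate * (1 - high_pct), current_estimate * (1 - low_pct)

portfolios = {
    # portfolio name: (current modelled loss in $m, reduction low, reduction high)
    "California commercial":  (500.0, 0.05, 0.15),
    "California residential": (300.0, 0.25, 0.35),
}

for name, (loss, low, high) in portfolios.items():
    new_low, new_high = reduced_range(loss, low, high)
    print(f"{name}: ${loss:.0f}m -> ${new_low:.0f}m to ${new_high:.0f}m")
```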

Catastrophe models and the reinsurance industry have a chequered history, with heavy criticism of the models following losses such as Hurricane Katrina and Windstorm Kyrill. California quake is globally the second-largest pool of natural catastrophe aggregate, following Florida windstorm. Even a small drop in loss estimates could mean significant reductions in rates and the capital necessary to support portfolios.

“You would not expect to see the full impact of the changes immediately, but I think reinsurers will be reducing their prices for California quake, although the quantum is yet to be decided. Primary insurers should expect to free up some capital,” Guy Carpenter’s managing director, Dickie Whittaker, says.

Peculiarities

There remain difficulties and uncertainties in the models, however, and therefore also in the loss estimates. “There has never been a large magnitude quake where the models have over-estimated the loss. The three vendor model loss estimates for the Northridge quake turned out to be a tenth of the actual loss,” Karen Clark, president of Karen Clark & Co, warns.

Dr Adrian Chandler, a visiting professor at the Benfield Hazard Research Centre and an expert in earthquake engineering, adds: “On average, the losses might go down but there will be peculiarities. I think they have underestimated both that and the spread of the losses. Insurers need to be aware of that.” So is the industry about to repeat the mistakes it made with its over-reliance on windstorm models for quake risk?

The key change in the new models is the incorporation of work on how ground shaking decreases with distance from an earthquake’s fault rupture, known as ‘ground motion attenuation’, drawing on the Next Generation of Ground-Motion Attenuation Models (NGA) project.

“The way in which the ground shakes changes from the site of the quake to the places where it hits; it spreads like ripples on a pond but it is not uniform because of the different ground types, rocks and other topography,” Chandler explains. And although the data used by the USGS is the latest available, it is still very much a work in progress.
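Attenuation is typically expressed as an empirical relation between ground shaking, magnitude, distance and site conditions. The sketch below uses a deliberately simplified, made-up functional form (it is not one of the actual NGA relationships) purely to show the shape of the idea Chandler describes: predicted shaking decays with distance from the rupture and varies with the local ground.

```python
import math

# A deliberately simplified, illustrative attenuation relation - NOT one of the
# actual NGA ground-motion models. It only illustrates the general idea that
# predicted shaking falls off with distance and depends on site conditions.

def toy_peak_ground_acceleration(magnitude, distance_km, soil_factor=1.0):
    """Rough toy estimate of peak ground acceleration (in g)."""
    # Shaking grows with magnitude, decays roughly with the log of distance,
    # and is amplified or damped by local ground conditions (soil_factor).
    ln_pga = -2.0 + 0.6 * magnitude - 1.2 * math.log(distance_km + 10.0)
    return math.exp(ln_pga) * soil_factor

for distance in (5, 20, 50, 100):
    pga = toy_peak_ground_acceleration(7.0, distance, soil_factor=1.3)
    print(f"{distance} km: ~{pga:.3f} g")
```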

How much data?

Clark says that a large degree of uncertainty remains in the models as “we have so little data on quakes”, particularly for the New Madrid earthquake zone, which is the second most important in the US.

“After 2004 and 2005, the wind models changed based on some pretty hard evidence, but we have not had that with quakes. So these models are changing on the lab data,” says Rob Stevenson, group head of exposure management at Kiln, which is currently testing the new RMS model.

“There has been a tremendous number of dollars spent on research by the USGS and others. What we have done is to embody all this research,” Dr Jayanta Guin, AIR Worldwide senior vice-president of research and modelling, says. “Our view on the risk associated with big quakes has not changed. However, the probabilistic view of risk has changed due to the developments in NGA and engineering.”

“Although the data analysed behind these new models is extensive, between 24% and 40% – depending on the model used – comes from the Chi Chi event. This, combined with sparse data from extreme events, means we have to understand the uncertainty this causes before fully appreciating the true impact,” Whittaker says.

RMS chief products officer Paul VanderMarck comments: “There is real data behind that science from around the world in earthquake zones like Turkey, Taiwan and Alaska. There has been a fourteen-fold increase in the amount of ground measurements obtained from quakes over the last few years.”

But he adds: “There is still some uncertainty in the models. The changes are principally driven by science, not by events.”

At present, the rate of earthquake insurance take-up among California homeowners is 12%, so there would be a significant gulf between insured and economic losses. A magnitude-seven tremor hitting the Hayward fault today would be expected to produce economic losses of around $230bn, of which 13%-16% might be insured.
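The gap implied by those figures is easy to make concrete. The calculation below simply applies the quoted insured shares to the quoted economic loss; it is a back-of-the-envelope check, not model output.

```python
# Back-of-the-envelope check of the figures quoted above: a ~$230bn economic
# loss from a magnitude-seven Hayward fault event, of which 13%-16% is insured.

economic_loss_bn = 230.0
insured_share_low, insured_share_high = 0.13, 0.16

insured_low = economic_loss_bn * insured_share_low
insured_high = economic_loss_bn * insured_share_high
uninsured_gap = economic_loss_bn - insured_high

print(f"Insured losses: ${insured_low:.0f}bn to ${insured_high:.0f}bn")
print(f"Uninsured gap:  at least ${uninsured_gap:.0f}bn")
```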

But the take-up of earthquake insurance could rise, as Fannie Mae and Freddie Mac are being forced to scale back their share of mortgages; private capacity is filling the breach and insisting on increased protection of the collateral for home loans, which is driving up demand for insurance. So it is crucial that reinsurers use their new models effectively.

“The trouble is that they don’t allow insurers to take control, but then again do the insurers have the necessary expertise to do that?” asks Chandler. “The modelling teams might be made up of graduates but they do not have engineering qualifications and the models are based on engineering science.”

Clark says: “There are a lot of talented people around the models, but the trouble is they are producing lots of numbers and may not be digging in and really scrutinising those numbers. They have the right resources but they need to refocus them. The real danger is using point estimates.”

So how should a reinsurer test and implement the models?

“At Kiln, we would do about three months of running both models together to understand the differences. We need to dig in and make sure we are comfortable with the changes and understand why they have changed,” Stevenson says. “We are not a model-driven underwriter, but the model certainly plays its part in the pre-underwriting analysis, capital management and capital allocation of the portfolio.”

VanderMarck says: “It is an important juncture for our clients to understand their portfolios and the uncertainties within them. Poor data quality can lead to an increase in losses, so we have told our clients to really scrub their data.” Non-modelled losses can also cause a significant increase to loss estimates, he believes.
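Kiln’s approach of running old and new models in parallel lends itself to a simple side-by-side reconciliation. The sketch below is hypothetical; the portfolio segments and loss figures are invented, and it only shows the kind of comparison an exposure-management team might tabulate before adopting a new model version.

```python
# Hypothetical sketch of a parallel-run comparison between an old and a new
# model version: segment names and loss figures are invented for illustration.

old_model = {"CA residential": 320.0, "CA commercial": 510.0, "New Madrid": 140.0}
new_model = {"CA residential": 215.0, "CA commercial": 460.0, "New Madrid": 150.0}

print(f"{'Segment':<16}{'Old ($m)':>10}{'New ($m)':>10}{'Change':>9}")
for segment in old_model:
    old, new = old_model[segment], new_model[segment]
    change = (new - old) / old
    print(f"{segment:<16}{old:>10.0f}{new:>10.0f}{change:>8.0%}")
```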

According to Chandler, it is crucial that modelling teams get their four primary modifiers correct. These are construction class by material, age of buildings, height and occupancy (residential, commercial and so on). But he adds that some secondary modifiers, such as ‘soft storeys’, are as important as the primaries. A soft storey is a level in a building, usually the ground or first floor, where the quake resistance is lower than the rest of the building, for example glass shop fronts, large entrances and car parking.

“In some cases, having a soft storey can make a 10% difference to a loss. So putting it in as a secondary factor is open to question when it could in fact be a major factor,” Chandler says.
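One way to picture how primary and secondary modifiers interact is as multiplicative adjustments to a baseline vulnerability. The sketch below is a hypothetical illustration with invented factor values; it only shows how a soft-storey adjustment on the scale Chandler cites can rival the primary modifiers in its effect on the result.

```python
# Hypothetical illustration: modifiers applied multiplicatively to a baseline
# mean damage ratio. All factor values are invented for illustration only.

baseline_damage_ratio = 0.10   # baseline expected damage as a share of value

primary_modifiers = {
    "construction class (wood frame)": 0.85,
    "age of building (pre-1980)":      1.15,
    "height (low-rise)":               0.95,
    "occupancy (residential)":         1.00,
}

secondary_modifiers = {
    "soft storey present": 1.10,   # roughly the 10% effect cited by Chandler
}

damage_ratio = baseline_damage_ratio
for factor in list(primary_modifiers.values()) + list(secondary_modifiers.values()):
    damage_ratio *= factor

print(f"Adjusted mean damage ratio: {damage_ratio:.3f}")
```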

“Of course, every company must make its own decisions,” Guin concludes. “There will always be uncertainty and it is up to them to decide their risk appetite.”

Tim Evershed is a freelance journalist