Could the industry's catastrophe models better predict Katrina today? After last year's wake-up call, Helen Yates assesses the recent changes made to improve the models' ability to analyse hurricane exposure.

After last year's testing hurricane season, the industry is watching and waiting with a growing sense of unease to see what this year will have in store, which at the time of writing is still a big unknown (although it must be pointed out that major, industry-changing hurricanes do have an uncanny knack of making landfall just after press deadline). In the North Atlantic, hurricanes occur during the tropical cyclone season, which runs from 1 June to 30 November, and meteorologists can provide a fairly accurate forecast of how active the season will be (this year Tropical Storm Risk is predicting a 74% probability of an above-average hurricane season with four tropical storm strikes on the US, of which two will be hurricanes). But predicting the exact frequency and severity of hurricanes is still largely a guessing game. Moreover, anticipating if and where these hurricanes will make landfall is next to impossible.

It's just under a year since Hurricane Katrina devastated Louisiana and the industry is still suffering the consequences of its exposure to the most active Atlantic hurricane season on record. Losses are still being revised and downgrades are still on the cards. While Katrina dominated the headlines, Wilma was briefly the most intense hurricane ever observed in the Atlantic, and Rita, which also became a category 5 storm, caused extensive damage to offshore oil platforms in the Gulf of Mexico.

It is their ability to measure the risks inherent in natural hazards that keeps the catastrophe modellers in business, but they've had to fight their corner these past few months. Post-Katrina there were huge discrepancies between the loss projections provided by the catastrophe models. They had predicted insured losses of as little as $10bn in the event of a powerful hurricane hitting the city of New Orleans. In actual fact, Katrina cost over $40bn (according to ISO's Property Claim Services), far exceeding most insurers' and reinsurers' probable maximum loss estimates.

Any modelling expert will tell you that there is no such thing as a typical hurricane event. Unfortunately, a typical event is just the thing the catastrophe models are designed to predict. A unique event, on the other hand, catches everyone off guard and, as in Katrina's case, raises questions about the models' reliability. The finger of blame settled briefly on the models, but this "quickly turned into a much more productive discussion," insists Paul VanderMarck, executive vice president of products at Risk Management Solutions (RMS). This productive discussion and the lessons learnt have enabled the catastrophe modellers, including RMS, AIR and EQECAT, to fine-tune their offerings ahead of this year's hurricane season.

To recap, when Katrina barrelled down on the coast of Louisiana on 29 August 2005, what the models didn't predict was the storm surge (which happens when strong winds cause coastal water to rise above normal levels), the breaching of the levee system - now deemed inadequate - protecting New Orleans (80% of which lies below sea level) and the ensuing flood. Nor did they predict the en masse evacuation of the city, the looting and general mayhem which followed, and the spiralling costs that went with each stage of this major catastrophe.

As the waters subsided and the claims came rolling in, loss estimations were revised and revised again. It quickly became apparent that the models had been off the mark in putting a figure on the world's most expensive natural catastrophe. The general feeling was that there were important lessons to be learnt. Fast-forward 11 months and the lessons from Katrina can be witnessed everywhere, from the rating agencies' higher capital adequacy requirements and the recalibration of the catastrophe models to the reduced hurricane exposure and increased diversification of many reinsurers' portfolios. Hurricane Katrina may not have been the only learning opportunity of the last couple of hurricane seasons, but it was certainly a catalyst for change on a massive scale. "Katrina was the wake-up call and maybe Rita and Wilma were the snooze alarm," says Moody's senior analyst Pano Karambelas.

A brief history

Catastrophe modelling is fairly new to the world of insurance and reinsurance. While it was introduced in the 1980s it didn't receive much attention until after Hurricane Andrew hit southern Florida in August 1992. In the aftermath of Andrew, which cost $15.5bn in insured losses, there were seven insolvencies in Florida. Insurers and reinsurers recognised that in order to reduce the likelihood of severe losses from hurricanes and other natural hazards they needed tools to help them better estimate and manage their natural catastrophe exposures, and many turned to the models to help them do that. Since then, the modelling business has grown dramatically and the use of models is now widespread throughout the industry.

The models help predict the likelihood of different types of natural hazard within a particular region, as well as the potential losses given the scale of the hazard. From the output of the models, insurers can then map out an exceedance probability (EP) curve, which shows the probability that a certain level of loss will be exceeded over a given period of time. Along with exceedance probability curves, statistics can be produced for average annual loss, aggregate exceedance probability and tail value at risk. "Each of these higher-order sets of statistics requires more time and specialisation to interpret," says Tom Larsen, senior vice president of EQECAT.
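For readers less familiar with these terms, here is a minimal sketch of how such statistics fall out of a set of simulated annual losses. It is illustrative only - the lognormal loss distribution and every figure in it are invented, and it does not reflect any vendor's actual methodology.

```python
# Illustrative only: derive an EP curve, average annual loss (AAL) and tail
# value at risk (TVaR) from simulated annual losses. The loss distribution
# below is invented and stands in for the output of a catastrophe model.
import numpy as np

rng = np.random.default_rng(0)
annual_losses = rng.lognormal(mean=15, sigma=2, size=100_000)  # simulated annual losses in $

def exceedance_probability(losses, threshold):
    """Probability that the annual loss exceeds the given threshold."""
    return float(np.mean(losses > threshold))

def return_period_loss(losses, years):
    """Loss exceeded with annual probability 1/years (e.g. the '100-year loss')."""
    return float(np.quantile(losses, 1 - 1 / years))

def tail_value_at_risk(losses, years):
    """Average of all simulated losses at or beyond the given return-period loss."""
    threshold = return_period_loss(losses, years)
    return float(losses[losses >= threshold].mean())

print(f"Average annual loss: {annual_losses.mean():,.0f}")
print(f"100-year loss:       {return_period_loss(annual_losses, 100):,.0f}")
print(f"TVaR beyond 100yr:   {tail_value_at_risk(annual_losses, 100):,.0f}")
```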

It is important to understand how the models arrive at their predictions, and as with anything statistical the output is only ever as good as the data that gets fed in. "For catastrophe models to provide companies with more detailed and accurate assessments of their catastrophe risk, the exposure data input into the model must be complete and accurate," explains Dr Jayanta Guin, vice president for research and modelling at AIR Worldwide. There have been significant improvements over the last decade but Dr Guin believes there is still a long way to go. Nevertheless, the data provided by hurricanes Charley, Frances, Ivan and Jeanne in 2004 has been invaluable to the modellers. "We do learn things every time a hurricane comes along," says Dr Steve Smith, an atmospheric physicist at ReAdvisory, Carvill's consulting arm. "Think back to 1992 when the first models came out. There's been a quantum leap since then and we're now light years ahead."

Model changes

Recognising the learning opportunity from the storms in 2004, RMS put together a large team to work alongside clients in the aftermath of the four hurricanes, and it has now gathered $13bn of detailed claims data. "There is a lot of learning we've done from the 2004 events, for which the claims data is entirely finalised now," says VanderMarck. One of the things the data provided was information relating to the vulnerability of buildings. It showed that RMS had been overestimating damage to newer buildings and underestimating damage to older buildings. It also revealed that the vulnerability of commercial buildings had been underestimated. "One of the things we've seen now is that the occupancy of a commercial building has a significant bearing on its vulnerability," explains VanderMarck. "What that means is if you're dealing with a hotel rather than an office building, even though they might both be mid-rise reinforced concrete buildings, you should expect the hotel on average to be more vulnerable than the office building."
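The point about occupancy can be pictured with a toy damage function, in which two buildings of the same construction class are assigned different mean damage ratios at the same wind speed. This is a hypothetical sketch; the base curve and modifier values are invented and are not RMS's actual vulnerability functions.

```python
# Hypothetical sketch of an occupancy-adjusted vulnerability function.
# The base curve and modifiers are invented for illustration and are not
# any vendor's actual damage functions.
OCCUPANCY_MODIFIER = {"office": 1.00, "hotel": 1.25, "retail": 1.15}  # assumed values

def mean_damage_ratio(wind_speed_mph: float, occupancy: str) -> float:
    """Fraction of building value expected to be lost at a given wind speed."""
    base = min(1.0, max(0.0, (wind_speed_mph - 80) / 120)) ** 2  # toy base curve
    return min(1.0, base * OCCUPANCY_MODIFIER[occupancy])

# Same mid-rise reinforced concrete structure, different occupancy:
for occupancy in ("office", "hotel"):
    print(occupancy, round(mean_damage_ratio(140, occupancy), 3))
```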

Other lessons from the 2004 and 2005 hurricane seasons have been incorporated into the models. All three catastrophe modellers have looked at the impact of the increased frequency of hurricanes, while issues such as storm surge and flooding, demand surge and wind damage functions have either been introduced or improved. EQECAT has also produced a Gulf of Mexico offshore energy platform model and RMS has enhanced its platform model to help insurers and reinsurers better estimate their potential losses in this region.

One issue on which the modelling experts agree is that we are currently in a period of elevated hurricane activity, and that higher sea surface temperatures are giving rise to storms that are more frequent and more destructive than average. Whatever the cause of this heightened activity, be it global warming or the persistence of a "warm" Atlantic Multi-decadal Oscillation (AMO) - and the debate in the scientific community rages on - this increased activity is now being reflected in the models. The long-term historical average is no longer indicative of the true frequency of hurricanes in the Atlantic, and so near-term models have been developed to better reflect what is happening now with the climate. "What the issue really comes down to is the clear consensus in the scientific community that the activity is elevated and is set to remain elevated for the foreseeable future," explains VanderMarck. "So if we're building a forward-looking model it's appropriate that we reflect that increased frequency in the model."

The near-term models may be looking at a four to five-year horizon but they still provide the same form of output, only the probabilities are higher, or rather the EP curve has been shifted up. "It doesn't mean the model only produces a five-year return period loss. It still produces the full range of losses - 1,000-year, 100-year and so on," clarifies VanderMarck. The near-term models are being offered in conjunction with long-term risk models, which have also been updated to reflect new data, such as from the 2004 storm season. But despite developing a near-term sensitivity model, AIR believes its original offering is the superior choice. "Our standard US hurricane model, based on over 100 years of historical data and over 20 years of research and development, is still the most credible given the uncertainty arising from the sparse data available for projecting the next five years," says Dr Guin.
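The effect of a near-term frequency uplift on the EP curve can be sketched with a simple frequency/severity simulation: the severity distribution is unchanged, but annual event counts are drawn at an elevated rate, which pushes every point on the curve upwards. All parameters below are invented for illustration and do not correspond to any vendor's calibration.

```python
# Illustrative frequency/severity simulation showing how an elevated event
# rate shifts the whole EP curve upwards. Every parameter here is invented.
import numpy as np

rng = np.random.default_rng(1)

def simulate_annual_losses(annual_rate, n_years=50_000):
    """Annual aggregate losses: Poisson event counts, lognormal event severities."""
    counts = rng.poisson(annual_rate, size=n_years)
    return np.array([rng.lognormal(16, 1.5, size=n).sum() for n in counts])

long_term = simulate_annual_losses(annual_rate=1.7)        # long-term historical rate
near_term = simulate_annual_losses(annual_rate=1.7 * 1.3)  # assumed 30% near-term uplift

for label, losses in (("long-term", long_term), ("near-term", near_term)):
    print(f"{label}: 100-year loss ~ {np.quantile(losses, 1 - 1/100):,.0f}, "
          f"1,000-year loss ~ {np.quantile(losses, 1 - 1/1000):,.0f}")
```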

Aggravating loss

But it's not just heightened activity and the increased probability that more hurricanes will make landfall in any given season that needs to be analysed. The risk of several events occurring in quick succession, or aggregate risk, can be as devastating to insurers' and reinsurers' balance sheets as a "mega cat" event like Katrina. Loss amplification, or demand surge - whereby materials and labour are in short supply after a hurricane, causing a steep increase in the cost of repair and reconstruction - was witnessed again in the aftermath of Katrina, particularly with respect to business interruption (due to the disruption caused by the evacuation, damaged infrastructure and the stalling of economic activity). This has been more accurately factored into the models ahead of this season. "What we've learned from the last couple of seasons is that demand surge really is real," explains VanderMarck. "It's also clear that it can be even more severe than we modelled it before. So previously we modelled it capping out around 26% (of the insured loss) but now we think it can get to a level more in the order of 40%."
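As a rough illustration of how a demand surge cap might be applied, the sketch below scales individual losses by an uplift that grows with the size of the regional event and caps out at 40% of the insured loss. The scaling rule, thresholds and figures are assumptions for illustration, not any modeller's actual formula.

```python
# Hypothetical demand-surge adjustment: losses are amplified as the regional
# event loss grows, with the uplift capped at 40%. The rule and thresholds
# are invented for illustration.
def apply_demand_surge(loss, regional_loss, cap=0.40, threshold=10e9, scale=40e9):
    """Return the loss amplified by a demand-surge factor of at most `cap`."""
    if regional_loss <= threshold:
        return loss
    uplift = min(cap, cap * (regional_loss - threshold) / scale)
    return loss * (1 + uplift)

# A $1m claim in a $45bn regional event picks up a 35% uplift under this rule:
print(apply_demand_surge(1_000_000, regional_loss=45e9))  # 1,350,000
```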

More insight into demand surge was provided by Katrina than by any other recent single event. "Katrina's given us a window into those effects like we've never seen before," confirms VanderMarck, suggesting that once all the claims data is finalised, further adjustments may be made to the models. Another big lesson from Katrina was the effect of storm surge and flooding, previously a non-modelled peril and a major reason why initial loss estimates following Katrina were so far off the mark. The effects of water damage are now better accounted for through storm surge/riverine flood models. "The storm surge model has been enhanced with higher resolution elevation data to more accurately estimate the extent of ocean water incursion over land," explains Dr Guin. "The update also better captures peak surge levels resulting from intense storms."

"The fact that a firm's 'really existing' cat risk potentially differs significantly from the perspective offered by cat models was well understood," insists Karambelas. "It's just that perhaps after a few years of a hard market and relatively benign cat experience it's easy to let the models run the process and the gap between best practice and actual practice widens." The models took the brunt of the criticism last year but it is also regularly pointed out that the models' users need to be better aware of their limitations. The modelling agencies agree that better education of clients is needed on this and also on how to get the best use out of the model output. "It is very clear that we and our clients have a shared responsibility and I think in some cases the needles had pointed too far towards clients relying on the models as is more than appropriate," concedes VanderMarck.

One recommendation is that insurers and reinsurers stop presenting output as return periods - such as the 100-year or 250-year return period, which according to Dr Guin "are so often misinterpreted to mean that such a loss won't occur on my watch or in my lifetime" - and instead opt for exceedance probabilities, such as a 1% or 0.4% probability of experiencing a loss. "For example, there is a 1% probability of an insured property loss exceeding $100bn this year," explains Dr Guin. "That may appear small to some, but the probability of experiencing this loss or greater over the next 10 years is almost 20% when the continual growth in the number and value of exposed properties is included." He also adds that the model output should be just one input into reinsurers' decision-making process.
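The arithmetic behind Dr Guin's example is worth spelling out: a 1% annual exceedance probability already compounds to roughly 10% over a decade, and grows further once rising exposure is factored in. The growth rate used below is purely illustrative; only the 1% starting probability comes from the quote.

```python
# Worked example: compounding a 1% annual exceedance probability over ten years,
# with and without an (illustrative) allowance for growth in exposed values.
p_annual = 0.01
p_static = 1 - (1 - p_annual) ** 10
print(f"10-year probability, static exposure:  {p_static:.1%}")   # ~9.6%

growth = 1.15  # assumed annual growth in the exceedance probability as exposure rises
p_no_loss = 1.0
for year in range(10):
    p_no_loss *= 1 - p_annual * growth ** year
print(f"10-year probability, growing exposure: {1 - p_no_loss:.1%}")  # approaching 20%
```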

Buyer beware

The fact is that losses from a unique event like Katrina will always be difficult to predict. "If you only calibrate a model to Katrina it could be very misleading in terms of modelling the general population of hurricane events," warns VanderMarck. The models have come a long way and all the latest recalibrations should improve loss predictions for future landfalling hurricanes, but there is still a substantial degree of uncertainty. "Models are a great tool, they do what they say on the tin. But they are just a tool," reminds ReAdvisory's Dr Smith. It remains to be seen how the rest of the season will pan out, and whether, if there are major hurricane losses, the modelling agencies will again come under fire for perceived shortcomings. "We would absolutely expect that the models would do a better job of representing actual loss experience (this year)," adds VanderMarck. "But there's a really important caveat on that ... and the one thing I can guarantee you is the models will never exactly replicate the loss from any events."

Helen Yates is editor of Global Reinsurance.

A year before the modelling agencies launched their near-term models to account for the current period of increased hurricane activity, Bermudian reinsurer Tokio Millennium was already doing just that. Tokio, which specialises in providing natural peril catastrophe risk coverage, has more reason than other, less cat-exposed reinsurers to heed the advice of its in-house modelling experts. Early in 2005, when its experts were warning of another active hurricane season, the decision was made to increase Tokio's frequency assumptions for major US hurricanes by 50%. "What we did is take the curves from the three vendor models, blend them in the way that we normally do, but we also added an average throughout the EP curves of about a 50% increase in frequency," explains Tatsuhiko Hoshina, the company's director and chief underwriting officer.

Because it had a more extreme EP curve, and therefore much higher risk probabilities than its peers, Tokio's underwriters declined business that might ordinarily have been accepted and cut back on aggregates during the June and July 2005 renewals, especially in the case of Florida and Gulf-exposed policies. While Tokio suffered its first ever bottom-line loss following hurricanes Katrina, Rita and Wilma - a net loss of $50m - its risk management approach undoubtedly softened what could have been a much bigger blow. "Our proactive approach probably distinguishes us from other companies that just rely heavily on the output of the models alone," adds Hoshina.
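A rough sketch of the blending-plus-uplift approach Hoshina describes might look like the following, in which three vendor EP curves are weighted together and the exceedance probabilities are then increased by roughly 50%. The curves, weights and loss thresholds are all invented for illustration; this is not Tokio Millennium's actual implementation.

```python
# Illustrative blending of three vendor EP curves with a ~50% frequency uplift.
# All curves, weights and thresholds are invented; this is not Tokio's model.
import numpy as np

loss_levels = np.array([1e9, 2e9, 5e9, 10e9, 20e9])            # loss thresholds ($)
vendor_curves = {                                               # annual exceedance probabilities
    "model_a": np.array([0.040, 0.020, 0.008, 0.004, 0.001]),
    "model_b": np.array([0.050, 0.025, 0.010, 0.005, 0.002]),
    "model_c": np.array([0.045, 0.022, 0.009, 0.004, 0.001]),
}
weights = {"model_a": 0.4, "model_b": 0.3, "model_c": 0.3}

blended = sum(weights[name] * curve for name, curve in vendor_curves.items())
uplifted = np.minimum(blended * 1.5, 1.0)   # ~50% increase in frequency across the curve

for loss, p_blend, p_up in zip(loss_levels, blended, uplifted):
    print(f"${loss/1e9:>4.0f}bn: blended {p_blend:.3%} -> uplifted {p_up:.3%}")
```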