The events of 2004 exposed a number of shortfalls in the industry's ability to assess its hurricane exposure, reveals Peter Cheesman

Despite efforts to adapt, the insurance and reinsurance industry remains somewhat conservative when it comes to anticipating catastrophic events and the potential losses that follow. Perhaps, with only the past to rely on as a guide to the future, this is not surprising. Hurricane Andrew, the Northridge earthquake and the 9/11 terrorist attacks are all examples of catastrophes that were not anticipated and that, in consequence, have tested and ultimately shaped the technical evolution of the industry.

Hurricanes, with their seasonal frequency, allow objective scrutiny of the socio-economic measures put in place to mitigate their impacts. Many of the current mitigation measures were enacted as a result of the catastrophic consequences of Hurricane Andrew in 1992.

From an analytical perspective, Hurricane Andrew exposed the traditional methods, such as trending historical hurricane loss totals, as inadequate for managing and pricing catastrophe exposure. Before 1992, only a few visionary thinkers were contemplating the use of simulation models or techniques to assess catastrophe risk, but, as is often the case, it took a major event such as Andrew to create an environment in which new techniques such as stochastic catastrophe simulation could be considered and subsequently accepted by the wider insurance industry.
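
As a rough illustration of what such a simulation involves, the sketch below draws a Poisson number of landfalling hurricanes for each simulated year and a lognormal loss for each event, then reads a return-period loss off the resulting distribution. The frequency, severity and output figures are hypothetical placeholders, not values taken from any commercial catastrophe model.

import numpy as np

def simulate_annual_losses(n_years=100_000, event_rate=1.7, mu=21.0, sigma=1.4, seed=1):
    """Simulate total hurricane loss per year: Poisson frequency, lognormal severity."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(event_rate, n_years)  # landfalling events in each simulated year
    return np.array([rng.lognormal(mu, sigma, n).sum() if n else 0.0 for n in counts])

losses = simulate_annual_losses()
# The annual loss exceeded in roughly one year in a hundred under these assumed parameters.
print("1-in-100-year annual loss: ${:.1f}bn".format(np.percentile(losses, 99) / 1e9))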

Catastrophe models provide a more structured and transparent approach to insurance pricing. They are subject to regulatory review by the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM), and without the commission's approval a model cannot be used for pricing in Florida.

The four hurricanes that occurred in the space of six weeks in 2004 caused over 1.5 million claims resulting in insured losses in excess of $22bn.

Despite some reasonably significant events since 1992, the 2004 season was the first real test the insurance industry, with its increased capacity and improved modelling capabilities, had faced since Hurricane Andrew. Although many insurers incurred significant losses, with one reported insolvency (there were 12 in 1992) and some rating downgrades, the general impression was that the measures introduced after Hurricane Andrew helped the industry absorb the hurricane losses and provided a degree of stability in the insurance markets.

However, the events of 2004 did expose a number of shortfalls, particularly in how the industry assessed its exposure to hurricanes. The majority of the insurance products in place were concentrated on the risk associated with a single major storm rather than the multiple mid-sized events that characterised the 2004 season. Products, especially on the reinsurance front, were therefore geared more toward top-end vertical cover.

Because of this focus on a single large loss, rather than the large aggregated losses that were actually experienced, the Florida Hurricane Catastrophe Fund (FHCF) and the majority of reinsurance programmes suffered only limited losses, owing to the high retention levels in place. For the 2005/2006 contract year, the FHCF has responded by introducing a drop-down retention, whereby its $4.5bn per-event retention will apply only to a company's two largest losses; the retention then reduces to $1.5bn for any other hurricane losses.
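
In simplified terms, the drop-down mechanism can be sketched as below. Coverage percentages, co-payments and the fund's aggregate capacity are deliberately ignored, and the sample season figures are invented purely for illustration.

def fhcf_recovery(event_losses, full_retention=4.5e9, drop_down_retention=1.5e9):
    """Recoverable amount per event when the full retention applies only to the
    two largest hurricane losses and the drop-down retention applies to the rest."""
    order = sorted(range(len(event_losses)), key=lambda i: event_losses[i], reverse=True)
    recoveries = [0.0] * len(event_losses)
    for rank, i in enumerate(order):
        retention = full_retention if rank < 2 else drop_down_retention
        recoveries[i] = max(event_losses[i] - retention, 0.0)
    return recoveries

# An invented 2004-style season: four mid-sized losses rather than one giant one.
season = [3.0e9, 2.5e9, 4.0e9, 2.0e9]
print(fhcf_recovery(season))                             # drop-down: the two smaller events recover
print(fhcf_recovery(season, drop_down_retention=4.5e9))  # flat retention: nothing recovers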

Traditional reinsurance programmes with high vertical cover and limited reinstatements leave insurers vulnerable to significant surplus losses in high-frequency periods, as was evident in 2004, because retentions accumulate and the limits in the lower programme layers become exhausted. In future, these programmes need to consider the loss frequencies associated with a series of smaller, yet still significant, individual catastrophic events, as well as the single large losses associated with traditional probable maximum loss analysis.
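
The sketch below illustrates the point with an invented programme: a single excess-of-loss layer with one reinstatement, applied to a sequence of four mid-sized events. All monetary figures are hypothetical; the aim is only to show how the retained loss builds up once the layer's aggregate capacity is spent.

def net_retained(event_losses, retention=0.5e9, layer_limit=1.5e9, reinstatements=1):
    """Total net retained loss after a single excess-of-loss layer with a
    limited number of reinstatements is applied to a sequence of events."""
    remaining_capacity = layer_limit * (1 + reinstatements)  # original limit plus reinstated limit
    retained = 0.0
    for loss in event_losses:
        ceded = min(max(loss - retention, 0.0), layer_limit, remaining_capacity)
        remaining_capacity -= ceded
        retained += loss - ceded
    return retained

# Four mid-sized events: retentions stack and the layer exhausts during the third event.
print("net retained: ${:.1f}bn".format(net_retained([1.5e9, 1.8e9, 1.6e9, 1.4e9]) / 1e9))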

From the catastrophe modelling perspective, the actual events appear to have been well represented in terms of their physical nature and multiplicity, but the loss assumptions associated with a cluster-type scenario were not captured accurately.

With much of the uncertainty in the modelled results pointing towards the vulnerability component, the challenge for the modelling companies will be to incorporate the lessons learned into refining their products. A key consideration will be allowing for frequency of impact, to account for the damage caused to the same properties by storms with overlapping tracks. Both physical damage and post-loss inflation (or demand surge) are affected by multiple impacts. Currently, models presume that damage from one event is independent of damage from another, but the losses of 2004 suggest that, in some instances, the damage could be correlated.
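
A toy comparison of the two assumptions is sketched below. The 25% vulnerability uplift for a previously damaged structure and the 10% demand surge factor are invented values, used only to show how correlated damage could push the combined damage ratio above what independence would imply.

def combined_damage_ratio(d1, d2, vulnerability_uplift=0.0, demand_surge=0.0):
    """Combine two per-storm damage ratios for the same property. With both
    adjustments at zero this reduces to the usual independence assumption."""
    d2_effective = min(d2 * (1 + vulnerability_uplift), 1.0)  # weakened structures fare worse
    combined = d1 + (1 - d1) * d2_effective                   # second storm hits the remaining value
    return min(combined * (1 + demand_surge), 1.0)            # post-loss repair cost inflation

independent = combined_damage_ratio(0.20, 0.30)
correlated = combined_damage_ratio(0.20, 0.30, vulnerability_uplift=0.25, demand_surge=0.10)
print("independent: {:.2f}, correlated: {:.2f}".format(independent, correlated))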

As has been seen in the past, the insurance industry does adapt and refine its solutions when confronted with major catastrophes it had previously given little consideration, and the events of 2004 will be no exception. But perhaps the industry should push the boundaries and explore new solutions.

For instance, a largely unexplored concept is to incorporate aspects of the short- to mid-term seasonal hurricane forecasts produced by the likes of Dr Gray's Tropical Meteorology Project in the US and the Tropical Storm Risk Centre in the UK into the analytical risk assessment process and the subsequent reinsurance and insurance solutions. A key lesson is that catastrophe models should not be used to predict exact losses from specific events, but rather as tools to anticipate the probability and severity of future losses, in support of a company's risk management process.
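
Returning to the seasonal forecast idea, one simple way it might feed the analytical process is sketched below: the long-term event frequency used in the stochastic simulation is scaled by a forecast activity factor before the season is resimulated. The 1.3 factor, representing an above-average forecast, and all other parameters are hypothetical rather than figures published by either forecasting group.

import numpy as np

def forecast_adjusted_losses(activity_factor, baseline_rate=1.7,
                             mu=21.0, sigma=1.4, n_years=100_000, seed=7):
    """Simulate annual hurricane losses with the Poisson event frequency
    scaled by a seasonal forecast activity factor (>1 means above average)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(baseline_rate * activity_factor, n_years)
    return np.array([rng.lognormal(mu, sigma, n).sum() if n else 0.0 for n in counts])

long_term = forecast_adjusted_losses(activity_factor=1.0)
above_average = forecast_adjusted_losses(activity_factor=1.3)
print("mean annual loss, long-term view:    ${:.1f}bn".format(long_term.mean() / 1e9))
print("mean annual loss, forecast-adjusted: ${:.1f}bn".format(above_average.mean() / 1e9))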

- Peter Cheesman is head of risk modelling at Glencairn Group.