The combination of a heavy reliance on catastrophe models and uncertainty in model outputs has caused widespread unease regarding their usage in the industry, explains Dr Herve Castella.

Catastrophe models are used pervasively in the insurance, reinsurance and broking industries, where they are an essential tool in the underwriting and risk management process. They help to quantify the enormous accumulation risk that natural catastrophes pose in the face of ever-growing exposure in hazardous areas, given the lack of reliable loss experience from such rare events. At the same time, catastrophe model users are confronted on a daily basis with the widely differing loss estimates that the various models can produce for the same risk portfolio. The last two hurricane seasons also highlighted weaknesses in hurricane catastrophe modelling, which led to significant changes being made to certain models. It is therefore helpful to consider the origin of the divergences between models and to discuss ways of dealing with this inherent uncertainty in both pricing and risk management.

Model uncertainty

Probabilistic catastrophe models quantify the loss to an insured portfolio based on an "event set" of reconstructed historical and simulated events, converting the damaging physical parameters of each event into an expected loss to the portfolio. This modelling exercise is marred by significant uncertainty at every stage of the evaluation. First, the natural hazard itself is often not fully understood, and expert opinions on the subject can diverge substantially. Second, the damage caused by a catastrophe is subject to an inherent randomness that can only be partially reduced by looking at a large number of risks. Third, the financial cost can vary significantly between portfolios or events due to economic factors such as cost inflation, claims adjustment practices or changes in insurance coverage.
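To make the mechanics concrete, here is a minimal sketch in Python of an event-loss calculation over a probabilistic event set. The event names, annual rates, gust speeds and the simple vulnerability curve are illustrative assumptions only, not taken from any particular commercial or proprietary model.

```python
# Minimal sketch of an event-loss calculation over a probabilistic event set.
# Event rates, gust speeds and the vulnerability curve are illustrative
# assumptions only.

from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    name: str
    annual_rate: float    # expected occurrences per year
    gust_speed: float     # peak gust at the portfolio location (m/s)

def damage_ratio(gust_speed: float) -> float:
    """Toy vulnerability curve: fraction of insured value damaged."""
    return min(1.0, max(0.0, (gust_speed - 25.0) / 60.0) ** 2)

def average_annual_loss(event_set: List[Event], insured_value: float) -> float:
    """Sum of occurrence rate times expected event loss over the event set."""
    return sum(e.annual_rate * damage_ratio(e.gust_speed) * insured_value
               for e in event_set)

event_set = [
    Event("moderate storm", annual_rate=0.20, gust_speed=35.0),
    Event("severe storm", annual_rate=0.02, gust_speed=50.0),
]
print(f"Average annual loss: {average_annual_loss(event_set, 1e8):,.0f}")
```

A real model would of course work at the level of individual locations and apply policy conditions, but the basic structure, summing occurrence rate times expected event loss over the event set, is similar.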

Despite the large body of scientific research on natural catastrophes, our understanding of these complex and rare phenomena remains limited. Different experts may therefore hold significantly different opinions on how best to model them. One reason for this is the incomplete nature of the data available on historical events. In Europe, for instance, it is very difficult to find time series of wind gust observations measured in a consistent way over a long period. Most weather stations provide consistent data for the last 20 years at most. This period is clearly too short to analyse severe storms with return periods of 50 years or more. In particular, few of the significant windstorms of the 20th century have reliable wind gust observations. Early windstorm models therefore relied mainly on pressure maps to approximately reconstruct the wind gusts experienced in past events.

Our knowledge of natural catastrophes is also limited by the complexity of the hazards and of the physical models used to describe them. Numerical weather prediction (NWP) models have become the preferred tool for constructing a set of extra-tropical windstorms using all available observations in a consistent way. NWP is a state-of-the-art methodology capable of modelling the three-dimensional physics of the atmosphere and topographical effects, deriving predictions of the state of the atmosphere at very high spatial resolution. These models, however, are very complex and resource intensive to run, and this complexity can make model results highly sensitive to the input data used.

PartnerRe recently collaborated with Meteo Swiss, the Swiss Federal Office of Meteorology and Climatology, to build high-resolution wind fields for its CatFocus™ European Windstorm Model. Figure 1 shows the maximum wind gusts for windstorm Lothar in 1999 over the whole period of the storm at a resolution of seven kilometres. This simulation was performed using a special technique that forces the calculated pressure and wind speeds towards observations whenever they are available.

A comparison between our modelled wind fields and a standard NWP simulation revealed significant differences between the outputs, with a large geographical shift of the storm and less intense wind gusts in the standard simulation. This exercise clearly demonstrated the uncertainty in the description of the windstorm hazard, even when state-of-the-art techniques are used. In the end, any model is only an abstraction of reality.

Each catastrophic event is unique, exhibiting particular patterns of damage and claims. Storm Erwin is a recent example of such particularities. This storm, also known as Gudrun, hit Denmark and southern Sweden in early 2005 and was remarkable for the amount of forest damage it produced in southern Sweden. However, Erwin produced losses in Denmark that, for a given wind speed, were lower than those of earlier storms such as Anatol in 1999. While part of the difference can be attributed to changes in insurance cover and building codes, it remains unclear whether the Erwin losses better reflect the current windstorm risk in Denmark, or whether they simply illustrate the natural variation in loss experience between storms.

How to use the models

How can we best work with these models in both the pricing and risk management process? There are several ways in which to do this, including the explicit modelling of uncertainty, the use of different catastrophe models as expert opinions and avoiding sole reliance on model results.

Models explicitly consider the uncertainty - Most catastrophe models explicitly consider inherent modelling uncertainty, be it primary uncertainty due to the wide spectrum of events that can occur, or secondary uncertainty arising from the random nature of the actual damage in any particular event. Assessing and incorporating an allowance for modelling uncertainty is essential; model users need to know if and how this has been done and to decide whether it is in fact adequate. Secondary uncertainty mainly captures the spread in loss experience for a given event intensity. An allowance is also sometimes made for so-called "epistemic" uncertainty, reflecting the limits of our knowledge of the physical phenomena and the resulting variety of models to describe them. For instance, it is not unusual for catastrophe models to use several alternative relations to describe the attenuation of earthquake ground motion with distance.
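To illustrate how these layers of uncertainty might look in code, the sketch below weights two hypothetical ground-motion attenuation relations in a simple logic tree (epistemic uncertainty) and draws random damage around a mean ratio for a given intensity (secondary uncertainty). The functional forms, coefficients, weights and spread are assumptions for illustration only and are not taken from any published relation or commercial model.

```python
# Illustrative sketch of epistemic and secondary uncertainty; the two
# ground-motion relations, their weights and the damage spread are
# hypothetical assumptions, not taken from any published model.

import random

def attenuation_a(magnitude: float, distance_km: float) -> float:
    """One hypothetical ground-motion relation (arbitrary intensity units)."""
    return 10 ** (0.30 * magnitude - 1.3 * (distance_km + 10.0) ** 0.3)

def attenuation_b(magnitude: float, distance_km: float) -> float:
    """An alternative hypothetical relation, reflecting a different expert view."""
    return 10 ** (0.28 * magnitude - 1.1 * (distance_km + 15.0) ** 0.3)

def weighted_shaking(magnitude: float, distance_km: float,
                     weights=(0.6, 0.4)) -> float:
    """Epistemic uncertainty: weight the alternative relations in a logic tree."""
    return (weights[0] * attenuation_a(magnitude, distance_km)
            + weights[1] * attenuation_b(magnitude, distance_km))

def sample_damage_ratio(mean_ratio: float, cv: float = 0.5) -> float:
    """Secondary uncertainty: random damage around the mean for a given intensity."""
    return min(1.0, max(0.0, random.gauss(mean_ratio, cv * mean_ratio)))

print(f"Weighted shaking intensity: {weighted_shaking(6.5, 20.0):.3f}")
print(f"Sampled damage ratio: {sample_damage_ratio(0.05):.3f}")
```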

Use of multiple models - Another way to deal with divergences in model outputs is multi-modelling, ie the systematic use of several models to price risk. Such an approach treats each model as an expert opinion and, where those opinions diverge, takes that divergence into account. By obtaining outputs from several models, underwriting decisions can be based on a blend of the outputs, informed by an understanding of the respective models' strengths and weaknesses. Weighting model outputs requires a deep knowledge of catastrophe modelling that in-house modelling expertise can greatly enhance. Reliance on a single model limits understanding of potential inaccuracy in the results, creating a form of "blind faith" in the output of that particular model.
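A minimal sketch of such a blend is given below; the model names, loss estimates and credibility weights are hypothetical, and in practice the weights would reflect a judged view of each model's strengths and weaknesses for the peril and region in question.

```python
# Sketch of blending expected-loss estimates from several catastrophe models;
# the model names, estimates and credibility weights are hypothetical.

from typing import Dict

def blend_estimates(estimates: Dict[str, float],
                    weights: Dict[str, float]) -> float:
    """Credibility-weighted blend of per-model expected losses."""
    total_weight = sum(weights[m] for m in estimates)
    return sum(weights[m] * loss for m, loss in estimates.items()) / total_weight

estimates = {"model_a": 12.5e6, "model_b": 21.0e6, "model_c": 16.0e6}
weights = {"model_a": 0.5, "model_b": 0.2, "model_c": 0.3}
print(f"Blended expected loss: {blend_estimates(estimates, weights):,.0f}")
```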

Risk management

Model uncertainty can also be taken into account explicitly in risk management. At PartnerRe, we have extensively studied the impact of rising hurricane frequency on our overall risk profile. Whatever the origin of the increase, be it natural variations in hurricane activity or warming trends in the oceans, it is extremely difficult to quantify precisely the magnitude of the expected future increase in frequency. The resulting uncertainty can be modelled explicitly in the estimation of the annual loss burden from hurricanes by including an allowance for statistical error in the average frequency. Only by considering this uncertainty can we estimate the impact of potentially large frequency increases on the company's overall risk profile and capital needs.
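The sketch below shows one way such an allowance could enter a simple annual loss simulation: the average hurricane frequency is itself drawn from a distribution each simulated year before event counts and severities are sampled. All parameter values are illustrative assumptions rather than PartnerRe figures.

```python
# Sketch of allowing for statistical error in the average hurricane frequency:
# rather than fixing the annual rate, the rate itself is drawn from a
# distribution each simulated year. All parameter values are illustrative.

import math
import random

def sample_poisson(lam: float) -> int:
    """Knuth's method for drawing a Poisson-distributed event count."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_losses(n_years: int = 100_000,
                           mean_rate: float = 0.6,   # average relevant landfalls per year
                           rate_std: float = 0.2,    # statistical error in that average
                           mean_event_loss: float = 50e6,
                           severity_sigma: float = 0.8):
    losses = []
    for _ in range(n_years):
        # Parameter uncertainty: the frequency itself is uncertain.
        rate = max(0.0, random.gauss(mean_rate, rate_std))
        n_events = sample_poisson(rate)
        # Secondary uncertainty: lognormal severity with mean equal to mean_event_loss.
        year_loss = sum(mean_event_loss *
                        random.lognormvariate(-0.5 * severity_sigma ** 2, severity_sigma)
                        for _ in range(n_events))
        losses.append(year_loss)
    return losses

losses = simulate_annual_losses()
print(f"Average annual loss: {sum(losses) / len(losses):,.0f}")
```

Comparing the resulting loss distribution with one based on a fixed frequency indicates how much of the tail is driven by uncertainty in the frequency itself.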

In addition, the industry could reduce its reliance on catastrophe models for risk management by using simpler but more robust methods. Exposure control is an example of an essential risk management activity where this could happen. Diversification of risk, a key business strategy for many insurance and reinsurance companies, can only be applied given effective control of exposure accumulations. Yet exposure control is often based on probable maximum loss (PML) estimates from catastrophe models, which can carry large uncertainty. For many reinsurance treaties, however, modelled PMLs could be replaced by occurrence limits to measure and record exposure, as these guarantee that no single event loss can exceed them. This is how PartnerRe has always accumulated the exposure of its catastrophe reinsurance business. Modelled PMLs would therefore preferably be used only where a treaty has no such limits, in which case an allowance needs to be included for the uncertainty when quantifying the capital requirement for that risk.
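As a simple illustration of accumulating exposure on occurrence limits rather than modelled PMLs, the sketch below sums treaty occurrence limits by accumulation zone; the treaty data and zone names are hypothetical.

```python
# Simple sketch of exposure accumulation using treaty occurrence limits rather
# than modelled PMLs; the treaty data and zone names are hypothetical.

from collections import defaultdict
from typing import Dict, List

treaties = [
    {"id": "T1", "zone": "Denmark windstorm", "occurrence_limit": 40e6},
    {"id": "T2", "zone": "Denmark windstorm", "occurrence_limit": 25e6},
    {"id": "T3", "zone": "Sweden windstorm", "occurrence_limit": 60e6},
]

def accumulate_exposure(treaties: List[dict]) -> Dict[str, float]:
    """Sum occurrence limits by accumulation zone: no single event in a zone
    can cost more than this total, whatever the catastrophe models say."""
    totals: Dict[str, float] = defaultdict(float)
    for treaty in treaties:
        totals[treaty["zone"]] += treaty["occurrence_limit"]
    return dict(totals)

print(accumulate_exposure(treaties))
```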

Underwriting can benefit enormously from the quantification of risk provided by catastrophe models. However, model outputs are just one component of the underwriting process. Accurate risk analysis can only be achieved if underwriters are able to make informed judgements about model outputs, to understand their limits and differences, and to balance this information with underwriting experience and expertise.

- Dr Herve Castella is head of research at PartnerRe.