Jeffrey Radke believes fluctuations in predicted losses reflect differences in how catastrophe models are used
The answer to the question of how catastrophe models performed in 2004 depends heavily on whom you are asking. At PXRE, we found the overall performance to be quite acceptable. However, many of our customers have told us of less satisfactory outcomes from their use of the catastrophe models. We have found that much of the variance between predicted losses and actual losses depends on a broad range of factors, and that a risk modelling system such as PXRE's "Crucible", which carefully considers each of these interrelated factors in detail, is likely to be more accurate in estimating incurred loss.
The most important source of variance between actual and predicted losses lies in how the models are used. For example, many clients tell us that they analyse their business with demand surge, storm surge and secondary uncertainty options turned off. It is therefore hardly surprising that the four Florida hurricanes, which involved significant storm surge and large demand surge, caused significant differences between the predictions of such overly optimistic use of the models and the actual losses.
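The effect described above can be sketched in a few lines. This is a hypothetical illustration only: the loss figures, surge multiplier and function names are invented for the sketch and are not output from any actual catastrophe model.

```python
# Hypothetical illustration: how switching off demand surge and storm
# surge options deflates a modelled loss estimate. The base loss, surge
# component and multiplier below are invented, not real model output.

def modelled_loss(base_loss, demand_surge=True, storm_surge_loss=0.0,
                  demand_surge_factor=1.25):
    """Return an event loss estimate under the chosen model options."""
    loss = base_loss
    if storm_surge_loss:
        loss += storm_surge_loss          # coastal flooding component
    if demand_surge:
        loss *= demand_surge_factor       # post-event price inflation
    return loss

# One hypothetical Florida hurricane with a ground-up wind loss of 100.
optimistic = modelled_loss(100, demand_surge=False, storm_surge_loss=0.0)
realistic = modelled_loss(100, demand_surge=True, storm_surge_loss=15.0)
print(optimistic, realistic)  # 100 vs 143.75
```

With both options disabled, the estimate understates the realistic figure by more than a third in this toy case, which mirrors the gap between such optimistic model runs and the 2004 actuals.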
We have observed that the biggest variance between modelled and actual losses came from a small number of transactions where we believe that systemic errors led to miscoding of material amounts of the exposure information.
Obviously, no model will be accurate if the input is faulty. In consequence, during the underwriting process we focus on a client's potential for making errors in collecting exposure information. Do agents enter the information?
Are the risks simple and homogeneous? Has the client sufficient support staff to collect, check and analyse the information? The answers to these questions are critical in our assessment of the information risk posed by a client.
The storms of 2004 affected several geographical areas more than once.
It is obvious that the vulnerability of a building with a tarpaulin over a hole in its roof is higher than that of an intact structure. After the initial damage had occurred, the vulnerability estimates in the models were therefore rendered inaccurate. It is difficult to criticise the models for this fact - but it is yet another factor that a responsible user must take into account.
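The compounding of damage across successive storms can be sketched as follows. The vulnerability curve, damage ratios and uplift factor are all invented for this sketch; real models use calibrated curves.

```python
# Illustrative only: successive storms hitting already-damaged buildings.
# A tarpaulin-covered roof is more vulnerable than an intact one, so a
# second event of the same intensity destroys a larger fraction of the
# building. The curve and uplift factor here are invented.

def damage_ratio(wind, base_curve, prior_damage=0.0, uplift=0.5):
    """Fraction of building value lost; raised when the structure was
    already damaged by an earlier event in the same season."""
    ratio = base_curve(wind) * (1.0 + uplift * prior_damage)
    return min(ratio, 1.0)

def toy_curve(wind):
    """Toy vulnerability curve: damage ratio grows linearly with wind."""
    return min(0.002 * wind, 1.0)

first = damage_ratio(100, toy_curve)                       # intact building
second = damage_ratio(100, toy_curve, prior_damage=first)  # tarpaulin roof
print(first, second)  # the second ratio exceeds the first at the same wind
```

A model whose vulnerability curves assume intact structures will therefore underestimate losses from the second and third landfalls of a multi-storm season.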
In seeking to make models more accurate, it should be noted that a comparison of actual and modelled losses from an event can give some insight into the uncertainty associated with the severity function, but that our experience in 2004 cannot shed any light on the accuracy of the frequency assumptions underpinning the model. In the final analysis, changes in the assumptions about the frequency of medium to large-scale events will have an impact on expected loss equal to or greater than that of changes in severity assumptions.
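The point about frequency assumptions follows directly from how expected loss is computed over an event set. A minimal sketch, with invented event rates and severities: because frequency and severity enter the expected-loss sum as a product, a uniform change to either moves the result by the same amount.

```python
# Hedged sketch: expected annual loss (EAL) over a discrete event set is
# the sum of each event's annual rate times its loss severity. The rates
# and losses below are invented for illustration only.

events = [
    # (annual frequency, loss severity)
    (0.020, 500.0),   # medium event
    (0.004, 5000.0),  # large event
]

def expected_annual_loss(event_set):
    return sum(rate * severity for rate, severity in event_set)

base = expected_annual_loss(events)

# Scaling every frequency by 1.2 moves EAL exactly as much as scaling
# every severity by 1.2: the two enter the product symmetrically.
freq_up = expected_annual_loss([(1.2 * r, s) for r, s in events])
sev_up = expected_annual_loss([(r, 1.2 * s) for r, s in events])
print(base, freq_up, sev_up)
```

An event comparison can validate the severity column, but only long experience across many seasons can test the frequency column - and, for non-proportional covers with attachment points, errors in frequency can matter even more than this symmetric sketch suggests.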
In summary, responsible users have to understand the assumptions underlying their model and make their own judgement as to which of them require adjustment.
Turning off features that increase the modelled risk is not justified by history. Finally, models are only tools, for each catastrophe is unique.
As a result, the models need to be used in conjunction with prudent contractual exposure limits, so that "modelling error" can never call into question a company's ability to meet its obligations.
- Jeffrey Radke is president and CEO of PXRE.