Ultimately, insurance is about the real world. To act on the basis of any prediction as to future events is necessarily to take on some degree of risk. As professionals in the world of risk, it falls to us to determine which of the more sophisticated intellectual mechanisms for comprehending and anticipating evolving patterns of risk can usefully be applied to the management and funding of risk. One area of thought that is currently making a valuable contribution is extreme value theory (EVT).

Over recent years, AXA Corporate Solutions has been incorporating EVT, alongside a range of other theoretical models and tools, within several areas of its business. In February 2001, we hosted a workshop at our Paris offices on ‘EVT and its applications to risk management’. At this event, a combined team of researchers from RiskLab – a laboratory co-financed by ETHZ (the Swiss Federal Institute of Technology, Zurich) – and reinsurance/finance professionals gave a lucid and accessible theoretical presentation before discussing how EVT can be applied to issues such as pricing catastrophe covers and valuing a CAT bond.

Better measures
In the light of recent loss patterns in the world of insurance, it has become hard to say what is an extreme event and what is normal. In principle, stochastics – the discipline that sits between probability theory and its application to real-world statistics – provides the scientific framework for modelling random events. Over the last few years, however, major natural events have routinely flared up on a scale far beyond conventional predictions of their frequency and/or severity. We have too often found ourselves in the anomalous situation where ‘once in a century’ events occur two or three times in a decade, with higher-than-expected loss patterns. There is an urgent need both for more adequate measures against which to judge ‘extreme’ events and for more accurate modelling of their randomness.

To start with measurement, a widespread quantitative tool for financial risk management is ‘value at risk’ (VaR), which technically defines an upper (respectively lower) threshold for a given loss (respectively profit) at a high level of confidence. In essence, VaR is simply an appealing name for a statistical quantile. Often wrongly equated with the concept of maximum loss, it can be positively dangerous, potentially leading to misplaced confidence. In reality, VaR is only a statistical estimate of a loss threshold, and it largely ignores the risk profile of losses beyond that threshold. This is where EVT adds most value: beyond estimating the threshold itself more accurately than VaR, it provides a methodology for modelling specifically what lies above it – the region we essentially deal with in reinsurance and large risks.
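As an illustration of this distinction – our own sketch rather than part of the original presentation – the following Python example treats a 99% VaR as what it really is, an empirical quantile of a simulated loss sample, and then examines the losses beyond it, which the VaR figure alone says nothing about. The lognormal loss model, the sample size and the 99% level are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative assumption: 10,000 portfolio losses drawn from a
# heavy-ish tailed lognormal distribution (units are arbitrary).
losses = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

# VaR at 99% is nothing more than the 99th percentile of the loss sample.
var_99 = np.quantile(losses, 0.99)

# The exceedances above VaR are exactly what the single quantile ignores.
exceedances = losses[losses > var_99]
expected_shortfall = exceedances.mean()   # average loss *given* VaR is breached
worst_observed = exceedances.max()

print(f"99% VaR (a quantile):          {var_99:8.2f}")
print(f"Mean loss beyond VaR:          {expected_shortfall:8.2f}")
print(f"Largest observed exceedance:   {worst_observed:8.2f}")
```

Two portfolios can share the same VaR yet behave very differently above it; modelling that region is exactly what EVT is for.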

The challenge of modelling these extremes, as EVT seeks to do, is that data is – by definition – thin on the ground. Not only is the statistical record covering the types of event relevant to EVT in an insurance context woefully lacking in depth, but rare events, by their very nature, occur too infrequently to yield the volume of information normally required by statistical analysis, which is geared towards mass observations. If EVT is expected to play a role in generating predictions or estimates of future frequency, it must therefore operate at the very limit of – and even beyond – the available data.

Venturing slightly into the technical aspects of EVT, suppose L1, ..., Ln denote n successive losses to a portfolio (the extreme ones being, by definition, few), with total losses captured by Sn = L1 + ... + Ln. The normal distribution (bell curve) offers a reasonable approximation of the statistical distribution of the mean loss Sn/n. From a risk management perspective, however, the few extreme loss observations are unfortunately drowned in Sn/n: the averaging that makes this approximation generally valid is precisely what makes it misleading in this context. Our key concern is in fact with the largest loss, which has to be modelled by Mn = max(L1, ..., Ln), and for that purpose the bell curve becomes emphatically the wrong tool to apply.
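To make the contrast concrete – this simulation is our own illustration, with an arbitrary lognormal loss model – the sketch below repeatedly draws portfolios of losses and compares the sampling behaviour of the mean loss Sn/n, which settles into a near-symmetric bell shape, with that of the maximum Mn, which remains strongly skewed.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_portfolios, n_losses = 5_000, 250   # illustrative sizes

# Each row is one simulated portfolio of n_losses lognormal losses.
samples = rng.lognormal(mean=0.0, sigma=1.0, size=(n_portfolios, n_losses))

mean_loss = samples.mean(axis=1)   # Sn / n for each portfolio
max_loss = samples.max(axis=1)     # Mn for each portfolio

def skewness(x):
    """Simple moment-based skewness: roughly 0 for a symmetric bell curve."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

print(f"Skewness of Sn/n: {skewness(mean_loss):5.2f}  (near 0: bell curve is a fair fit)")
print(f"Skewness of Mn:   {skewness(max_loss):5.2f}  (strongly positive: bell curve fails)")
```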

Fortunately, within EVT the magic of statistics, combined with some heavy convergence theorems, shows us that Mn = max(L1, ..., Ln) can be modelled by other well-known distribution functions, typically skewed or asymmetrical ones. The principal functions concerned – Fréchet, Weibull and Gumbel – have been studied in detail by statisticians and are now used throughout the insurance and financial markets. Together they form a theory capable of addressing the challenges of estimating rare events and conditional losses over a threshold. Whilst the standard bell curve remains adequate for the central section of the distribution, these models cope far better with calculations concerning the tails of the data: when a high loss occurs, they enable us to estimate its size better.
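The three families above are special cases of the generalised extreme value (GEV) distribution, and fitting it to block maxima is straightforward with standard tools. The sketch below uses scipy.stats.genextreme on simulated annual maxima; the data and the 1-in-100-year level are illustrative assumptions, and note that scipy parameterises the shape as the negative of the ξ used in most EVT texts.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(seed=7)

# Illustrative assumption: 40 "years" of 365 daily losses each; we keep
# only the annual maximum, i.e. the block maxima the limit theorems describe.
daily = rng.lognormal(mean=0.0, sigma=1.2, size=(40, 365))
annual_maxima = daily.max(axis=1)

# Fit the generalised extreme value family, which contains the
# Frechet, Weibull and Gumbel distributions as special cases.
# NB: scipy's shape parameter c is the *negative* of the xi used in
# most EVT texts (c < 0 corresponds to the heavy-tailed Frechet case).
c, loc, scale = genextreme.fit(annual_maxima)

# Estimate of the 1-in-100-year maximum loss under the fitted model.
one_in_100 = genextreme.ppf(0.99, c, loc=loc, scale=scale)

print(f"Fitted shape c = {c:.3f}, location = {loc:.2f}, scale = {scale:.2f}")
print(f"Estimated 1-in-100-year annual maximum: {one_in_100:.2f}")
```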

Of course, as with any statistical exercise, producing worthwhile estimates depends on feeding in reliable data in the first place. The key decision here is the level of loss above which we should begin to fit the data – in other words, which portion of the data should be used in estimating the tail. It would obviously be undesirable for small events to exert a disproportionate influence on the modelling of larger ones. In an insurance context, the most obvious point of reference for this exercise of fitting data above a given threshold (far in the tail) is the actuarial pricing of excess of loss reinsurance treaties.
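This ‘peaks over threshold’ way of working can be sketched as follows; the simulated loss history, the choice of the 95th percentile as threshold and the 1-in-1,000 return level are all illustrative assumptions, and in practice mean-excess plots and sensitivity tests would guide the threshold choice.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(seed=11)

# Illustrative loss history; in practice this would be the portfolio's
# large-claim record.
losses = rng.lognormal(mean=0.0, sigma=1.4, size=20_000)

# Threshold choice: here simply the empirical 95th percentile,
# an assumption made for this sketch.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u

# Fit a generalised Pareto distribution to the excesses over u
# (location pinned at 0 because the excesses start at the threshold).
xi, _, beta = genpareto.fit(excesses, floc=0.0)

# Tail quantile: the loss level exceeded with probability 1 in 1,000,
# combining the exceedance rate above u with the fitted GPD.
p_exceed_u = excesses.size / losses.size
q = 0.001
loss_1_in_1000 = u + genpareto.ppf(1.0 - q / p_exceed_u, xi, loc=0.0, scale=beta)

print(f"Threshold u = {u:.2f}, exceedance rate = {p_exceed_u:.3f}")
print(f"Fitted GPD shape xi = {xi:.3f}, scale beta = {beta:.2f}")
print(f"Estimated 1-in-1,000 loss level: {loss_1_in_1000:.2f}")
```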

Fast developing into a valuable – if tentative – toolkit for the integrated risk manager, the cutting-edge field of EVT offers a new way of looking at extreme events in the most objective way possible, given the available data. The methodologies it proposes are being tested in many different applications, and reapplied and extended to tackle a wide range of requirements. For instance, AXA Corporate Solutions' New Product department, headed by Jean-Luc Gourgeon, uses it to quote critical day structures for temperature-based weather derivatives when the strike (attachment point) is far from the mean.

Pertinent advantage
A key advantage of EVT is that it fits the tail of a distribution using essentially the data far out in that tail. One consequence is that EVT typically produces higher estimates than conventional methodologies, together with wide confidence intervals. From a statistical viewpoint these estimates are clearly superior to those of less rigorous methods, though it obviously remains a question for the underwriter how much weight to give the conservatism such analysis would appear to urge.
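The following sketch, built on the same peaks-over-threshold construction as above and on the same illustrative assumptions, compares a naive empirical high quantile with the EVT-based estimate and bootstraps the latter to show how wide its confidence interval can be.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(seed=3)
losses = rng.lognormal(mean=0.0, sigma=1.4, size=5_000)   # illustrative history

def gpd_tail_quantile(sample, q=0.001, u_level=0.95):
    """1-in-(1/q) loss from a GPD fitted above the u_level empirical threshold."""
    u = np.quantile(sample, u_level)
    excesses = sample[sample > u] - u
    xi, _, beta = genpareto.fit(excesses, floc=0.0)
    p_u = excesses.size / sample.size
    return u + genpareto.ppf(1.0 - q / p_u, xi, loc=0.0, scale=beta)

# Point comparison: naive empirical quantile vs EVT-based tail estimate.
naive = np.quantile(losses, 0.999)
evt = gpd_tail_quantile(losses)

# Bootstrap the EVT estimate to see how wide its confidence interval is.
boot = [gpd_tail_quantile(rng.choice(losses, size=losses.size, replace=True))
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Empirical 99.9% quantile:      {naive:8.2f}")
print(f"EVT (GPD) 1-in-1,000 estimate: {evt:8.2f}")
print(f"Bootstrap 95% interval:        [{lo:.2f}, {hi:.2f}]")
```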

However, as Professor Paul Embrechts of RiskLab points out, “EVT offers no panacea for the difficulties of anticipating the effects of extreme phenomena.” To us, it is certainly not a technique that offers the kind of guidance one could ever rely upon in isolation, whether as a risk manager or as an underwriter. Even calling it the best tool available might be putting the case a little too strongly. We can at least say that EVT is the ‘least bad’ methodology currently at our disposal – particularly since most alternative approaches lack even a basic theoretical justification.

To say that a thing is difficult to achieve does not mean that we should not attempt it. There can be little doubt that in future, EVT will mature into a standard weapon in the repertoire of risk managers in the fields of finance and insurance.

EVT is still an evolving discipline. Its techniques and premises will no doubt be subjected to review and modification as experience affords us further opportunities to test theory against reality. If you are interested in learning more about EVT and the pioneering work being done by Paul Embrechts and his colleagues at ETHZ, their website at www.risklab.ch offers an in-depth appraisal of the subject.