Modeling (noun, verb)

Modeling is the property catastrophe underwriting panacea.

The science of the reinsurance business has now been boiled down to a series of clicks that direct computerised data crunching. Modeling has transformed the pricing process from a complicated procedure combining expertise, statistical analysis, market knowledge, gut instinct and competitive chutzpah into the simple act of running software. That, in turn, has famously transformed the market into a crème brûlée: hard as a rock until the glassy ceiling of underpricing is penetrated, but soft as custard after that. Modeling sets the thresholds.

Modeling: the early days

The move from no modeling through crude Monte Carlo simulations to the current 'model saturation' in reinsurance pricing has been lightning-fast. Models existed before the early 1990s, but until the painfully costly Hurricane Andrew in 1992 nobody believed them, because they suggested the unthinkable - losses exceeding $8bn from a single storm. Andrew drove the evolution of those primitive engineering models into today's refined probabilistic systems that test the limits of computing power. The latest models consider such meteorological esoterica as the correlation between a hypothetical hurricane's forward speed and the depth of its central pressure (a dependence sketched in the code at the end of this section). Such knowledge is apparently important when assessing whether a scary swirling cyclone is more likely to crash through the sleepy suburbs of Providence, Rhode Island than through the towering insurance offices of Hartford, Connecticut (or the reinsurance layers sold there).

Although everyone uses models today when making catastrophe reinsurance decisions, just a few years ago they were regularly consigned to a special room corralling bespectacled quants who rarely interacted with the market-makers on the other side of the office. People had trouble reconciling model outputs with their old-fashioned calculation of the right price. Often the former was discarded in favour of the tried and tested (if not actually very good) conventional approach. Little else could be expected in the softening market of the late 1990s, since modeling yielded prices that were multiples of market rates.

The mantra of technical underwriting - the cure-all that many pundits have prescribed for the ailing reinsurance sector - has changed opinions. It has driven the supremacy of modeling to its current peak. The phenomenon has never been more pronounced than for US exposures in the past eight months, since the introduction of a next-generation model by the major supplier prompted almost the entire market to change its catastrophe reinsurance buying patterns.
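The forward-speed/central-pressure correlation mentioned above can be sketched in a few lines. This is a toy illustration, not any vendor's method: the bivariate normal choice and all the means, spreads and the correlation sign are invented for the example.

```python
import numpy as np

# Illustrative sketch only: sample a hypothetical hurricane's forward
# speed (kt) and central pressure (mb) jointly, so that the two are
# correlated rather than drawn independently. Every number here is
# made up, not calibrated to any vendor model.
rng = np.random.default_rng(42)

mean = [12.0, 950.0]   # assumed: [forward speed kt, central pressure mb]
std = [4.0, 20.0]      # assumed spreads
corr = 0.4             # assumed sign: faster storms pair with higher
                       # (i.e. weaker) central pressure - purely illustrative

cov = [[std[0] ** 2, corr * std[0] * std[1]],
       [corr * std[0] * std[1], std[1] ** 2]]

speed, pressure = rng.multivariate_normal(mean, cov, size=10_000).T

# A stochastic event set would feed each (speed, pressure) pair into a
# wind-field and damage module; here we just confirm the dependence.
print(f"sample correlation: {np.corrcoef(speed, pressure)[0, 1]:.2f}")
```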

Saturation danger

The extended focus on modeling in catastrophe reinsurance underwriting is now over the top. As one executive from that model developer recently confided, many companies are treating model outputs as gospel, ignoring the uncertainty that is inherent in any predictive technology. And the quants and the front-line underwriters don't always tell each other everything.

For example, models have consistently underestimated reinsurance losses from US Midwest windstorms. The reason was discovered last year by reinsurer Converium: modeling companies count each little storm as a single event, most of which are too small to exceed retentions. In reality, reinsurers group clusters of storms that occur within 72 hours into a single event, allowing insurers to make a recovery, as the sketch below illustrates. Thus reality has a diabolical effect on the accuracy of model outputs.

In 1998 Jim Stanard, the erudite chieftain of RenaissanceRe - the first company to incorporate modeling into its business plan, with extreme success - issued a warning. "Effective use of modeling is a key factor in our ability to produce industry-leading results," he wrote. "But it is vital to understand the weaknesses along with the strengths of these models. Perhaps these models should come with warning labels: 'improper use can be hazardous to your financial health - don't try this at home'."
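The clustering point lends itself to a minimal sketch. All storm dates, losses and the $50m retention below are invented; the 72-hour grouping rule is the only ingredient taken from the article, and chaining consecutive storms is one simplistic reading of how such a clause might be applied.

```python
from datetime import datetime, timedelta

# Hypothetical reinsurance retention and the 72-hour occurrence window.
RETENTION = 50.0          # $m, invented
WINDOW = timedelta(hours=72)

storms = [  # (start time, insured loss in $m) - illustrative only
    (datetime(2002, 4, 1, 6), 30.0),
    (datetime(2002, 4, 2, 18), 25.0),
    (datetime(2002, 4, 3, 12), 20.0),
    (datetime(2002, 6, 10, 0), 15.0),
]

# Model's view: each storm is its own event, and none exceeds the retention.
per_storm = sum(max(loss - RETENTION, 0.0) for _, loss in storms)

# Reinsurer's view: chain storms starting within 72 hours of the previous
# one into a single event, then apply the retention once per event.
events, current = [], [storms[0]]
for prev, nxt in zip(storms, storms[1:]):
    if nxt[0] - prev[0] <= WINDOW:
        current.append(nxt)
    else:
        events.append(current)
        current = [nxt]
events.append(current)

clustered = sum(max(sum(l for _, l in ev) - RETENTION, 0.0) for ev in events)

print(f"recovery, storm-by-storm: {per_storm:.0f}m")   # 0m
print(f"recovery, 72h clustering: {clustered:.0f}m")   # 25m
```

The first three storms total $75m, so the clustered event pierces the $50m retention for a $25m recovery that the storm-by-storm view misses entirely - the underestimation described above.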

Plotting: its downfalls

That seems unlikely for most of us, since the science behind the models is more than a little arcane. Extreme Value Theory looks well beyond traditional value-at-risk models by abandoning the bell curve, but to use it in catastrophe frequency analysis one must understand the Generalised Extreme Value (GEV) distribution and the Generalised Pareto Distribution (GPD), probably combined in the Peaks-Over-Threshold (POT) method. GEV distributions - which unify the Fréchet, Weibull and Gumbel families - describe the behaviour of block maxima, while the GPD has proven effective for modeling the distribution of excesses over an extreme threshold. Combine your findings into the POT statistical model (typically the exceedances occur as a Poisson process, whereas the excesses follow a GPD) to estimate possible return periods for various events, as sketched below, which should give you enough information to build the cost of extremely rare events into your pricing model.

Or you could just drop your prices by 5%, or whatever seems competitive based on the market trend, and write as much business against your capital as the ratings agencies will let you get away with. Some talk of modeling is bound to help.
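For anyone who does fancy trying it at home, here is a hedged sketch of the POT recipe above using scipy's genpareto, run on synthetic lognormal "losses" rather than real catastrophe experience; the threshold choice, observation span and return period are all arbitrary.

```python
import numpy as np
from scipy import stats

# Synthetic data standing in for a loss history - not real experience.
rng = np.random.default_rng(7)
losses = rng.lognormal(mean=2.0, sigma=1.0, size=2_000)  # fake losses, $m
years = 100.0                                            # assumed span

# 1. Pick a high threshold u and take the excesses over it.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u

# 2. Treat exceedances of u as a Poisson process: rate per year.
rate = len(excesses) / years

# 3. Fit a Generalised Pareto Distribution to the excesses,
#    with the location pinned at zero.
shape, _, scale = stats.genpareto.fit(excesses, floc=0.0)

# 4. Return level: the loss exceeded on average once every T years
#    solves rate * (1 - GPD_cdf(x - u)) = 1/T.
T = 250.0
x = u + stats.genpareto.ppf(1.0 - 1.0 / (rate * T),
                            shape, loc=0.0, scale=scale)

print(f"threshold u = {u:.1f}m, fitted GPD shape xi = {shape:.2f}")
print(f"estimated {T:.0f}-year loss: {x:.1f}m")
```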
