Unless modelling makes common sense and you can explain it to the customer, it won't work, say S&P analysts

The increased prominence of predictive modelling tools over the past dozen years has generated reams of data that can help property/casualty insurers set prices, sharpen their underwriting criteria, and get a handle on market dynamics in everything from typhoons to auto coverage.

But by itself, a computer model won't turn a weak insurer into a strong one, agreed three insurance executives at Standard & Poor’s Insurance 2007 conference, held June 3-5 in New York City.

Without pricing discipline, a sound strategy, and an understanding of the market, predictive modelling is just another tool that can be used or misused. "Discipline may be more important than all the modelling in the world," said panellist Stephen Way, former CEO of HCC Insurance Holdings Inc. and founding partner of SLW International LLC. "The best modelling is your own experience."

Predictive modelling, agreed the panellists, has its greatest use in predicting loss levels in lines of business where data is plentiful and underwriting standards are comparable, such as workers' compensation or automobile. Such modelling can still be useful--but somewhat less so--in predicting catastrophic losses from hurricanes, earthquakes, and the like.

"The models are only as good as the data going into them," said Peter Nakada, senior vice-president and managing director of Risk Management Solutions. He noted that modellers must be sure that the type of data they choose to collect is correct, that the data entered is valid, and that underwriters who use the information can double-check the data and incorporate any changes into the final model.

A lack of data, or incorrect data, can wreak havoc in the property/casualty sector when it comes to catastrophes. "After [Hurricane] Andrew in 1992, everybody used catastrophe models," said Nakada. "Everybody runs them, but they don't always use them well. You have to have some intuition around how you use the model and incorporate it into executive thinking. Hurricane Katrina, however, was a big example of where modelling fell down. The models were pretty good with wind damage, but they definitely missed the flood in New Orleans."

In fact, the panellists agreed, modelling can be at its most uncertain for the rare catastrophic events that cause the largest claims. "A lot of the events that would lead to large probable maximum losses in the Northeast haven't occurred," said Thatcher. "It's tough [to model them] because we haven't had enough experience to know."

Heavy use of predictive modelling in fact began when markets were hard, while insurers were making money and preparing for the next cyclical downturn. "It started when markets were hard, to be able to [minimise rate] reductions when markets turned soft," said Thatcher. "Predictive modelling was laid on top of historical profitability."

The rates an insurer actually charges can differ from what predicted losses suggest they should be, even when the models seem to make actuarial sense. "Unless modelling makes common sense and you can explain it to the customer, it won't work," added Thatcher.

Modelling tools continue to evolve. Some catastrophe bonds are now modelled so that they are triggered not on total damage loss but on wind speeds. And new ones could be based on things as esoteric as a flood in London or the threat of terrorism at the Beijing Olympics.
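The parametric trigger described above--paying out on a measured wind speed rather than assessed damage--can be sketched in a few lines. This is a purely hypothetical illustration: the trigger and exhaustion speeds and the linear payout schedule are invented for the example, not terms of any real bond.

```python
# Hypothetical parametric cat-bond trigger (illustrative only: the
# 120/160 mph thresholds and linear schedule are assumptions, not the
# terms of any actual instrument).

def payout_fraction(peak_gust_mph: float,
                    trigger_mph: float = 120.0,
                    exhaust_mph: float = 160.0) -> float:
    """Fraction of principal paid to the sponsor, scaling linearly
    between the trigger speed (0%) and the exhaustion speed (100%)."""
    if peak_gust_mph <= trigger_mph:
        return 0.0
    if peak_gust_mph >= exhaust_mph:
        return 1.0
    return (peak_gust_mph - trigger_mph) / (exhaust_mph - trigger_mph)

# Below the trigger nothing is paid; above exhaustion the full amount is.
print(payout_fraction(100.0))  # 0.0
print(payout_fraction(140.0))  # 0.5
print(payout_fraction(170.0))  # 1.0
```

The appeal of such a trigger is that the payout depends on an objectively measurable index, so investors need not wait for loss adjustment--which is also why, as the panellists note, the index must be modelled carefully against the actual exposure.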

Yet even with new mechanisms and tools, the best risk managers will have to use the elusive mix of common sense, intuition, and experience. Otherwise, said Way, they will be caught in the same trap that always bedevils insurers. "The losses don't decline as much as the rates," he said. "Modelling is just one tool you use. Then you have to decide how conservative you want to be."