The strength of a decision can only be as good as the assumptions behind it. In catastrophe modelling, writes Kenna Mawk, high-quality data has become a competitive necessity.

Risk management has become much more sophisticated in recent years. As catastrophe models have become better at differentiating risk, insurance and reinsurance companies have become more adept at using them, making model results a critical part of the underwriting and reinsurance placement process.

However, the credibility of modelled results is only as great as the integrity of the exposure data fed into the models; poor data can have a dramatic effect on model output. In fact, when missing or incorrect information is corrected and completed, loss estimates can change by a factor of four for a single building, or by as much as 25% across a whole portfolio.

Catastrophe models deal mainly in probabilities; they offer no single precise answer. Exposure data, however, is more concrete – you either know where a building is, what it is used for and what it is worth, or you don’t. Indeed, exposure data quality is the one element of uncertainty in catastrophe risk models that can be controlled.

Improving data quality has therefore become a major concern for many (re)insurance companies. Rating agency requirements and new regulations that put the onus on enterprise risk management (ERM) are helping to drive the trend. The value proposition for good-quality data is changing, and the incentives are increasing. Companies that get the data issue right not only face fewer surprises after a catastrophe, but can gain an immediate advantage through improved financial strength ratings, the right level of regulatory capital and better reinsurance terms and conditions.

Many (re)insurance companies are now trying to establish a holistic, well-informed approach to data quality by incorporating data-quality best practices into their underwriting assessment and checks, portfolio management and capacity allocation processes. Some organisations are even enforcing underwriting discipline by using data quality to determine how much capital each underwriting business unit receives: rather than allocating capacity solely on the basis of modelled risk, business units with good-quality data have first call on capital, all other things being equal.
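One way such an allocation rule could work is sketched below. This is a minimal, purely illustrative example – the scoring scale, unit names and capacity figures are assumptions, not a description of any particular company’s process – but it shows the principle of good data having first call on capital.

```python
def allocate_capacity(units, total_capacity):
    """Illustrative rule: rank business units by data-quality score and
    grant each unit's requested capacity in that order until the total
    runs out, so good data gets first call on capital."""
    remaining = total_capacity
    allocation = {}
    for unit in sorted(units, key=lambda u: u["data_quality_score"], reverse=True):
        granted = min(unit["requested_capacity"], remaining)
        allocation[unit["name"]] = granted
        remaining -= granted
    return allocation


# Hypothetical units: equal requests, but the better-scored unit is filled first.
units = [
    {"name": "Property US", "data_quality_score": 0.92, "requested_capacity": 60},
    {"name": "Property EU", "data_quality_score": 0.70, "requested_capacity": 60},
]
print(allocate_capacity(units, 100))  # {'Property US': 60, 'Property EU': 40}
```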

As the models and analytics become increasingly sophisticated – and the industry embraces their use – the data quality bar continues to rise.

The value of quality data

For larger insurers – who typically offer a broad range of insurance products and services – exposure data can vary in completeness and accuracy. These companies usually access the market through many different agents and brokers, and maintaining good data quality across the board is a constant challenge. As such, leading organisations are “mining” their data quality to better understand issues further upstream in the process, and assessing data quality by cedant or account as part of the underwriting process. They are also examining the source of the information, such as agent, branch, underwriter and broker. By improving the quality of exposure data, (re)insurance companies are starting to realise that they can score crucial points over their competitors.


According to a survey of reinsurers by Ernst & Young in February this year, the quality of cedants’ data was identified as the greatest concern around the ability to underwrite property catastrophe risk. The respondents indicated that if cedants were to eliminate some of the uncertainty from their data, they would reward them for reducing their risk. Moreover, some reinsurers are now charging a loading of up to 25% to insurers they believe have poor data (or withholding a quote altogether), and crediting those they believe have good data by up to 10%.
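To see how such a loading or credit might flow through a price, consider the short sketch below. The function, score scale and band thresholds are assumptions for illustration only; the 25% and 10% figures are simply the bounds quoted above.

```python
def adjust_premium_for_data_quality(technical_premium, data_quality_score):
    """Illustrative pricing adjustment: scores run from 0 (poor) to 1
    (excellent); poor data attracts a loading of up to 25%, good data
    earns a credit of up to 10%, scaled linearly within each band.
    The bands themselves are assumptions, not survey findings."""
    if data_quality_score < 0.4:
        loading = 0.25 * (0.4 - data_quality_score) / 0.4  # up to +25%
        return technical_premium * (1 + loading)
    if data_quality_score > 0.8:
        credit = 0.10 * (data_quality_score - 0.8) / 0.2   # up to -10%
        return technical_premium * (1 - credit)
    return technical_premium  # middle band: no adjustment


# A cedant with very poor data pays close to the full 25% loading.
print(adjust_premium_for_data_quality(1_000_000, 0.05))  # 1218750.0
```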

Over the past 18 months, rating agencies have begun to take more interest in understanding the quality of exposure data – both for the credibility of (re)insurers’ reported modelled catastrophe risks and for the strength of their enterprise risk management practices. At the Benfield 2008 Catastrophe Summit, A.M. Best said: “There are several key factors to a strong catastrophe management discipline, and data quality is the foundation.” Increasingly, the rating agencies are looking for hard evidence of quality data and of improvements in data collection. They take the view that if a company does not have control over issues that affect the balance sheet – such as data quality and modelling – it is unlikely to have robust internal controls.

Focusing where it matters

Data quality should inform pricing and renewal discussions, with quantifiable illustrations of the impact on modelled results. In a soft market, this is even more critical. But measuring quality in a systematic way can be challenging, particularly as there is no industry standard or reproducible measure. Therefore, establishing proper objectives, managing the desired outcomes, and incorporating the metrics into business decision-making is not always straightforward.

So what are the pre-eminent companies doing to improve their data quality processes? First, they are assessing their current data quality to benchmark improvement efforts; what gets measured gets managed. They are also trying to understand the degree to which their modelled results can be affected by data quality – what is the cost of poor data quality in terms of model uncertainty?

Perhaps most critical of all, they are beginning to understand the costs and benefits of data improvement.

But some companies seem to have a false sense of security about their data quality, relying on unsound metrics such as the percentage of street-level geocoding or the completeness of vulnerability attributes such as year built and construction class. These can be easily gamed, so leading companies are digging for the critical insights that ensure data quality is excellent where it matters – in high-hazard, high-limit locations. They are checking for hidden biases, such as undervaluation or optimistic coding, and for any processing issues that are systematically corrupting data and biasing results. Even where there is no overall bias, they want to know whether poor data quality is causing regional skews in model results, affecting reinsurance decisions and setting the stage for post-event surprises.
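A simple way to make a completeness measure reflect where it matters is to weight it by hazard and limit, as in the sketch below. The attribute names, weighting scheme and sample figures are assumptions for illustration, not an industry-standard measure.

```python
def weighted_completeness(locations):
    """Illustrative metric: completeness of key attributes weighted by
    each location's limit and hazard score, so gaps at high-hazard,
    high-limit sites hurt the score far more than gaps elsewhere."""
    attrs = ("year_built", "construction_class", "occupancy", "geocode_level")
    total_weight = 0.0
    weighted_score = 0.0
    for loc in locations:
        weight = loc["limit"] * loc["hazard_score"]  # exposure-at-risk weight
        filled = sum(1 for a in attrs if loc.get(a) not in (None, "", "UNKNOWN"))
        weighted_score += weight * filled / len(attrs)
        total_weight += weight
    return weighted_score / total_weight if total_weight else 0.0


# Two hypothetical locations: the raw attribute count looks 75% complete,
# but the gaps sit on the high-hazard, high-limit site, so the score is ~0.50.
portfolio = [
    {"limit": 50_000_000, "hazard_score": 0.9, "year_built": None,
     "construction_class": "UNKNOWN", "occupancy": "commercial", "geocode_level": "postcode"},
    {"limit": 1_000_000, "hazard_score": 0.1, "year_built": 1995,
     "construction_class": "masonry", "occupancy": "residential", "geocode_level": "street"},
]
print(round(weighted_completeness(portfolio), 2))  # 0.5
```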

The benefits of data quality are evident both ex ante and after a catastrophic event, when there are fewer surprises and “private” catastrophes. Overall, the quality of the industry’s exposure data has improved significantly. But as catastrophe models become more sophisticated and (re)insurers increasingly use the analytics for critical decisions, the stakes of failing to address exposure data quality are only getting higher.

Kenna Mawk is director of Data Validation Solutions at RMS.