Aspen Head of Group Catastrophe Risk Management Alan Calder talks about the uncertainties caused when data is limited and warns against the simplicity of merely viewing the world as emerging and developed markets

Limited data causes significant uncertainties in cat risk assessment, but Alan Calder cautions against a one-size-fits-all mindset that divides the world between “developed” and “emerging” markets. Examples of robust datasets are emerging for all the key components of models, and further progress is expected. Being at the forefront of understanding and deploying this technology continues to be an important part of Aspen’s underwriting strategy.

Diversity

Many reinsurance and insurance entities look to emerging markets to diversify their risk profiles and to seek new opportunities for business growth and expansion. This is not a new trend, but it has accelerated of late as mature, highly competitive markets stagnate. Over the long run the opportunities in emerging markets are attractive, with demographic and economic changes representing significant growth potential, but there is no such thing as a “free lunch”: there are downside risks to manage, including new catastrophe risk scenarios to assess. This was most clearly brought home by the Thai flood losses in 2011.

Assessment of cat risk within emerging markets is not new. In particular, models have been available since the 1990s for a number of the most heavily exposed earthquake zones, ranging from Chile to Indonesia. The significant growth in exposures from increasing urbanisation and the rise of megacities in many seismically active areas is also a well-recognised trend. Still, the Thai floods crystallised awareness of the many “gaps” in the toolkit for assessing cat risk outside the traditional peak peril zones (i.e. U.S. hurricane and earthquake, European winter storms, Japanese typhoon and earthquake), and highlighted the diversity of such markets, perils and available exposure data.

Earthquake evolution

Earthquake has traditionally been well provided for in terms of model availability; in some senses, “quake” models have been the easiest of the major cat perils to provide.

Forms of global seismic data have long been readily available through agencies such as the U.S. Geological Survey, and assumptions on building codes can be translated across borders or drawn from post-event survey work. However, many of the earthquake models in use today are still relatively unproven from a large insured-loss perspective and may not always be perfectly tuned to the local situation. Yet each model revision usually incorporates a much greater focus on local seismic records and on local building codes and practices, giving more confidence in results. Chile provides a good example of this evolution: the initial models of the 1990s were updated as data quality improved.

By the time of the Maule earthquake in February 2010, modelling practices were well established and the models generally represented the risk; in our view, they arguably erred on the side of conservatism in some contexts, given how well the insured Chilean building stock responded thanks to Chile’s robust building code. Chile benefits from a long-standing open economy, with international insurance groups well established in the domestic marketplace. The latest generation of Chilean earthquake models now includes physical simulations and the impact of tsunamis generated by earthquake events. This addition came only shortly after similar effects were released for “peak zones” such as Japan.

Such evolution is by no means universal. Frequently we see global earthquake catalogues applied that have significant gaps or inconsistencies relative to more recent regional or national studies. Vulnerability functions may be limited to broad risk types only, or may not reflect sub-national variations. This is in contrast to more established regions. The approach taken to the difference between the quality of the general building stock and that of the insured buildings is a particular example: many emerging markets still have low take-up rates, with typically only the wealthier insured. Users need to be aware of these limitations, consult the documentation and carry out validation processes to ensure models match their risk profile and expectations. Consideration also needs to be given to whether adding assumptions to account for gaps in the modelling, such as the lack of secondary effects (e.g. tsunami impact or landslides), is a worthwhile exercise. Where applicable, Aspen incorporates these additional assumptions, as the simplified sketch below illustrates.
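One simple way to represent such an assumption is a loading factor applied to modelled event losses where a known secondary effect is missing. The Python sketch below is a minimal, hypothetical illustration; the 15% loading, event flags and loss figures are assumptions for demonstration only, not Aspen’s or any vendor’s actual method.

```python
# Hypothetical sketch: adjusting modelled losses for a missing secondary
# peril (e.g. tsunami) via a simple loading factor. All values are assumed.

TSUNAMI_LOADING = 0.15  # assumed 15% uplift where tsunami is not modelled

def adjust_event_loss(event):
    """Apply an uplift to the ground-up loss where the event is flagged as
    capable of generating a tsunami the base model does not capture."""
    loss = event["ground_up_loss"]
    if event.get("tsunamigenic", False):
        loss *= 1.0 + TSUNAMI_LOADING
    return loss

# Illustrative two-event loss table (losses in USD, invented figures).
events = [
    {"event_id": 1, "ground_up_loss": 120e6, "tsunamigenic": True},
    {"event_id": 2, "ground_up_loss": 45e6,  "tsunamigenic": False},
]
adjusted = {e["event_id"]: adjust_event_loss(e) for e in events}
print(adjusted)  # {1: 138000000.0, 2: 45000000.0}
```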

Wind and water

For tropical cyclones, models have again been in place for many years but, much as with the initial earthquake versions, they understandably tended to borrow perspectives from more well-trodden ground, particularly the Atlantic basin models.

There is, however, a very different risk driver: rainfall and flooding generally cause more impact in Asia, whereas severe winds are a greater feature in the U.S. Terrain helps explain the difference; for example, Taiwan’s most densely populated areas lie on the leeward coast, sheltered by mountains, while Florida, in contrast, is almost entirely flat and more vulnerable as a result. New-generation cyclone models in Asia now focus on rainfall elements and the resolution needed to determine landslide risk. There is also a trend towards an expanded footprint of countries, such as Vietnam, recognising the full extent of North West Pacific basin impacts beyond the core set of most exposed countries, such as the Philippines.

Flood is widely recognised as the biggest remaining gap in the offering from traditional cat model vendors, particularly for many emerging zones with significant riverine or flash flood potential linked to cyclones or other climatic activity. Given rapid urbanisation and rising insurance take-up, this is a challenging peril to price and assess using traditional actuarial approaches or catastrophe models as they currently stand. We are seeing several responses, though, with new tools and data emerging.

Flood models are being provided by “traditional” cat model vendors, brokers, new entrants and research initiatives, for example the Global Flood Analyser. Indeed, Aspen Re’s research team recently assessed more than 10 different companies offering flood risk solutions for a diverse series of markets. The range extended from risk maps aimed at primary underwriting risk selection to full catastrophe models deploying stochastic event sets to assess portfolio correlations and accumulations. Such choice in itself presents a challenge: obtaining the right balance and fit depends upon the objectives set and the portfolio mix.
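To illustrate what a stochastic event set adds over a risk map, the sketch below shows how occurrence exceedance probability (OEP) losses for a portfolio can be read off simulated annual maximum losses. It is a minimal illustration with entirely hypothetical, randomly generated losses; production models are far richer, but the accumulation principle is the same.

```python
import numpy as np

def oep_curve(annual_max_losses, return_periods=(10, 50, 100, 250)):
    """Estimate occurrence exceedance probability (OEP) losses from
    simulated annual maximum losses, one value per simulated year."""
    sorted_losses = np.sort(annual_max_losses)[::-1]  # largest first
    n_years = len(sorted_losses)
    out = {}
    for rp in return_periods:
        # Empirical quantile: the loss exceeded roughly once every rp years.
        rank = max(int(n_years / rp) - 1, 0)
        out[rp] = sorted_losses[rank]
    return out

# Toy example: 10,000 simulated years of portfolio flood losses
# drawn from an assumed lognormal distribution (purely illustrative).
rng = np.random.default_rng(42)
annual_max_loss = rng.lognormal(mean=12, sigma=1.5, size=10_000)
print(oep_curve(annual_max_loss))
```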

Now the challenge is to maintain and develop models in a rapidly evolving risk landscape. Flood impact, more than other perils, is directly influenced by human activity, with the risk profile altered by land use changes and defences. Greater awareness and investment by government agencies, and support via initiatives such as those from the United Nations and the World Bank, are resulting in flood management schemes. For example, Jakarta, Indonesia, has suffered significant flooding over the years but is now subject to a comprehensive new scheme to manage flood risk following the completion of a new canal in 2011. Improved defences for industrial parks have also been constructed following the significant 2011 floods in Thailand.

Data improvement

The theme of diversity continues with the insured exposure data fed into cat models across emerging markets. The quality of input has long been rightly viewed as material to the benefit derived from using such a tool to assess risk outcomes. As use has evolved, there is now a much greater range of data levels and approaches, in contrast to the rather coarse, highly aggregated data of the past. For example, in parts of Latin America, such as Colombia and Peru, reinsurance cedents have provided high-resolution latitude and longitude exposure data of a standard that is arguably ahead of the resolution of many of the risk models available. Data is now frequently split by postal code or detailed administrative codes in many emerging markets rather than by Catastrophe Risk Evaluation and Standardising Target Accumulations (CRESTA) zone. But the picture is neither uniform nor consistent.
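The sketch below is a hypothetical illustration of the resolution fall-back this variety forces on exposure management: coordinates where available, then postal-code centroids, then CRESTA zone centroids. The lookup tables are illustrative stubs, not real datasets.

```python
# Hypothetical sketch of a geocoding fall-back for exposure records:
# prefer coordinates, then postal code, then CRESTA zone. The centroid
# lookups below are invented stubs for demonstration only.

POSTCODE_CENTROIDS = {"110111": (4.60971, -74.08175)}  # assumed Bogotá code
CRESTA_CENTROIDS = {"CO-1": (4.5, -74.0)}              # assumed zone centroid

def resolve_location(record):
    """Return ((lat, lon), resolution_level) for an exposure record."""
    if record.get("lat") is not None and record.get("lon") is not None:
        return (record["lat"], record["lon"]), "coordinate"
    if record.get("postcode") in POSTCODE_CENTROIDS:
        return POSTCODE_CENTROIDS[record["postcode"]], "postcode"
    if record.get("cresta") in CRESTA_CENTROIDS:
        return CRESTA_CENTROIDS[record["cresta"]], "cresta"
    return None, "unresolved"

print(resolve_location({"postcode": "110111"}))  # falls back to postcode
print(resolve_location({"cresta": "CO-1"}))      # coarsest resolution
```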

High-resolution exposure data is essential for leveraging the benefits of flood modelling. One challenge in this respect is how best to take advantage of satellite-derived industrial cluster datasets for Asian countries, which help determine the share particular insurers have of such risks under the relevant reinsurance treaty coverages. At Aspen, we believe it is important to have a dialogue to ensure the appropriate assumptions are developed and used, as the simplified sketch below suggests.
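As a simplified, hypothetical illustration of such an assumption, the sketch below allocates a satellite-derived industrial cluster’s total insured value to a cedent using an assumed market share; both figures are invented for demonstration.

```python
# Hypothetical sketch: allocating satellite-derived industrial cluster
# exposure to a cedent via an assumed market share, where the cedent's
# own data cannot locate the risks precisely. All figures are invented.

cluster_tiv = 2.4e9          # assumed total insured value of cluster (USD)
cedant_market_share = 0.08   # assumed 8% share of the local market

assumed_cedant_exposure = cluster_tiv * cedant_market_share
print(f"Assumed cedent exposure in cluster: {assumed_cedant_exposure:,.0f} USD")
```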

The improvement in data quality is a very welcome development, as it demonstrates strong risk management approaches. We recognise that this takes time and investment in the systems required to capture and process the data, but the ability to finely locate and assess exposures is a cornerstone of effective catastrophe risk management. Undoubtedly, there is a long way to go, with certain markets suffering from institutional limitations such as the absence of basic address standards. Ultimately, we envisage progress and hope that, despite the market cycle, value is seen in improving risk data so that partnerships are enhanced.