ERM ambitions are rising as firms upgrade their models. But don’t expect your standard computing networks to keep up

Performing model runs once or twice a year is a thing of the past. Capital allocation and underwriting decisions are increasingly based on sophisticated analytics, which are now a key component of reinsurance companies’ ERM programmes.

Regulators and rating agencies specify the use of economic capital models to analyse risk on a holistic basis and to calculate solvency capital requirements. But getting the best results from these actuarial models is putting pressure on traditional processing power.

In Europe, the Solvency II regime, which comes into force in 2012, will require insurers and reinsurers either to use the standard formula or to develop their own internal models. Many firms have opted for the bespoke route, as it more accurately reflects the nuances of their risk profile, and are therefore busy upgrading their models and systems.

The aim is to have a joined-up approach to risk management, says Towers Watson’s global head of Solvency II, Mark Chaplin. “If you move towards having that more holistic view, it creates a competitive advantage through more efficient risk and capital management.

“But to achieve this, you need to have more strength in the modelling systems to provide that data at the top level so decisions can be made. Only then can you monitor risks on a frequent enough basis and do supplementary ‘what if’ analyses. The bottom line is that, to support your ERM ambitions, you need a flexible and robust modelling framework that is quick.”

Bigger, better, quicker

The use of sophisticated capital models will lead many insurers to seek greater computing power, says a white paper jointly published by Microsoft and Towers Perrin. The paper, High-Performance Computing and Insurance Actuarial Modelling, predicts that as models become bigger and more complex, they will quickly outgrow company computer networks.

“Increasingly, companies want better risk management information,” the report states. “Fast, frequent, detailed projections allow them to manage risks more effectively, use capital more efficiently, set prices more accurately and target their portfolios better.

“High-performance computing (HPC) leads to robust, timely analysis to support quarterly, monthly and even daily decision-making.

“Companies want to run ever-more complex ‘what if’ calculations, including intensive stochastic-on-stochastic projections. It is no longer acceptable to wait several days to get an answer.”
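
To see why nested runs strain hardware, consider a minimal sketch of a stochastic-on-stochastic projection (the distributions, figures and function names are illustrative assumptions, not taken from the paper). Every real-world outer scenario triggers its own inner valuation simulation, so the total work multiplies:

```python
import numpy as np

rng = np.random.default_rng(42)

def inner_valuation(outer_state, n_inner=1_000):
    """Inner simulation: value a toy liability conditional on one
    real-world outer scenario (the payoff is illustrative only)."""
    paths = rng.normal(loc=outer_state, scale=0.2, size=n_inner)
    return np.maximum(paths, 0.0).mean()

def nested_projection(n_outer=1_000, n_inner=1_000):
    """Stochastic-on-stochastic: each outer scenario triggers a full
    inner run, so total work is n_outer * n_inner paths."""
    outer_states = rng.normal(size=n_outer)
    return np.array([inner_valuation(s, n_inner) for s in outer_states])

values = nested_projection()
print(f"{values.size:,} outer scenarios, {1_000 * 1_000:,} paths in total")
```

Doubling either dimension doubles the runtime, which is why answers that take days quickly become the norm on a single machine.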

Whether firms take the standard or the bespoke route, all will be required to put their capital models to greater use than in the past. But those developing their own internal models will need greater computational power to cope with the added complexity.

Chaplin explains: “Stochastic calculations have many similarities to those required for older statutory balance sheet or traditional embedded value calculations or cash flow projections, but have to be performed 1,000 or 10,000 times even before multiplying up for sensitivity runs.

“That kind of scaling up of the number of calculations required is where you potentially put enormous strains on existing systems.”
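
The arithmetic behind that strain can be made concrete with a back-of-envelope calculation (the number of sensitivity runs below is an assumed figure for illustration; the scenario count is Chaplin’s):

```python
# Back-of-envelope scaling of a stochastic capital model run.
scenarios_per_run = 10_000   # upper end of the range Chaplin cites
sensitivity_runs = 25        # assumed: base run plus stress scenarios

projections = scenarios_per_run * sensitivity_runs
print(f"Balance-sheet projections per reporting cycle: {projections:,}")
# What was once a single deterministic projection becomes 250,000.
```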

The aim is to aggregate risks across all the entities and subsidiaries of an insurance company. For multiline, multinational insurers and reinsurers, this will mean greater volumes of data and more complex model calculations.
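
As a rough sketch of what such aggregation involves (the group structure, correlations and distributions below are illustrative assumptions), a group model simulates entity losses jointly and measures capital on the combined distribution, which typically reveals a diversification benefit:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy group of three entities with correlated annual losses.
n_sims = 50_000
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
normals = rng.multivariate_normal(np.zeros(3), corr, size=n_sims)
losses = np.exp(normals)                 # lognormal loss per entity

def var_995(x):
    """99.5th percentile, the Solvency II calibration level."""
    return np.quantile(x, 0.995)

standalone = sum(var_995(losses[:, i]) for i in range(3))
group = var_995(losses.sum(axis=1))      # aggregate first, then measure
print(f"Sum of standalone capital:       {standalone:.2f}")
print(f"Group capital after aggregation: {group:.2f}")   # usually lower
```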

“HPC or grid processing enables you to spread the load by splitting complex stochastic models into smaller jobs and running them in parallel, speeding up the delivery of results,” Chaplin says. “Grid processing also enables insurers to make better use of available hardware and provides the flexibility to scale up significantly by taking advantage of dedicated hardware estates internally or through an outsourced provider.”
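
A minimal sketch of that split-and-recombine pattern, using Python’s standard multiprocessing pool as a stand-in for a real grid scheduler (chunk sizes, distributions and function names are illustrative):

```python
from multiprocessing import Pool

import numpy as np

def run_chunk(args):
    """One grid job: simulate an independent block of scenarios."""
    seed, n_scenarios = args
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=0.0, sigma=1.0, size=n_scenarios)

def run_on_grid(n_scenarios=100_000, n_jobs=8):
    """Split one large stochastic run into independent chunks with
    distinct seeds, execute them in parallel, then recombine."""
    chunk = n_scenarios // n_jobs
    jobs = [(seed, chunk) for seed in range(n_jobs)]
    with Pool(n_jobs) as pool:
        parts = pool.map(run_chunk, jobs)
    return np.concatenate(parts)

if __name__ == "__main__":
    losses = run_on_grid()
    print(f"{losses.size:,} scenarios; 99.5% VaR = {np.quantile(losses, 0.995):.2f}")
```

Because Monte Carlo scenarios are independent, each chunk needs no communication with the others – the property that makes grid and cloud estates so effective for this workload.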

Reinsurers that outgrow their existing networks have a number of options. HPC can be bought or rented, and used on a continuous basis or tapped into at times of peak demand, such as during renewals.

Blue sky thinking

Cloud computing also offers insurance companies the potential for massive scalability. The cloud – a metaphor for the internet – allows companies to tap into vast computing resources. For all the buzz of recent years, it is not a new idea, but it is only now beginning to be used by insurers and reinsurers.

Google, Amazon and Microsoft all provide cloud solutions and have been competing to offer their customers access to dynamically scalable computing power. Concerns over security have held back some IT departments, but as these fears are assuaged, the boundaries between traditional computer networks and outsourced HPC are likely to blur.

Enterprise grid computing (EGC) also allows companies to scale up by linking large numbers of loosely networked computers that work together to perform high-performance calculations.

From this year, catastrophe modelling firm Risk Management Solutions will begin to offer its clients the option of using its models in an HPC environment. EGC offers reinsurers an opportunity to examine their books of business more closely by allowing them to run a greater number of analytical cycles.

“It’s about using the models, understanding their uncertainties and making informed judgments on the models’ fitness for decisions,” Risk Management Solutions’ chief executive, Hemant Shah, explains.

As insurance and reinsurance companies develop and use complex actuarial models, high-performance computing will become increasingly vital. Those that continue to rely on their own networks could face bottlenecks at times of peak demand, putting them at a competitive disadvantage.

Brain power

Beyond the buzzwords, cloud, grid and HPC platforms allow reinsurers to crunch numbers on a scale that was previously the domain of research professionals. But while new insight into risk will result, models still need to be interpreted by individuals. The challenge for reinsurers will be translating model outputs into informed decisions.

“At no point will you be able to build and then run a model that tells you how to run the company,” Chaplin warns. “This is about providing more accurate, more timely, more robust information and allowing a wider and deeper investigation of alternative strategies.

“But management will still need to develop the alternative strategies and the board will need to settle on where it believes it can take risk with greatest effect in terms of adding value for shareholders and policyholders – these models are just tools to facilitate this, and it is crucial to understand their limitations.” GR

The critical list

The hallmarks of top-performing ERM programmes:

1. Board-level commitment to ERM as a critical framework for successful decision-making and driving value.

2. Dedicated risk executive in a senior-level position, driving and facilitating the ERM process.

3. ERM culture that encourages full engagement and accountability at all levels of the organisation.

4. Engagement of stakeholders in risk management strategy development and policy setting.

5. Transparency of risk communication.

6. Integration of financial and operational risk information into decision-making.

7. Use of sophisticated quantification methods to understand risk and demonstrate added value through risk management.

8. Identification of new and emerging risks using internal data as well as information from external providers.

9. A move from focusing on risk avoidance and mitigation to leveraging risk and risk management options that extract value.
