The financial crisis was the ultimate test for many financial institutions’ ERM. Did the models fail, or were they simply misapplied?

“The first step in the risk management process is to acknowledge the reality of risk,” author Charles Tremper wrote. “Denial is a common tactic that substitutes deliberate ignorance for thoughtful planning.” Pre-2007, financial institutions would have asserted they had a full grasp of the reality of the risks facing their business. And then came the credit crunch.

Before the collapse of the US subprime market and the resulting financial crisis, the widespread view was that the banking sector was vastly superior to the insurance sector in its approach to enterprise risk management (ERM). Banks used sophisticated analytical models, such as value at risk (VaR), and performed thousands of runs on these models using vast databases.

But when the crisis reached its height in 2008, many felt the banks’ ERM frameworks had failed. Lehman Brothers collapsed, Merrill Lynch was taken over, and insurer American International Group (AIG) was bailed out, costing the US government and taxpayers hundreds of billions of dollars. The house of cards that the whole system had been built on was revealed for what it was. All it had taken for the system to fall apart was a number of subprime borrowers defaulting on mortgages they could not afford.

The centre cannot hold

So were the models used by the banks flawed, or was the data incomplete? Or had senior management simply failed to consider the worst-case scenario as they happily rode the crest of a bull market wave? AM Best vice-president Ed Easop says: “The financial crisis taught all of us an important lesson, which is that tail events or catastrophic events of a financial nature are not just scenarios – they can happen.”

Easop doesn’t think the insurance industry should lose faith in ERM because of what happened in the banking sector. “We don’t believe the process of ERM failed. We think it’s a good process – having a way to understand all your different risks individually and collectively makes a great deal of sense.

“What we think happened is that there were gaps in the implementation process that were compounded by the extreme and rapidly changing economic conditions. The bottom line is that we believe a lot of it comes down to senior management and how they used the information provided. No matter what tools and metrics you have in place, it still comes down to a group of people sitting down at the board or senior management level and making decisions.”

The unimaginable

In his book The Black Swan: The Impact of the Highly Improbable (April 2007), Nassim Taleb asked why the banks were using Black Monday, 19 October 1987, as their worst-case scenario benchmark when there was potential for a far greater economic disaster.

Even so, he claims banks should not attempt to predict ‘black swan’ events – unexpected and high-impact occurrences that lie outside their model predictions. Instead, Taleb recommends they identify their vulnerabilities and build enough robustness into their businesses to withstand such events. Many now feel his was a voice heard too late as the global financial system unravelled.

“Globalisation creates interlocking fragility, while reducing volatility and giving the appearance of stability,” Taleb wrote. “In other words, it creates devastating black swans. We have never lived before under the threat of a global collapse. Financial institutions have been merging into a smaller number of very large banks. Almost all banks are interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks – when one fails, they all fall.

“The increased concentration among banks seems to have the effect of making financial crises less likely, but when they happen they are more global in scale and hit us very hard. We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we now have fewer failures, but when they occur … I shiver at the thought.”

One reason the banks had failed to consider such a disaster was that the VaR models commonly used to measure risk struggled with extreme tail events. Scenarios such as the collapse of the US subprime market – in which house prices fell more sharply than at any time since the Great Depression of the 1930s – were given little weight in the models because they were considered so unlikely.
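To see the weakness in concrete terms, consider a minimal historical-simulation VaR sketch in Python. It is purely illustrative: the return series and the size of the ‘crisis day’ are invented, and this is not how any particular bank ran its models. But it shows how a 99% VaR fitted to a calm history assigns effectively no weight to a loss several times larger than anything in that history.

```python
import numpy as np

# Illustrative only: historical-simulation VaR on a hypothetical calm sample.
rng = np.random.default_rng(0)

# Hypothetical daily portfolio returns from a benign, pre-crisis-style period:
# small moves, no extreme drawdowns anywhere in the sample.
calm_history = rng.normal(loc=0.0005, scale=0.01, size=1000)

# 99% one-day VaR by historical simulation: the loss exceeded on only
# 1% of days in the observed sample.
var_99 = -np.percentile(calm_history, 1)
print(f"99% one-day VaR from calm history: {var_99:.2%}")

# A tail event far outside the sample, e.g. a 9% one-day loss. The model gives
# it effectively zero weight because nothing like it appears in the data.
crisis_day_loss = 0.09
print(f"Crisis-day loss: {crisis_day_loss:.2%}, "
      f"about {crisis_day_loss / var_99:.1f}x the VaR estimate")
```

The point is not the particular numbers but the structure: a model calibrated only to observed history cannot, by construction, flag a loss that history has never shown it.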

“I think over time if you go back to the risk management culture, companies were loosening their standard metrics as they were looking at the underwriting process,” Easop says. “The real estate market was booming, there was money out there to be lent, and people were looking to get mortgages as the real estate values were rising. So people who really couldn’t afford mortgages were getting mortgages.

“That process worked fine as long as the markets continued to go up – the problem is that things don’t go up forever.”

It was these decisions about how the models were used that left many investment banks so ill-prepared. As Taleb had predicted, the interconnectedness of the financial system proved to be its downfall. Subprime credit risk had been packaged up, highly leveraged and sold on through the banks, magnifying the collapse when it came and infecting all involved. The models’ outputs had failed to predict the crisis because the assumptions behind them were flawed.

Culture clash

The banks’ compensation culture was also a factor. Much has been written about the large bonuses paid out to derivative traders – rewards that ultimately encouraged a risk-taking culture and discouraged whistleblowing. This is less of a concern in the insurance sector, Easop believes.

“Growth for growth’s sake is dangerous. In the past, many insurance companies’ compensation process was largely driven by sales and premium growth,” he says. “Now they are doing a better job of balancing top-line growth, bottom-line growth and the overall health of the balance sheet.”

Easop thinks that the attitude and mindset of a company is just as important as the tools and analytics it uses. “The culture of a company is paramount – it’s as, or more, important than any type of quantitative metric tool, statistics, or corporate dashboard the company may develop. The only way all this stuff actually ends up with a company making sound strategic decisions is when the senior management takes all these tools and metrics with a disciplined, common-sense approach.”

The risk management process is about understanding that, where there are rewards, there are also risks, Easop says. Applying this to an insurance setting is potentially a more straightforward task. Every underwriting decision is based on weighing up those two factors and then pricing the risk appropriately.

A different business

It is true that insurance companies have fundamentally different business profiles to investment banks. They are less susceptible to a ‘run’ than the banks, gaining their income through premiums as well as investment returns. The problems experienced by AIG, and a small number of other insurers, were primarily a result of their non-insurance operations – in AIG’s case, its involvement in complex credit derivatives.

History shows the insurance industry is capable of making similar mistakes in risk management, however. Some liken the financial crisis to the LMX (London market excess of loss) spiral in the late 1980s. Here, underpriced retrocessional risk was passed around the market and concentrated among too few players – a fact that only became evident when the losses began to accumulate.

The claims related to a series of events, including the Piper Alpha oil platform explosion in the North Sea, the Exxon Valdez oil spill in Alaska, Hurricane Hugo, the San Francisco earthquake, and the growing trickle of asbestos-related claims on casualty books.

Over a decade later, September 11 was another unforeseen event, showing that large losses could occur across lines of business that were previously thought to be uncorrelated, such as aviation, property and fine art. Catlin’s head of ERM, Paul Martin, says: “We picked up claims from areas of business we didn’t expect to.

“They cancelled the whole of the NFL programme, for example, so we had to pay all the contingency claims and losses for that. Before 9/11, we couldn’t have foreseen an event where the NFL would have been cancelled like that. So there are knock-on impacts – you learn how some of the products you sell are linked when extreme events happen.”

It is because the insurance industry has experienced its own black swans that insurers and reinsurers are potentially better prepared for extreme events. Firms with catastrophe exposures have long understood the high-impact nature of hurricanes and earthquakes. But Hurricane Katrina came as a surprise even to them. The magnitude of the insured loss, intensified by the collapse of the levees in New Orleans, caught out some reinsurers, with Rosemont Re, Alea and Quanta Capital entering run-off.

“With the catastrophe models, management always overlays a prudent load because there is a recognition that the models are just models,” Martin says. “If your data is poor, it doesn’t matter how good your model is. With Katrina, a lot of the surprises in companies’ modelling were to do with data quality – and the whole industry is moving to improve that. But if you take models blindly, you end up making poor decisions.”
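As a rough sketch of the kind of overlay Martin describes, with the modelled losses, load factor and data-quality margin all assumed for illustration rather than taken from any insurer or model vendor, the adjustment might look something like this:

```python
# Illustrative only: applying a management "prudent load" to catastrophe-model
# output. All figures below are assumptions, not real model results.

# Modelled losses ($m) at two return periods, as a catastrophe model might report.
modelled_losses = {"1-in-100 year": 120.0, "1-in-250 year": 210.0}

MANAGEMENT_LOAD = 1.25      # prudent overlay because "the models are just models"
DATA_QUALITY_MARGIN = 1.10  # extra margin where exposure data is known to be poor

for return_period, loss in modelled_losses.items():
    loaded = loss * MANAGEMENT_LOAD * DATA_QUALITY_MARGIN
    print(f"{return_period}: modelled ${loss:.0f}m -> loaded ${loaded:.0f}m")
```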

A lesson learned

Understanding that models are tools, which contain imperfections and are based on assumptions, was one lesson insurers and reinsurers learned in the aftermath of Hurricane Katrina. While some blamed the catastrophe models, others accepted they had placed too much reliance on their outputs.

Aon Benfield’s ERM practice leader, Chris Myers, says: “The financial crisis was an extreme event for all markets, but when you think about Hurricanes Katrina and Wilma and other natural catastrophes, insurance companies have had a lot of experience with extreme events.”

Myers thinks insurance companies have a different philosophy to the banks in how they look at risk taking, which will affect how ERM is applied.

“Where insurers are in a better position is that they’ve traditionally been more conservative in their risk taking – particularly when you think about some of the extreme events that can impact the insurance industry,” he explains. “There is a tolerance around acceptance of exposure to extreme events and how to respond to them when they happen.”

“Modelling agencies that insurance companies use have over time been helpful, but have never been perfect in their ability to forecast the impact of events for insurance companies,” Myers adds. “Those insurers that were overly dependent on that type of tool were either lucky and dodged a storm – or they were unlucky, and so have had to rethink the use of models and how they are going to be applied.”

ERM is not just about the use of analytical models. These are an important component, particularly for assessing risk-based capital, but however sophisticated the models are, they need to feed easily into day-to-day decision making. Under Solvency II, the ‘use test’ examines whether a firm’s internal model is genuinely embedded in its risk management and decision-making processes. This includes having an appreciation of the uncertainties inherent within those models.

“Models are definitely a key component to an ERM process, but one common understanding, whether it’s looking at catastrophe events or some of the financial issues that plagued markets in 2008, is that models have inherent imperfections,” Myers says. “Understanding that there are limitations to models will help the user apply them appropriately.” GR
