Maria Kielmas looks at the use of credit risk models in business.
One of the most enduring lessons business has learnt from the recent rounds of corporate bankruptcies and debt defaults is the need to identify a potential disaster long before it happens. However, it is not enough to monitor the fortunes or otherwise of an individual company or bond issue; it is also vital to understand the effect a default on one item within a company's business portfolio will have on its other holdings, the effect on other companies in the same business sector, and the knock-on effects within a changing macroeconomic climate. Until very recently, many companies viewed credit risk as something that could be determined by the appropriate symbols issued by credit ratings agencies. This is no longer the case. The agencies are still reeling from their failure to predict spectacular bankruptcies such as Enron and WorldCom, and perhaps even, as in the case of Enron, from having precipitated them.
The other traditional strategy for tackling credit risk has been to treat it as the ultimate hot potato. Those who could best assess it - usually the banks - passed it on to those least able to price it, often insurance and reinsurance companies. But a series of events, from the US savings and loan crisis in the 1980s, through the Asian financial crisis of 1997, to the Russian default in 1998, alerted international financial regulators to the need to enforce credit risk management on the banking and re/insurance sectors, and to quantify the risks not just at the inception of any deal but throughout the entire period of exposure.
Imminent changes in bank regulation under the New Basel Capital Accord, and in the regulation of insurance and reinsurance companies under the European Commission's Solvency I and II proposals (see sidebar), have given a significant impetus to the development of quantitative credit risk models. According to Bernhard Kaufmann, senior consultant at the financing management and consulting department at Munich Re, these regulatory changes will also mean a move towards greater transparency in the financial sector.
Computational tools for modeling a diverse range of phenomena, from movements in the earth and atmosphere to movements in the financial markets, have become a must-have throughout business. Scientists and engineers originally developed such models as a trial-and-error method of illustrating reality; the financial world hopes to use them to predict the future. But however sophisticated the mathematics employed, such models provide projections of the future, not predictions, and are tools which operate on the 'garbage in, garbage out' principle. According to Kay Giesecke, post-doctoral fellow at the School of Operations Research at Cornell University, Ithaca, New York, it is important to recognise these limitations. One of the most frequent, though incorrect, assumptions made by modelers is that the analyst has complete information on any event. Michel Dacorogna, head of financial analysis and risk modeling at reinsurer Converium in Zurich, equally stresses that such tools are mere instruments, and need to be employed in conjunction with professional expertise.
A further fundamental problem is the distinction between credit risk and market risk. Giesecke notes that credit risk positions can be illiquid, such as bank loans, and their market prices are not easily determined. In addition, bankruptcy or debt default is an extreme event - a catastrophe. Historical data on such events are sparse and tend to be specific to a particular contract. Default rates for securities rated, say, BBB- can vary from 1% in one year to 10% the next. In addition, for financial institutions holding a multitude of different positions, the aggregate rather than any individual risk is significant. But this risk aggregation depends on the interactions between the risks within a business portfolio. A risk model must capture this dependence through a variety of mechanisms in order to identify any form of relationship which could trigger a default.
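One standard way to illustrate why this dependence matters is a one-factor latent-variable simulation, in which each obligor defaults when a variable driven partly by a shared systematic factor falls below a threshold. The sketch below is a generic textbook device, not any vendor's model, and all parameters are hypothetical.

```python
import random
from statistics import NormalDist

def simulate_portfolio_defaults(n_names, pd_each, correlation, n_trials, seed=0):
    """Simulate default counts in a homogeneous portfolio.

    Each obligor defaults when a latent variable - part shared systematic
    factor, part idiosyncratic noise - falls below the threshold implied
    by its standalone default probability. Higher correlation leaves the
    expected loss unchanged but fattens the tail of the loss distribution.
    """
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(pd_each)
    losses = []
    for _ in range(n_trials):
        systematic = rng.gauss(0.0, 1.0)
        defaults = 0
        for _ in range(n_names):
            latent = (correlation ** 0.5) * systematic \
                     + ((1.0 - correlation) ** 0.5) * rng.gauss(0.0, 1.0)
            if latent < threshold:
                defaults += 1
        losses.append(defaults)
    return losses

# Hypothetical portfolio: 50 names, 2% standalone default probability each
losses = simulate_portfolio_defaults(50, 0.02, correlation=0.3, n_trials=2000)
```

Running the same portfolio with zero correlation gives roughly the same average number of defaults but a much narrower spread, which is exactly the aggregation effect the article describes.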
Tools of the trade
Academics in the field of mathematical finance have developed two distinct types of quantitative credit risk models over the past 30 years, both based on the concept of option pricing pioneered by US economists Fischer Black and Myron Scholes.
The structural approach, first devised by US economist Robert Merton in 1974, makes explicit assumptions about the evolution of any individual company's assets, its capital structure, its liabilities and its shareholders. It uses publicly available information and provides a cause-and-effect picture of default. The assumption is that a firm will default when its assets are insufficient to pay off its debts, or that default will occur when its asset value falls below a pre-determined threshold. The problem here is that the model assumes that interest rates are constant, a legacy of the original option pricing model on which it is based. The model is also built on the assumption that default can be anticipated, something which does not reflect economic reality. Defaults are usually sudden and unanticipated.
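As a concrete illustration of the structural approach, the sketch below computes a risk-neutral default probability in the original Merton setting, where interest rates are constant and default can occur only when the debt matures. The firm's figures are hypothetical.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default_probability(assets, debt, rate, asset_vol, horizon):
    """Merton (1974): default occurs if the firm's asset value, assumed to
    follow geometric Brownian motion, ends below the face value of its
    debt at the horizon. Returns the risk-neutral default probability."""
    d2 = (log(assets / debt) + (rate - 0.5 * asset_vol ** 2) * horizon) \
         / (asset_vol * sqrt(horizon))
    return norm_cdf(-d2)

# Hypothetical firm: assets 120, debt 100, 5% rate, 25% asset volatility, 1 year
pd_1y = merton_default_probability(120.0, 100.0, 0.05, 0.25, 1.0)
```

Because asset value evolves continuously in this setting, the market can in principle see default coming, which is precisely the criticism noted above.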
The reduced form approach was pioneered by another US economist, Robert Jarrow, in the 1990s to capture the fact that default is unexpected and that there is a dynamic relationship between a company's capital structure and its financial condition. This model has been compared with the actuarial approach in insurance. The term 'reduced form' refers to the model's ability to reduce the complex mechanics of default to a simple expression. The model assumes from the outset that information on the company is incomplete. It employs credit spreads (the excess yield demanded by bond investors for bearing the risk) to model the random process underlying default.
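The reduced form idea can be illustrated with the standard back-of-the-envelope relation that the credit spread roughly equals the default intensity times the loss given default. The sketch below, using a hypothetical bond and an assumed recovery rate, backs out a constant intensity from an observed spread; it is a textbook simplification, not the full Jarrow model.

```python
from math import exp

def implied_hazard_rate(credit_spread, recovery_rate):
    """Back out a constant default intensity from an observed credit
    spread using the approximation: spread = intensity * (1 - recovery)."""
    return credit_spread / (1.0 - recovery_rate)

def default_probability(hazard_rate, horizon):
    """Default before the horizon is the first jump of a Poisson process
    with the given intensity, so survival decays exponentially."""
    return 1.0 - exp(-hazard_rate * horizon)

# Hypothetical bond: 200bp credit spread, 40% assumed recovery on default
lam = implied_hazard_rate(0.02, 0.40)
pd_5y = default_probability(lam, 5.0)
```

Note that default here is a surprise by construction - the jump of a random process - which is what distinguishes this approach from the structural one.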
Most analysts agree that the credit profiles of the majority of publicly quoted companies can be best modeled using a structural approach. The reduced form approach is usually best suited for large corporations with very liquid public debt, but in the case of privately held companies, credit quality may be best assessed using financial ratios and macroeconomic factors.
The commercial application of credit risk modeling was first developed by investment bank JP Morgan through its then financial analytics division, RiskMetrics. In 1997 it issued its CreditMetrics package, based on the structural approach, which evaluated risk across an entire corporation. Last year RiskMetrics, now an independent New York-based company, together with JP Morgan, Deutsche Bank and Goldman Sachs, created an online service called CreditGrades, which analyses default probabilities using data from equity markets. This model effectively estimates how much it will cost any company to buy default protection. However, for all of its pioneering work, JP Morgan has not been immune from credit risk management mishaps. The bank's executives admitted in early May that its procedures were insufficient to prevent it from becoming overexposed in industries such as telecoms, and that it would have to change its credit risk management and introduce a new risk model over the next two to three years.
Credit rating agencies which hold vast quantities of historical data either have developed some of the best known modeling products or have bought up financial analytics companies specialising in this market. San Francisco-based KMV Corp developed another structural model based on the work of economists Stephen Kealhofer, Andrew McQuown and Oldrich Vasicek, which estimates default probabilities by determining a so-called expected default frequency (EDF). This tool uses share prices, market volatility and debt levels to estimate the likelihood of a company defaulting on debt payments. Re-runs of this tool show that it indicated the likelihood of an Enron default more than six months before the event. A version of the KMV model is also being used as a credit risk management guideline in the Basel II accord.
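A much-simplified sketch of the distance-to-default calculation behind an EDF follows. The default point convention (short-term debt plus half of long-term debt) is the commonly cited KMV rule of thumb, the Gaussian mapping at the end stands in for Moody's KMV's proprietary empirical mapping built from historical default data, and all figures are hypothetical.

```python
from math import log, sqrt, erf

def distance_to_default(asset_value, asset_vol, short_term_debt,
                        long_term_debt, horizon=1.0):
    """Number of asset-volatility standard deviations separating the
    firm's asset value from its default point, taken here as short-term
    debt plus half of long-term debt (the KMV rule of thumb)."""
    default_point = short_term_debt + 0.5 * long_term_debt
    return (log(asset_value / default_point)
            - 0.5 * asset_vol ** 2 * horizon) / (asset_vol * sqrt(horizon))

def expected_default_frequency(dd):
    """Map distance to default to a default probability. A Gaussian tail
    is used here purely for illustration; the commercial tool uses an
    empirical mapping calibrated to observed defaults."""
    return 0.5 * (1.0 + erf(-dd / sqrt(2.0)))

# Hypothetical firm: assets 150, 20% asset volatility,
# 40 of short-term debt and 60 of long-term debt
dd = distance_to_default(150.0, 0.20, 40.0, 60.0)
edf = expected_default_frequency(dd)
```

The inputs - share-price-implied asset value, market volatility and debt levels - mirror those the article lists for the KMV tool.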
Last year, Moody's credit rating agency bought KMV Corp to create Moody's KMV. This company aims to use all its in-house historical default data and acquisitions of new data to expand its credit modeling capabilities internationally. But analysts in the credit rating agencies still worry about the validity of using historical data to forecast the future. They believe that structural models using data from the equity markets are fundamentally unreliable because these markets have been over-hyped and manipulated over the past 15 years. So while the models are good tools, human judgment is still a necessary input. Credit risk modelers counter that the rating agencies have consistently failed to predict defaults three to 36 months in advance, the very period within which the models have performed better.
The challenge for credit risk modelers now is to combine the rigour of the structural approach with the realism of the reduced form approach to create a flexible, so-called 'hybrid' model. Such models acknowledge that a user does not have access to inside information on any company and that publicly available information could be misleading. So the user is able to calibrate the model according to his own level of confidence in any firm's performance. One of the pioneers of this hybrid approach has been Honolulu-based Kamakura Corp, a risk management software vendor, where Robert Jarrow is a director. Kamakura's core idea is that it is possible to gain a greater insight into a company if a number of different quantitative models are employed to tackle the same problem.
Munich Re has developed its own credit modeling expertise. "Our modeling capacity is very specific. We like to do as much on our own as we can," notes Kaufmann. He adds that on the credit insurance side, Munich Re's in-house knowledge and internal data are better than what is typically provided in the market by credit rating agencies and risk consultants. One result of Munich Re's approach, he comments, is that it has avoided the trap of overexposure to credit derivatives into which other reinsurers have fallen.
New capital adequacy standards for financial institutions
The proposed new Basel Capital Accord on international banking standards, known as Basel II, is due to be announced by the end of 2003, with implementation by 2006. This accord will replace the current 1988 agreement and consists of a detailed set of rules comprising three 'pillars'. The first pillar retains the current 8% risk-weighted capital charge but banks will also have to set aside funds to cover potential losses from fraud, malpractice, terrorism and technology malfunction. The second pillar obliges banks to prove the effectiveness of their risk management procedures. They can do this in two ways, either by adopting a standard approach to credit risk management or by using an internal model which satisfies the regulators. The third pillar aims to improve market discipline, calling for more disclosure about risk evaluation and hence capital adequacy.
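The first pillar's headline requirement reduces to simple arithmetic, sketched below with hypothetical figures.

```python
def minimum_capital(risk_weighted_assets, charge=0.08):
    """Pillar 1 of Basel II retains the 1988 rule that regulatory capital
    must be at least 8% of a bank's risk-weighted assets; the risk
    weights themselves depend on the chosen credit risk approach."""
    return charge * risk_weighted_assets

# e.g. a hypothetical 100m exposure at a 100% risk weight
required = minimum_capital(100.0)
```

Under the internal-model route described in the second pillar, it is the risk weights feeding into this calculation, rather than the 8% charge itself, that a bank's own credit risk model would influence.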
The European Commission (EC) began a fundamental review of the financial position of the re/insurance industry in early 2000. The Solvency I directives were published in October 2000 and entered into force in March 2002, and must be applied to accounts for the financial year beginning 1 January 2004. The Solvency II project addresses areas such as the regulations covering assets and liabilities, reinsurance and risk aggregation. The aim is to create a new system which will provide regulators with appropriate qualitative and quantitative tools to assess the overall solvency of an insurance or reinsurance undertaking. It is taking as its starting point the Basel II three pillar structure and should build on a risk-oriented approach which will encourage re/insurers to measure and manage their risks.
Discussions between the EC and the re/insurance industry are ongoing. Munich Re's Bernhard Kaufmann expects Solvency II to create a stimulus to develop and implement internal models in the insurance industry in the same way as Basel II has done for credit risk models in banking. He also says there is a clear objective that insurance regulators will follow bank regulators in providing a strong incentive in the form of capital relief for those companies which develop internal models exceeding the standard approaches stipulated in any future regulations.
By Maria Kielmas
Maria Kielmas is a freelance journalist and consultant.