Richard Clinton explains how using multiple models can reduce the financial impact of catastrophe modeling volatility
Given the impact that catastrophe losses have had on the insurance industry, it is not surprising that catastrophe models have become mission-critical tools and that the leading companies are moving towards the use of multiple models. There are a variety of reasons for this trend, but the overriding one seems to be that many executives are uncomfortable making such key financial decisions using a single model perspective.
However, another compelling reason for using more than one model has emerged: model volatility.
Models are subject to periodic change and 'improvement'. Sometimes the change is due to new science, and sometimes due to improvements in modeling methodology. These changes almost always impact the results produced by the model, which, depending on the extent of the change, can have a major impact on the model user's business. This became painfully clear last year when one of the modelers made changes in their model that increased probable maximum losses (PMLs) by 30% to 50% in many cases.
As a direct result of this model change, companies that relied exclusively on this modeling company were forced to change how they did business: they had to purchase more reinsurance, write less business, or both. While companies are free to switch models, for reasons of consistency and credibility they cannot easily change reference models from one year to the next. As a result, some CEOs and boards have decided never to be overly dependent on the eccentricities of any one modeling company.
Some companies have tried to mitigate the single-model problem by getting a second opinion from their brokers. While this is a step in the right direction, it is a simplistic answer to a fundamental business problem: it does not enable companies to understand which drivers are causing the differences in the loss estimates, or to capitalise on those differences.
In addition, it does not enable companies to use this information to set pricing, select risks, or mitigate the impact of modeling changes on their business. In other words, this approach only provides a snapshot view of the exposure, which is primarily intended to assist in the placement of reinsurance. As a result, it really does not adequately address most of the issues related to relying on only one model.
Therefore, the only effective way to take advantage of multiple models is to integrate their use into the ongoing catastrophe management practices through the combining and blending of model outputs. This can be done very effectively for both pricing and accumulation management. Such practices have been used for years in banking and finance where multiple models are used to manage the impact of currency, interest rate and economic fluctuations.
Blending model mean results such as expected annual loss (EAL) is fairly straightforward. One can simply apply a weighted average to the mean and variance of each model's EAL results. The weighting values are assigned based on the company's view of the relative credibility of each model.
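The weighted-average blend described above can be sketched in a few lines. The figures and the two-model setup below are purely illustrative assumptions, not actual model output:

```python
# Hypothetical credibility-weighted blend of per-model EAL results.
# The weights express the company's relative credibility view of each model.

def blend_eal(eals, variances, weights):
    """Blend per-model EAL means and variances with credibility weights.
    All arguments are parallel lists; the weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "credibility weights must sum to 1"
    blended_mean = sum(w * m for w, m in zip(weights, eals))
    blended_var = sum(w * v for w, v in zip(weights, variances))
    return blended_mean, blended_var

# Illustrative figures only: model A estimates $12m EAL, model B $18m,
# with 60/40 credibility assigned to A and B respectively.
mean, var = blend_eal([12.0e6, 18.0e6], [4.0e12, 9.0e12], [0.6, 0.4])
print(mean)  # blended EAL, approximately $14.4m
```

The same pattern extends to any number of models; only the credibility weights need revisiting when a model changes.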
However, combining multiple model results to price layers of risk or for accumulation management is a bit more difficult because it requires the development of a loss exceedance or exceedance probability curve (LEC/EP).
There are two approaches that can be used to develop combined LEC/EP curves.
LEC/EP simulation method
Under this approach, LEC/EP curves are developed for each model. Each model's curve should reflect the combined results for all perils and geographic areas covered in the portfolio of business being analysed or managed.
The individual model portfolio curves are based on each model's individual methodology for handling issues such as correlation. The next step is to assign a credibility weighting to each model. This credibility weighting is then used with simulation techniques to develop the combined LEC/EP.
The simulation part of the process first involves decomposing each model's LEC/EP curve into its severity and frequency components. A stratified sampling technique is then applied to the decomposed curves, in conjunction with variance reduction techniques if one wishes to focus on a particular segment of the curve's tail. The results produced through the sampling are then combined using the credibility weightings.
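A plain Monte Carlo sketch of this idea follows. Each simulated year adopts one model's frequency/severity view of the risk, chosen with probability equal to that model's credibility weight; the empirical distribution of simulated annual losses is the blended curve. The rates, severity distributions and weights are illustrative assumptions, and the stratified sampling and variance reduction refinements mentioned above are omitted for brevity:

```python
import math
import random

def poisson(rate):
    """Sample an annual event count (Knuth's method; fine for small rates)."""
    n, p, limit = 0, 1.0, math.exp(-rate)
    while True:
        p *= random.random()
        if p <= limit:
            return n
        n += 1

def simulate_annual_loss(rate, severity):
    """One simulated year under one model: Poisson count of events,
    each with an independently sampled severity."""
    return sum(severity() for _ in range(poisson(rate)))

def combined_ep(models, years=50_000, seed=7):
    """Each model is (annual_rate, severity_sampler, credibility_weight).
    Returns a function x -> blended annual probability that losses exceed x."""
    random.seed(seed)
    weights = [m[2] for m in models]
    losses = []
    for _ in range(years):
        rate, severity, _ = random.choices(models, weights=weights)[0]
        losses.append(simulate_annual_loss(rate, severity))
    return lambda x: sum(1 for loss in losses if loss > x) / years

# Illustrative: model A sees frequent smaller events, model B rarer large ones.
model_a = (0.20, lambda: random.expovariate(1 / 50e6), 0.6)   # mean $50m
model_b = (0.05, lambda: random.expovariate(1 / 400e6), 0.4)  # mean $400m
ep = combined_ep([model_a, model_b])
```

Here `ep(x)` can be read off at any attachment point to price a layer or test an accumulation limit against the blended view.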
The end result is a new combined LEC/EP curve reflecting the weighted aggregation of all the models. The combined curve can be used to price layers and manage accumulations. EQECAT has developed tools for constructing combined LEC/EP curves and has advised companies on the use of this methodology. The one disadvantage of this approach is that the individual identity of events is not visible to those managing accumulations using specific events (such as Lloyd's realistic disaster scenarios (RDS)). However, this disadvantage is easily overcome by running the necessary event output reports and combining the events with the same weighted average approach.
Combined event set outputs
Most models produce damage and gross loss output for each stochastic event in their probabilistic event sets. This 'event by event' output can be used to combine the results of multiple models. To do this, companies must first identify comparable events for each hazard source in each of the models to use as the basis of an 'apples to apples' combination. This requires mapping event IDs across the models to develop a common subset of similar events to be used for the aggregation. Once the common event subset is determined, companies can use weighting techniques to create a combined weighted average LEC/EP curve.
The primary disadvantage of this approach is that it requires the user to redo the event mapping process every time any modeler changes their event set. Also, the uncertainty associated with the loss given the event needs to be considered in the combination process. The advantage is that event identity is preserved for those managing accumulations for specific scenario events (such as Lloyd's RDSs).
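The event-mapping and weighting steps can be sketched as follows. The event IDs, rates, losses and 60/40 weighting are illustrative assumptions, and the loss-given-event uncertainty noted above is ignored for simplicity:

```python
# Hypothetical event-level blend of two models. Each model reports an
# (annual_rate, loss) pair per stochastic event ID; event_map pairs
# comparable events across the two event sets.

def blend_event_losses(model_a, model_b, event_map, w_a=0.5, w_b=0.5):
    """Credibility-weight the two loss estimates for each mapped event pair.
    Returns {a_id: (rate, blended_loss)}, keeping model A's event rates."""
    blended = {}
    for a_id, b_id in event_map.items():
        rate_a, loss_a = model_a[a_id]
        _, loss_b = model_b[b_id]
        blended[a_id] = (rate_a, w_a * loss_a + w_b * loss_b)
    return blended

def exceedance_frequency(events, threshold):
    """Sum the annual rates of events whose blended loss exceeds the
    threshold (a simple occurrence-basis EP approximation)."""
    return sum(rate for rate, loss in events.values() if loss > threshold)

# Illustrative data only
model_a = {"A1": (0.010, 500e6), "A2": (0.002, 2.0e9)}
model_b = {"B7": (0.012, 700e6), "B9": (0.0025, 1.6e9)}
event_map = {"A1": "B7", "A2": "B9"}
blended = blend_event_losses(model_a, model_b, event_map, 0.6, 0.4)
print(exceedance_frequency(blended, 1e9))  # annual frequency of losses > $1bn
```

Because the blended results remain keyed by event ID, specific scenario events can still be pulled out for accumulation reports, which is exactly the advantage this approach preserves.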
Selecting the model
If the decision is to go with all three models, then what to look for in the models becomes a moot point, because there really are only three viable vendors. At that point, the issue is how to efficiently integrate all of the models into the company's catastrophe management program, as discussed previously. The real issue, therefore, is what companies should look for in the second model if they decide to go with only two.
The essential element in selecting the second model should be the robustness of the underlying modeling methodology. In other words, the driving force in the selection process should be to find the most technologically robust model. The reason for this is that the first model was probably a compromise selection based on a combination of factors, such as ease of use, fit within the organisational or business structure, leading model at the time of selection (i.e., perceived safe choice), underlying technology, etc. While this may have been a good process for selecting the first model, the second or third time around, technology should be the overriding element in the decision.
Finally, companies should not be alarmed if the second model provides different answers from their current model. In fact, it can be argued that they should look for a model that provides answers that vary from the current model because it will provide them with a better perspective on the risk, though they should be sure that they have selected a model with strong underlying technology.
One of the first things that an underwriter learns is the importance of spread of risk, and the financial world certainly knows the importance of diversification in investing. Both of these axioms are based on managing risk and reducing volatility in results or earnings. Companies need to apply these same concepts to the management of their catastrophe exposure.
The use of multiple models enables this by providing the spread of risk and diversification that companies need to effectively manage their exposure.
At the same time, it minimises the impact that any one model can have on the business, resulting in lower overall volatility. Finally, it provides companies with a more comprehensive view of the exposure and better enables them to leverage their catastrophe management programs. Therefore, the companies that are only licensing one model should seriously consider licensing a second model and, if at all possible, license all three of the leading models.