Peter J. Kelly and Alexandra I. Zakak develop a loss-based model for catastrophe treaty reinsurance and conduct a simulation to analyse the characteristics of this alternative structure.

The catastrophe reinsurance treaty is a well-established mechanism for managing volatility in an insurer's balance sheet that could be even more useful if the common treaty structure were replaced with a simple alternative based on payment of a discrete number of losses. This alternative, called “loss-based treaty reinsurance,” performs surprisingly well given its simplicity.

The model developed to explore this alternative shows better recoveries in low-loss years and comparable recoveries in high-loss years. Combined with a portfolio stop-loss programme, loss-based treaty reinsurance could provide insurers with a superior mechanism for managing income statement volatility and downside loss protection. It could also shift responsibility for managing the reinsurer's exposure away from insurers, who have had to create remarkably complex programmes, and back to reinsurers, who can utilise share participations and retrocessional arrangements to manage that exposure themselves.

A market in transition
Before examining this alternative, consider for a moment the rich history of reinsurance treaties and the evolution of their unusual pricing conventions. Why does the reinsurance treaty contract require such complex negotiations? Why are there often many reinsurers participating on a single contract? Must it continue in this form to meet the objectives of all participants in a transaction? This re-examination of pricing conventions represents a real opportunity for improving the financing of risk.

Structural design of treaties
Historically, reinsurers who participated on treaties came to the transaction with experience in either reciprocal inter-company arrangements or facultative reinsurance arrangements. As such, they entered into contracts with an expectation that their liability was well defined and limited to known levels of risk. In the reciprocal arrangements, there was an implicit make-whole arrangement and in the facultative arrangements, there was an ability or “faculty” to decline risks that were not consistent with the risk appetite of the reinsurer.

Herein lies the genesis of the structural design for treaties. The insurer desired coverage for any and all properties and risks placed during the term of the contract to facilitate its operations. The reinsurer wanted a mechanism to provide capacity in ways that limited the exposure to known and manageable levels. This potential obstacle to doing business became the framework for conducting business - a treaty layering and participation scheme enabled reinsurers to provide capacity and restrict their liability to predefined limits and gave the insurer the ability to secure a treaty for a broad portfolio of changing risks. However, because a typical treaty may have from three to 10 layers, and the number of companies participating can easily exceed 100, its pricing and placement are difficult and time-consuming tasks.

For the highest layers, often the largest, there is the least historical loss activity, perhaps as few as one or two such losses for the insurance company over a 20-year period. How is a reinsurer to price these layers when there is little or no data? Overall, the process is fraught with extensive and potentially contentious negotiations. The result is very high frictional costs. Is there a better way to do this? Is there a treaty design that could reduce the volatility in the primary insurer's income statement without the complexity of a standard layer-based treaty?

A new approach to pricing reinsurance treaties

At the end of any year, an insurer can tell a reinsurer exactly which losses should have been reinsured, according to the terms and conditions of the policy. The problem is that the losses that need to be reinsured to achieve the insurer's objectives vary from one year to the next. In fact, the range of characteristics that describe the optimum losses to be covered is infinite. In some years, it might be the four largest losses in a $50 million layer above $25 million. In other years, it might be the two largest losses in a $200 million layer above $50 million.

There is, however, one constant quality about the losses that should be reinsured, and it is the basis for the treaty design we propose: the losses that need to be reinsured are always the largest ones. That simple fact underpins a new treaty design called loss-based reinsurance. Instead of buying a certain amount of coverage above a retention, an insurer would buy a treaty that pays for a discrete number of the largest losses, say three. Knowing that there are always at least three losses per year (in fact, there are orders of magnitude more), reinsurers would possess adequate information for pricing. Consider the following example:

In the first year, the largest three losses are $30 million, $28 million and $15 million. In this year, the pool of reinsurers pays a total of $73 million. In the second year, the largest three losses are $300 million, $150 million and $100 million. For this second year, the reinsurers pay $550 million. Note that in the second year, the insurer must pay 100% of all other losses, each less than $100 million, even though in the prior year the reinsurers paid all losses of $15 million and above.
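By way of illustration, the recovery calculation under such a loss-based treaty can be sketched in a few lines of Python. The smaller loss amounts added to each year are invented purely to show the insurer's retention; only the three largest figures come from the example above.

    # Sketch: recovery under a loss-based treaty that pays the n largest losses.
    def loss_based_recovery(losses, n_covered=3):
        """Return (reinsurer payment, insurer retention) for one treaty year."""
        largest = sorted(losses, reverse=True)[:n_covered]
        recovery = sum(largest)
        return recovery, sum(losses) - recovery

    # Year one: largest three losses of $30m, $28m and $15m -> reinsurers pay $73m.
    print(loss_based_recovery([30e6, 28e6, 15e6, 9e6, 4e6]))
    # Year two: largest three losses of $300m, $150m and $100m -> reinsurers pay $550m.
    print(loss_based_recovery([300e6, 150e6, 100e6, 60e6, 20e6]))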

While this loss-based treaty design is obviously simple and should be easy to market (and hence result in lower frictional costs), what about its effectiveness? Could such a simple structure provide the income statement improvement that the insurer is seeking? We designed a simulation to test such a programme.

Stochastic aspects of reinsurance
The pattern of losses, especially the largest in a given period, is truly unpredictable. Commercial property losses are especially prone to wide swings in frequency and severity due to rare but severe facility explosions, fires and natural disasters. These two types of losses, individual location or “risk” losses and natural disaster “catastrophe” losses, need to be carefully studied to fit a distribution to their historical pattern of occurrence.

Individual loss distributions are highly site dependent. Research by Arkwright Mutual Insurance Company (now FM Global) indicates that site loss distributions for commercial facilities can be well described with a beta distribution. Most beta loss distributions that describe risk losses have alpha parameters less than one and beta parameters between 10 and 40.
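As a hedged illustration only, a site-level loss of this kind might be sampled as below; the specific alpha and beta values and the insured value are placeholders of our own, not the Arkwright fit.

    import numpy as np

    rng = np.random.default_rng(0)

    # A site loss expressed as a fraction of insured value, drawn from a beta
    # distribution with alpha < 1 and beta between 10 and 40 (placeholder values).
    alpha, beta = 0.5, 25.0
    insured_value = 100e6                  # assumed $100m site
    site_loss = rng.beta(alpha, beta) * insured_value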

The distribution of catastrophe losses is portfolio specific and dependent upon the geographical distribution of the book of business. Generally, the distribution of the portfolio's catastrophe loss potential is a blend of the individual peril distributions, such as earthquake, flood, and hurricane. Because of the rarity and severity of such events, a discrete frequency distribution and some type of traditional, continuous severity distribution generally suffice to describe the exposure to natural disasters.
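One common way to express such a model, shown here only as a sketch with placeholder parameters, is a compound distribution: a discrete event count paired with a continuous severity per event.

    import numpy as np

    rng = np.random.default_rng(1)

    # Annual catastrophe loss as a compound frequency/severity draw.
    # The Poisson mean and lognormal parameters are placeholders, not fitted values.
    n_events = rng.poisson(lam=1.2)                                   # discrete frequency
    severities = rng.lognormal(mean=17.0, sigma=1.5, size=n_events)   # continuous severity ($)
    annual_cat_loss = severities.sum()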

Other sources of uncertainty in the area of losses and reinsurance include uncollectable reinsurance, salvage, subrogation, claims expenses, litigation and reinstatement premiums. These are meaningful and sometimes material components of the insurer's income statement, but they are beyond the scope of this study.

Experimental design
A commercial property insurer's portfolio may number from 20,000 to 100,000 insured locations. To simulate the exact portfolio would require development of a model that would generate at least four random components for each location: risk-loss frequency, risk-loss severity, catastrophe-loss frequency and catastrophe-loss severity. Even for a 20,000-location portfolio, this could be an enormous undertaking.

With respect to risk losses, a shortcut is appropriate in the interest of efficiency. Since commercial customers generally maintain a substantial deductible, the frequency of large losses in a given year is very low. As a result, a representative portfolio can be constructed and the frequency distribution adjusted to achieve the appropriate number of losses. A typical severity distribution can then be used to determine loss size.

For catastrophe losses, a similar shortcut can be used with the same representative portfolio. An event frequency distribution is first developed. Then for each event, a location count distribution (a form of frequency) is developed. Finally, for each location involved in a catastrophe loss, a severity distribution is used to generate the loss.
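That three-stage shortcut might look like the following sketch; the distributions and parameters here are our own placeholders rather than those used in the study.

    import numpy as np

    def simulate_cat_year(rng):
        """One year of catastrophe losses: event frequency, then a location count
        per event, then a severity per affected location (placeholder parameters)."""
        total = 0.0
        for _ in range(rng.poisson(lam=1.0)):          # events in the year
            n_locations = 1 + rng.poisson(lam=4.0)     # locations hit by the event
            total += rng.lognormal(mean=15.5, sigma=1.2, size=n_locations).sum()
        return total

    rng = np.random.default_rng(2)
    print(simulate_cat_year(rng))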

In our experiment, we used a portfolio of 100 locations, ranging from $1 million to $200 million in insured values. Risk losses were designed to occur at 18% of the locations, plus a percentage drawn from a random normal distribution with mean 30% and standard deviation 10%. For those locations with risk losses, the severity was drawn from a random exponential distribution with alpha equal to 10% of the location's insured value. We added this value to a non-simulated base level of $400 million to produce total risk losses.
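A minimal sketch of one iteration of that risk-loss simulation follows. It assumes the exponential "alpha" is the distribution mean, that the normal draw is added to the 18% base rate, and that insured values are spread evenly between $1 million and $200 million; all three are our readings of the description above, not the original model.

    import numpy as np

    N_LOCATIONS = 100
    BASE_RISK_LOSSES = 400e6                                 # non-simulated base level ($)
    insured_values = np.linspace(1e6, 200e6, N_LOCATIONS)    # assumed even spread of values

    def simulate_risk_year(rng):
        """One simulated year of risk losses, as we read the stated parameters."""
        # Fraction of locations with a risk loss: 18% plus a Normal(30%, 10%) draw.
        frac = min(max(0.18 + rng.normal(loc=0.30, scale=0.10), 0.0), 1.0)
        hit = rng.random(N_LOCATIONS) < frac
        # Severity: exponential with mean ("alpha") of 10% of each insured value.
        severities = rng.exponential(scale=0.10 * insured_values)
        return BASE_RISK_LOSSES + (severities * hit).sum()

    rng = np.random.default_rng(3)
    print(simulate_risk_year(rng))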

The distribution of locations was chosen to be representative of locations insured by large commercial property insurers, as measured by the profile of locations historically insured by Arkwright (verified using proprietary data). Additionally, the loss distributions were chosen so that the profile of large losses was representative of the gross large-loss pattern seen by large commercial insurers (again, verified using Arkwright historical experience).

Empirical results
Gross losses form the basis for the analysis, since recoveries can only occur if there is a gross loss. The run of the simulation with 10,000 iterations yielded the results depicted in the accompanying table, in the columns marked Risk losses, Catastrophe losses and Total losses.

Note several things about this output. First, overall, the most severe years can be more than 60% worse than the mean year. Second, the best years are catastrophe-loss free, so in many years much of the purchased traditional protection will go unutilised. Nonetheless, the amount of protection needed to stabilise earnings is significant. Perhaps the occurrence of high-loss years demanding very significant protection is the rationale for the complex structures of today's reinsurance.

The second phase of the experiment addresses that question using two reinsurance structures. The first, a traditional layer structure, is defined by the following terms:
Loss attachment: $10 million
Capacity: $190 million
Coinsurance: 10%
Reinstatements: unlimited
This structure was chosen to give the maximum benefit to the traditional layer structure and to highlight the finite-occurrence nature of the loss-based alternative. In reality, traditional approaches significantly limit reinstatements for catastrophe-loss capacity, and most programmes attach higher than this or provide lower capacity.
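For clarity, the per-loss recovery under such a layer can be sketched as follows. We read the 10% coinsurance as the insurer retaining 10% of each layered amount, and unlimited reinstatements as every qualifying loss having full access to the layer.

    def layer_recovery(losses, attachment=10e6, capacity=190e6, coinsurance=0.10):
        """Sketch: recoveries under a per-loss layer with unlimited reinstatements."""
        recovery = 0.0
        for loss in losses:
            layered = min(max(loss - attachment, 0.0), capacity)   # amount falling in the layer
            recovery += (1.0 - coinsurance) * layered              # reinsurers pay 90% of the layer
        return recovery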

The loss-based structure is defined as follows:
Losses paid: 2
Attachment: $1
Capacity: infinite
Reinstatements: none

This structure was chosen to demonstrate best the differences between the two approaches. In reality, the loss attachment would probably be at a point in excess of some retention, although this could be done through notional funds-held accounting or claims performance bonuses. The results of the 10,000-iteration programme using these two structures appear in the accompanying table in their respective “net-loss” columns.
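Putting the two structures side by side, the net-loss calculation for a single simulated year might be sketched as below; the gross losses shown are invented for illustration, and the terms mirror the two structures defined above.

    def net_losses(gross, n_covered=2, attachment=10e6, capacity=190e6, coins=0.10):
        """Sketch: net loss for one year under the loss-based and layer structures."""
        ordered = sorted(gross, reverse=True)
        loss_based_net = sum(gross) - sum(ordered[:n_covered])
        layer_recovery = sum((1.0 - coins) * min(max(x - attachment, 0.0), capacity)
                             for x in gross)
        return loss_based_net, sum(gross) - layer_recovery

    # An illustrative (invented) year of gross losses, in dollars.
    print(net_losses([300e6, 150e6, 100e6, 60e6, 20e6]))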

What is the cost of adopting the simplified version of the reinsurance? Even before starting the analysis, it is noteworthy that traditional layer-based reinsurance is expensive to design, document, market and settle. The loss-based reinsurance alternative has enormous advantages in ease of administration that should lead to lower frictional costs, although those costs are excluded from this study.

More to the point, the loss-based reinsurance has clearly achieved some important goals. First (and by experimental design), it achieves essentially the same improvement to the overall mean amount of loss that is recovered. In both cases, the mean improvements are approximately $100 million.

Another important observation is that loss-based reinsurance outperforms traditional layer-based structures in good years, as seen in the lower minimum and lower 5th percentile observations. Upon reflection, this makes sense. Loss-based reinsurance will always pay out the largest x losses in every year, but layer-based reinsurance has a high likelihood of not paying out substantial amounts in years when gross losses are low.

Proponents of traditional structures may argue that this is as it should be, since reinsurance is expected to respond to fortuitous loss events and not meant as a financial guarantee. A counterpoint is that from the buyer's perspective, the goal is not to pay few losses in good years; rather, it is to smooth financial results and cap downside exposure in the worst years.

How then does loss-based reinsurance do with these two objectives: smoothing and capping? Surprisingly, the programme does a good job, though not quite as good as traditional reinsurance with its unlimited reinstatements. Even in a simulation as large as this one, the loss-based reinsurance still provides $272 million of recoveries versus $367 million from the traditional programme. The key observation here is that the worst losses are always very few in number, so the loss-based design conforms very well to the frequency and severity characteristics of the primary insurer's reinsurance needs.

The result of the shortfall in recoveries in the most extreme years is a greater standard deviation of losses (and volatility of overall return) when using loss-based reinsurance. Loss-based reinsurance still provides an improvement and results in smoothing ($62.4 million net versus $92.8 million gross), but it does not achieve the same level of smoothing as the traditional programme ($46.6 million net versus $92.8 million gross). The assumption of unlimited reinstatements gives the traditional programme an obvious and unrealistic advantage here.

Loss-based treaty reinsurance: why not?
The loss-based reinsurance programme performs surprisingly well given its simplicity. Our experiment was designed to give maximum benefit to traditional programmes by allowing for unlimited reinstatements. A structure with equal mean recoveries provided better recovery performance in years with gross losses below the median, comparable recoveries in most years with losses above the median, and significantly weaker performance only in the most extreme years. With a limited number of reinstatements, as is generally the case, the traditional programme would likely perform worse in extreme years than the loss-based programme. Further study could explore the impact of relaxing the reinstatement assumption in the traditional programme and of developing a base multi-loss retention for the loss-based programme.

If a company using the loss-based reinsurance product were concerned about the downside risk of an extreme year with multiple large losses, it could purchase high-attachment stop-loss portfolio reinsurance, for example 100% excess of a 250% net loss ratio. Given the tremendous improvement in design that the loss-based programme has over the traditional product, it is conceivable that savings from lower frictional costs could offset the cost of this stop-loss cover.
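A sketch of how such a stop-loss recovery might be computed is shown below. It reads "100% excess of a 250% net loss ratio" as a limit of 100 loss-ratio points attaching at 250 loss-ratio points, with the loss ratio taken against net earned premium; the premium figure in the usage line is invented.

    def stop_loss_recovery(net_losses, net_earned_premium,
                           attach_ratio=2.50, limit_ratio=1.00):
        """Sketch: portfolio stop-loss paying 100 loss-ratio points
        excess of a 250% net loss ratio."""
        attach = attach_ratio * net_earned_premium
        limit = limit_ratio * net_earned_premium
        return min(max(net_losses - attach, 0.0), limit)

    # Example with an invented $200m net earned premium and $600m of net losses.
    print(stop_loss_recovery(600e6, 200e6))    # pays $100m above the $500m attachment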

The loss-based treaty, therefore, provides a comparable level of protection with minimal complexity and, from the buyer's perspective, warrants serious consideration as a substitute for traditional protection. Should reinsurers wish to continue using layers to mitigate their risk (versus shares), they can do so through retrocessional reinsurance placements and relieve the insurer of the burden of mitigating that risk, much as the original customer does with the insurance company. The loss-based approach could prove to be an important tool for the insurer's chief underwriter in portfolio management.

When this article was prepared, Peter J. Kelly was vice president and manager, underwriting research and development, at Arkwright Mutual Insurance Company (now FM Global) and Alexandra I. Zakak was assistant vice president, business development, Asia at Arkwright. Both are now with other companies.