Terrorism modeling is in its infancy, but growing quickly, says Richard Clinton.

"There is a consensus within the government and among private scholars that terrorism will be with us for a long time. The approach and kind of specific attacks will constantly change as defensive measures are taken to meet them. The United States will be a principal target of this assault... experts expect that the most terror-prone regions will continue to be Western Europe, the Middle East and Latin America."

Source: The Inman Report - Report of the (US) Secretary of State's Advisory Panel on Overseas Security.

The events of last September revealed that every country is vulnerable to terrorist attacks and demonstrated the severe devastation that a major attack can cause. Unfortunately, as the Inman Report indicates, this exposure will be with us for many years to come. Therefore, the insurance industry needs to find a long-term solution for managing its terrorism exposure. The approach so far has been to exclude the exposure, but this does not work in every line of business or in every country. Also, as with any crisis, opportunities are created for the innovators and risk takers. Therefore, if the exposure can be modeled you can count on these people to use it to find a creative and potentially profitable solution to this problem.

A basic principle of insurance is that if a risk can be quantified, it can be priced; and if it can be priced, it can be insured. The question is: can the terrorism exposure be realistically modeled?

Every peril has unique modeling challenges, but terrorism may have more than most. The singular challenges associated with modeling terrorism exposure are estimating the frequency of events, understanding the characteristics of the hazard, and developing certain vulnerability and loss functions. Estimating the frequency of events is particularly important since terrorism, perhaps more than any other peril, should be managed probabilistically. Also, a probabilistic model is required to properly price the exposure.

Frequency-severity assumptions
Development of frequency-severity estimates is part history, part science and part art. The historical piece involves analyzing terrorism statistics, which have been gathered for years by various agencies and are readily available to everyone. Frequency-severity assumptions can then be developed from these statistics. The problem with historical records is that they are composed mainly of small events (principally bombs) and do not reflect the larger events that most experts believe can and may well occur. Hence, historical data alone is not necessarily a good indicator of the future, especially since the game changed on September 11.

Science plays an important part in developing frequency assumptions since it can help in allocating the frequencies to the various weapons that may be used by terrorists. It does this by providing information on the destructiveness of the various weapons and the relative difficulty of obtaining, creating and deploying them. Given current terrorist operational goals, more destructive and easily obtainable weapons should have a higher frequency of occurrence assumption in the model. Game theory can also be used to help assign frequencies based on terrorist and counter-terrorist strategies. The art is the process of gathering and synthesizing the opinions of various terrorism experts into frequency assumptions that enhance the historical pattern and reflect current scientific realities. The modeling companies are working with experts in the field to develop the frequency-severity assumptions used in the models; at EQECAT we have worked with both internal and external experts to develop what we believe are reasonable and realistic frequency assumptions. The final methodology and assumptions have also been reviewed by outside experts, who are in general agreement that the approach and assumptions are reasonable.
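The allocation idea described above can be sketched in a few lines. This is an illustrative toy only: the weapon categories, accessibility and attractiveness scores, and the overall attack rate are hypothetical placeholders, not EQECAT's actual assumptions.

```python
# Hypothetical sketch: allocate an assumed overall attack rate across
# weapon types using relative accessibility and attractiveness scores.

# Relative ease of obtaining, creating and deploying (higher = easier)
accessibility = {"conventional_bomb": 0.9, "chemical": 0.4,
                 "dirty_bomb": 0.3, "small_nuclear": 0.05}

# Expert-judged attractiveness given destructiveness (higher = more attractive)
attractiveness = {"conventional_bomb": 0.5, "chemical": 0.7,
                  "dirty_bomb": 0.8, "small_nuclear": 1.0}

annual_attack_rate = 2.0  # assumed overall frequency of attempted attacks/yr

# Combine the two scores and normalize into per-weapon annual frequencies
raw = {w: accessibility[w] * attractiveness[w] for w in accessibility}
total = sum(raw.values())
frequency = {w: annual_attack_rate * s / total for w, s in raw.items()}
```

A real model would blend such relative weights with the historical record and expert elicitation rather than a simple product of two scores.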

Hazard characterization
As the Inman Report indicates, the list of potential weapons is extensive and constantly changing. Therefore, modeling companies need to select a representative list of weapons to model. The key to selecting this group of weapons is to identify those that are most likely to be used by terrorists and that will produce results similar to other weapons in the same category. To be complete, the representative weapon groups should include chemical and biological weapons, conventional bombs, dirty bombs, small nuclear bombs and aircraft.

The most difficult part of the hazard characterization is quantification of the impact of the attack. This process involves creating a series of footprints for each weapon, with severity contours that properly reflect the area impacted. Since this is a highly specialized area that is critical to the modeling process, EQECAT drew on the experience and expertise of its parent organization, ABS Consulting, to jointly develop the footprints.

For the chemical, biological, nuclear and radiological (CBNR) weapons, ABS's MIDAS-AT software was used to develop the footprints. The MIDAS-AT model takes into consideration the impact of wind conditions and cityscapes to develop outdoor dispersion patterns. It also models dispersion flows and rates inside buildings. The software has passed several independent tests for accuracy of results and has been licensed for many years by the US Marines and other governmental agencies to combat terrorism.

For the blast footprints, ABS's Blast software was used. This software takes into consideration the impact of building height on the pressure wave dispersion for each bomb type and size. It was developed by ABS's blast analysis and design group in San Antonio, TX and is used in modeling explosion mitigation designs that reduce the risk of damage to both properties and occupants. This software has been used to help both government agencies and individual companies assess and manage their exposure to various types of blast (conventional and non-conventional).
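To illustrate what a blast footprint with severity contours looks like in the simplest case, the sketch below uses textbook Hopkinson-Cranz cube-root scaling. The decay exponent, constant and overpressure thresholds are invented for illustration; they are not the validated curves in ABS's Blast software, which also accounts for building height and cityscape effects that this free-field sketch ignores.

```python
# Illustrative free-field blast footprint using cube-root (Hopkinson-Cranz)
# scaling. Constants are placeholders, not a validated engineering model.

def scaled_distance(r_m, charge_kg):
    """Scaled distance Z = R / W^(1/3), in m/kg^(1/3)."""
    return r_m / charge_kg ** (1 / 3)

def peak_overpressure_kpa(r_m, charge_kg):
    """Rough far-field peak overpressure estimate (illustrative fit)."""
    z = scaled_distance(r_m, charge_kg)
    return 700.0 / z ** 1.5

def severity_contours(charge_kg, thresholds_kpa=(35.0, 15.0, 5.0)):
    """Radius (m) at which each overpressure threshold is reached,
    giving concentric severity contours for the footprint."""
    radii = []
    for p in thresholds_kpa:
        z = (700.0 / p) ** (1 / 1.5)   # invert the decay relation
        radii.append(z * charge_kg ** (1 / 3))
    return radii
```

The key behavior the sketch captures is that contour radii grow with the cube root of charge size, so doubling the charge enlarges the footprint far less than it doubles it.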

Vulnerability and loss functions
The vulnerability functions used to model terrorist events consist of some that are relatively well established and others that had to be specifically created or adapted to handle the unique requirements of the insurance industry. For example, many of the CBNR agent toxicity characteristics are well-established and it is a fairly straightforward process to develop appropriate vulnerability functions for expected death rates. However, the number and types of non-fatal injuries are more complicated and require expert opinion to develop. Also, since the toxicity level is usually associated with length and concentration of exposure, it is important to have a very good dispersion model to ensure that the vulnerability functions properly model the exposure. Another difficult area to model is the potential business interruption loss due to the clean-up time and cost for CBNR agents. The vulnerability functions for this loss are also largely based on expert opinion and intimate knowledge of the characteristics of the agents involved.
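The dose-response side of a CBNR vulnerability function is often expressed as a probit model, in which lethality depends on both concentration and exposure time, as the paragraph above notes. The sketch below shows the general form; the coefficients are hypothetical placeholders, not calibrated values for any real agent.

```python
# Sketch of a probit dose-response vulnerability function for a toxic agent.
# Coefficients a, b, n are hypothetical, not calibrated to any real agent.
from math import erf, log, sqrt

def lethality_fraction(conc_mg_m3, minutes, a=-9.0, b=1.0, n=2.0):
    """Probability of death given concentration and exposure time.
    Toxic load = C^n * t; probit Y = a + b*ln(load); P = Phi(Y - 5)."""
    load = conc_mg_m3 ** n * minutes
    y = a + b * log(load)
    # Standard normal CDF evaluated at (Y - 5)
    return 0.5 * (1.0 + erf((y - 5.0) / sqrt(2.0)))
```

Because the toxic load term couples concentration and duration, the quality of the dispersion model directly determines the quality of the casualty estimate, which is the point made above.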

Blast, on the other hand, is a physical phenomenon that is well understood by engineers who specialize in this area. They are able to develop very good vulnerability functions for the damage to buildings and the potential business interruption exposures. The area that is not as well-defined is estimation of the number of deaths and other types of injuries caused by the blast. This is another area that relies heavily on expert opinion.

Based on the hazard and vulnerability functions, the terrorism model develops damage and death/injury rates. The model must then convert the damage and injury rates into insured loss estimates. The loss estimates for property and business interruption losses are developed using the policy conditions and are very similar to the methodology used for the other perils. The difficult part of this process is developing the loss estimates for deaths and injuries, because the loss estimation requires knowledge of both the types of injuries that will occur and the payout patterns, which vary significantly by region. As an example, the benefits for workers' compensation in the US vary state by state and, as a result, the payout patterns vary as well.
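For the property piece, applying policy conditions to a modeled damage ratio is mechanically simple; the sketch below shows the minimal case of a flat deductible and a policy limit. The figures are invented for illustration, and real policies add layers, sublimits, coinsurance and reinsurance terms on top of this.

```python
# Minimal sketch: convert a modeled damage ratio into an insured property
# loss under a flat deductible and a policy limit. Figures are illustrative.

def insured_loss(building_value, damage_ratio, deductible, limit):
    """Ground-up loss less deductible, capped at the policy limit."""
    ground_up = building_value * damage_ratio
    return min(max(ground_up - deductible, 0.0), limit)

loss = insured_loss(building_value=50_000_000, damage_ratio=0.40,
                    deductible=1_000_000, limit=15_000_000)
# ground-up of 20m, less the 1m deductible, is capped at the 15m limit
```

The hard part the paragraph identifies, casualty payouts, has no such simple formula, since benefit levels and payout patterns (e.g. US workers' compensation) vary by jurisdiction and injury type.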

Terrorism modeling process
The terrorism modeling process is, in reality, very similar to modeling other catastrophe perils. It consists of quantifying the hazard, overlaying the hazard on the exposure, determining the damage/injury based on the vulnerability of the exposures given the intensity of the hazard, and finally estimating the insured loss based on the damage/injury level, policy conditions and/or payout patterns.

Probabilistically, this has to be done for every type of event that can occur and each event is associated with a probability of occurrence. This requires the creation of a stochastic event set. Since many of the events will not create large footprints, hundreds of thousands of events have to be created to properly model the exposure. Modeling shortcuts can be taken by reducing the number and/or types of events, such as only modeling events that can cause losses greater than a certain threshold, for example, $1bn. While this approach may simplify the modeling process, it also reduces the accuracy and usefulness of the model. As an example, failure to consider the full range of events and event frequencies renders a model unsuitable for rate making. The inability to price a risk makes related risk management decisions difficult and subjective. Such a partial or simplified model would be useful, at best, to manage the accumulation problem, and then only for the most extreme events.
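The probabilistic machinery described above can be reduced to its essentials: a stochastic event set is a list of (annual rate, loss) pairs, from which loss exceedance rates and the pure premium follow directly. The toy event set below is invented for illustration; a real terrorism event set would contain hundreds of thousands of events.

```python
# Toy stochastic event set: (annual occurrence rate, insured loss) pairs.
# Both the events and the numbers are invented for illustration.
event_set = [(0.10, 5e8), (0.02, 2e9), (0.005, 1e10), (0.001, 5e10)]

def exceedance_rate(threshold):
    """Annual rate of events producing a loss >= threshold
    (the building block of an exceedance-probability curve)."""
    return sum(rate for rate, loss in event_set if loss >= threshold)

def annual_expected_loss():
    """Pure premium: rate-weighted annual loss across the full event set."""
    return sum(rate * loss for rate, loss in event_set)
```

Truncating the event set at, say, a $1bn threshold leaves the tail of the curve intact but destroys the expected-loss calculation, which is why a model built only from extreme events cannot support rate making.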

Another important component of a terrorism model is the ability to factor in the local characteristics of the exposure. The terrorism model requires underlying databases such as the built environment, employment statistics, census data, local terrain features and weather patterns. This is because prime terrorist targets are areas with a high concentration of people, high property values, facilities that have the potential to cause mass destruction, and/or government agencies. This demographic information is an important element in developing both the frequency assumptions and the damage and injury rates that drive loss estimates.

Conclusion
It is inevitable that there will be uncertainty in terrorism modeling. But then there is also uncertainty in modeling any catastrophe peril such as earthquake, hurricane and windstorm. The key is making sure that the model properly reflects the uncertainty in the final results. However, simply including uncertainty in the modeling process is not a substitute for proper modeling of the underlying peril. Therefore, you need to do your due diligence before using any model, to ensure that you are comfortable with the modeling company's knowledge of the peril and underlying methodology.

So, coming back to the original question - can the terrorism exposure be quantified and priced? I believe the answer is yes and, more importantly, so does at least one segment of the insurance industry. EQECAT is actively working with the insurance industry to develop a solution for a major line of business where the exposure cannot be excluded. Are the current models perfect? No, but then neither were the earthquake and hurricane models when they were first developed. Therefore, given the reality of the terrorism exposure, can you afford to wait for a perfect model to begin managing the risk?

By Richard L Clinton
Richard Clinton is the president of EQECAT Inc, Oakland, CA. He is a certified property and casualty underwriter. EQECAT is a leading catastrophe modeling and consulting company. RClinton@ABSConsulting.com