Amidst continued recovery from the Midwestern floods and heightened concerns regarding the US levee and dam infrastructure, the insurance industry braces itself for the onset of the hurricane season

Still uncertain of losses resulting from America’s worst inland flooding in 15 years, insurers are forced to look forward as they finalise their positions for the 2008 hurricane season. Preparations include re-calculating exposure, reviewing contractual obligations and, when appropriate, securing additional cover. This event also provides an opportunity to appraise the adequacy and sophistication of available flood risk management tools and processes.

Traditionally, industry focus has been on the catastrophic effects of wind, while the development and adoption of tools to profitably manage flood risk has lagged behind. This lack of progress is attributable to two factors. First, the US federal government’s subsidisation of flood insurance through the National Flood Insurance Program (NFIP) has largely deterred private sector participation. Second, with little private underwriting to oversee, ratings agencies de-prioritised efforts to establish related compliance requirements. In the insurance industry, a lack of underwriting activity combined with minimal rating agency oversight spells minimal related innovation. Recent advances in flood risk management technology and data collection, however, now offer risk managers, carriers and reinsurers a compelling new opportunity to assess the adequacy of their existing flood exposure management tools and techniques. And, in the case of many primary insurers, these advances will prompt renewed consideration of actively and accurately underwriting flood to increase premium revenue.


The insurance industry’s aversion to flood risk is well-documented. At its heart are many truths, many assumptions and many myths. One common misconception—and perhaps the most prevalent—is the belief that not proactively underwriting flood is synonymous with avoiding exposure to it. On the contrary, almost every property insurer assumes unexpected flood risk.

Risk arises in a number of ways, including market demand, whereby a large commercial customer requires flood coverage as part of a broad all-perils policy. Market demand flood risk, especially prevalent in today’s soft market, compels insurers to offer flood coverage at no extra charge in an effort to remain competitive. Policy limitations, such as the failure to explicitly exclude flood damage on high-risk properties (often mistakenly identified as no- or low-risk due to the lack of appropriate risk management techniques), also contribute to exposure. Finally, concurrent causation—the inability to distinguish between wind and water damage—also continues to play a role in industry exposure.

Another mistaken belief labels flood as an uninsurable risk. Supporting arguments cite adverse selection and the reliance on the NFIP to adequately address the business opportunity. In response to adverse selection concerns, consider the industry’s experience with similarly problematic perils such as earthquake, terrorism and windstorm. By diversifying risk over an extended geographic area and employing enhanced pricing techniques, these lines have been extremely successful.

The main distinction between flood and the other listed perils is the availability of sophisticated solutions to manage risk; solutions that, until recently, did not exist for flood. Regarding the NFIP issue, it is important to remember that although the government offers a competitive pricing alternative, the terms and coverage are limited and often unacceptable to knowledgeable residential and commercial property owners. Moreover, rapidly rising property values often exceed NFIP policy limits, providing significant opportunity for excess coverage.


The first component and cornerstone of a flood risk solution, or any natural hazard management solution, is spatial analytics. Once limited to simple geocoding (identifying the geographic location), significant advances in technology and data collection have broadened spatial analytics capability to include a contextual understanding of a property’s actual location relative to a specified hazard.

Perhaps more than any other peril, accurately managing flood risk requires determining the exact location of a property. Traditional geocoding techniques carry a level of uncertainty that is unacceptable for managing flood risk. These techniques commonly misplace properties hundreds, if not thousands, of feet from their actual location. Such variances can result in substantial differences between the property’s assumed and actual elevation, making reliable flood risk assessment impossible.

The level of geocoding detail, or granularity, has improved exponentially with the advent of parcel-level geocoding—an approach that assigns a latitude and longitude based on the centre of the parcel on which a structure is located. This improved granularity drastically reduces variances between a structure’s reported and actual location. Although several geocoding providers now offer limited parcel-based geocoding, the scope of geographic coverage varies significantly. When selecting a parcel-based solution, companies need to verify the extent of the provider’s geographic coverage and the timeline for expanding coverage.
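To make the stakes concrete, the displacement between a street-interpolated geocode and a parcel-centroid geocode can be computed directly from the two coordinate pairs using the standard haversine formula. The sketch below is illustrative only; the coordinates are hypothetical and chosen simply to show an offset of several hundred feet.

```python
import math

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in feet."""
    r_feet = 20_902_231  # mean Earth radius (~6,371 km) expressed in feet
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_feet * math.asin(math.sqrt(a))

# Hypothetical example: the same address geocoded by street interpolation
# versus by parcel centroid.
street_fix = (41.8781, -87.6298)
parcel_fix = (41.8795, -87.6310)
offset = haversine_feet(*street_fix, *parcel_fix)
print(f"Geocode displacement: {offset:.0f} ft")  # roughly 600 ft apart
```

On flat terrain an offset of this size may be immaterial, but near a river bank or coastal bluff it can span several feet of elevation change, which is precisely why parcel-level accuracy matters for flood.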

Certain properties or policies require an even more precise geocoding technique. For multi-structure commercial properties or high-value homes on multi-acre lots, manual or “roof-top” geocoding processes can be used. As the name implies, this technique involves a GIS specialist identifying a specific structure using aerial imagery and then pinpointing its exact latitude/longitude based on the structure’s rooftop.

Equally important as the quality (accuracy and granularity) of the spatial analysis solution is the consistency of its use throughout the entire risk management process. The use of multiple techniques or technologies during the policy lifecycle (for example, underwriters and portfolio managers using different geocoders) significantly distorts risk perception. Ideally, the selection of a geospatial solution involves members from all the affected groups (marketing, underwriting, claims, compliance and portfolio management).


To determine site-level flood risk, underwriting guidelines have historically relied on binary decisions, such as whether a property is “in” or “out” of a US federal flood zone or within a set distance of the coastline. Federal Emergency Management Agency flood maps are the result of one of the most thorough and expansive risk engineering studies ever conducted; however, their intended use and reliability as a risk management tool are limited. Relatively new to the industry, flood risk scores are beginning to replace these traditional approaches, significantly improving insurers’ risk identification capabilities by delivering a more comprehensive, analytical and calibrated risk assessment that far exceeds a simple flood zone determination. Beyond underwriting, risk scores also benefit portfolio management, offering a more sophisticated and reliable set of risk attributes against which aggregate and zonal limits can be managed.

Determining a risk score is a two-step process. First, all of the relevant hazard data or “risk factors” must be collected. These risk factors include the elevation of a particular property relative to the flood hazard and flood zones, topography, and the presence of infrastructure impacting the course of floodwaters (whether designed to manage water flow or not), such as levees, dams and railroad trestles. At the time of writing, the recent flooding in the Midwest had seen the failure of over two dozen water management structures (primarily levees and dams), resulting in billions of dollars in crop and property damage. The most advanced inland flood scoring techniques use granular elevation data (10m x 10m), incorporate water drainage basins, and are capable of both identifying such structures and qualifying a particular structure’s condition and estimated threat of failure based on the published reports of governmental agencies such as the US Army Corps of Engineers.

Once identified, the factors are incorporated into a proprietary scoring algorithm, typically based on a combination of geo-scientific principles and actual historic events. For example, when calculating the proximity of a property relative to a hazard, measurements reflect water flow as opposed to straight-line distance. The result is a numerical score (e.g., 1–5) or risk category (e.g., low – moderate – high).
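As an illustration of how such factors might combine, the sketch below maps elevation relative to the base flood elevation, flow-path distance and levee condition into a 1–5 score. Every weight and threshold here is hypothetical, standing in for the proprietary algorithms the article describes, not reproducing any vendor’s method.

```python
def flood_risk_score(elev_above_bfe_ft, flow_distance_ft,
                     behind_levee, levee_deficient):
    """Illustrative 1-5 flood risk score (1 = lowest risk).

    All weights and thresholds are hypothetical examples.
    """
    score = 1.0
    # Elevation relative to the base flood elevation dominates the score.
    if elev_above_bfe_ft < 0:
        score += 2.5
    elif elev_above_bfe_ft < 3:
        score += 1.5
    elif elev_above_bfe_ft < 10:
        score += 0.5
    # Proximity measured along the water-flow path, not straight-line distance.
    if flow_distance_ft < 500:
        score += 1.0
    elif flow_distance_ft < 2500:
        score += 0.5
    # Protective infrastructure reduces risk only when it is in good condition.
    if behind_levee and levee_deficient:
        score += 0.5
    elif behind_levee:
        score -= 0.5
    return max(1, min(5, round(score)))

# A low-lying parcel near the channel behind a deficient levee scores high risk.
print(flood_risk_score(-2.0, 300, behind_levee=True, levee_deficient=True))  # → 5
```

The key design point carried over from the article is that the levee term is conditional: a structure in good repair lowers the score, while a deficient one raises it.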

Modelling the exposure

Probabilistic financial modelling differs from risk scoring in its ability not only to identify the risk but also to quantify the financial exposure. Catastrophe models provide a mathematical framework for estimating the probability of low-frequency/high-severity events that are inherently complex. The models use sophisticated algorithms to estimate financial losses from a number of possible events based on key building characteristics and insurance conditions. Although relatively late to the modelling game compared with other catastrophic perils, flood catastrophe models are becoming more readily available and are essential to any flood risk management solution.

Flood catastrophe models simulate a large number of flooding events to determine the impact on a single location, schedule of locations or entire portfolio. A simulated flood event results in a depth of flooding, which in turn results in some amount of damage to a property. The extent of the actual financial damage is then calculated based on the replacement cost and type of structure involved (e.g., residential, commercial or industrial). Finally, the insurance conditions, such as deductibles and limits, are applied to determine the insurer’s exposure.
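The chain described above (depth of flooding, then a damage ratio, then replacement cost, then policy terms) can be sketched in a few lines. The depth-damage curve and dollar figures below are illustrative placeholders, not values from any actual model.

```python
def ground_up_loss(depth_ft, replacement_cost, damage_curve):
    """Interpolate a damage ratio from a depth-damage curve and
    apply it to the structure's replacement cost."""
    # damage_curve: sorted (depth_ft, damage_ratio) points; illustrative only.
    if depth_ft <= damage_curve[0][0]:
        return 0.0
    for (d0, r0), (d1, r1) in zip(damage_curve, damage_curve[1:]):
        if depth_ft <= d1:
            ratio = r0 + (r1 - r0) * (depth_ft - d0) / (d1 - d0)
            return ratio * replacement_cost
    return damage_curve[-1][1] * replacement_cost

def insurer_exposure(loss, deductible, limit):
    """Apply policy terms: the insurer pays losses above the
    deductible, capped at the limit."""
    return min(max(loss - deductible, 0.0), limit)

# Hypothetical residential depth-damage curve (values illustrative).
curve = [(0.0, 0.0), (1.0, 0.15), (3.0, 0.35), (6.0, 0.55), (10.0, 0.75)]
loss = ground_up_loss(4.0, 300_000, curve)   # 4 ft of water in a $300k home
paid = insurer_exposure(loss, 5_000, 250_000)
print(f"ground-up loss ${loss:,.0f}, insurer pays ${paid:,.0f}")
```

In a full model this calculation is repeated for every simulated event and every location, with separate curves by occupancy type (residential, commercial, industrial) as the article notes.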

To achieve an accurate loss estimate, a flood model requires a granular and robust event set. Due to the nature of the flood peril, it is critical that the risk be determined at a localised level. Considering how the damage caused by a flooding event can vary drastically over a city block, the resolution of event simulation should be measured in terms of feet, not miles.

Additionally, the significant geographic scope of the peril requires an extraordinarily large number of simulated events (in the hundreds of thousands) to achieve adequate coverage. An effective flood model must be able to quantify the spatial correlation of loss across a schedule or portfolio to determine aggregate loss potential. For example, a model must be able to quantify the impact of a single flooding event on multiple properties located within a single drainage basin.
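A toy simulation can show why basin-level correlation matters: if one event floods every property in a basin, per-event losses must be summed before ranking, so the tail of the aggregate loss distribution is far heavier than if each location were simulated independently. The event count, severity distribution and exposures below are all hypothetical.

```python
import random

random.seed(7)

# Hypothetical: three properties sharing one drainage basin; a single
# simulated event floods all of them at once, so their losses are summed
# rather than treated as independent.
replacement_costs = [250_000, 400_000, 180_000]

def simulate_event_losses(n_events):
    """Toy event set: each event draws one basin-wide flood severity (0-1)
    and applies it to every exposed property in the basin."""
    event_losses = []
    for _ in range(n_events):
        severity = random.random() ** 3   # most events minor, few severe
        event_losses.append(sum(severity * rc for rc in replacement_costs))
    return event_losses

losses = simulate_event_losses(100_000)
losses.sort(reverse=True)
# Aggregate loss at the 1-in-100 (99th percentile) return period.
pml_100 = losses[len(losses) // 100]
print(f"1-in-100 aggregate loss: ${pml_100:,.0f}")
```

Because every location shares the same event severity, the 1-in-100 aggregate loss approaches the combined replacement cost; independent draws per location would produce a much flatter tail.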

As the insurance industry readies itself for the 2008 hurricane season, the ability to accurately assess and measure flood risk is increasingly paramount. New flood risk management tools and technologies continue to be deployed in an effort to help the industry better define flood exposure. While these tools challenge the conventional wisdom surrounding flood risk, they also provide insurers with a way to profitably manage their flood exposure.