Catastrophe Models

Last Updated 1/16/2019

Issue: Catastrophe models (cat models) have evolved rapidly since their introduction in the 1980s. Technological advances and higher-resolution exposure data have accelerated this evolution in recent years. Catastrophe losses are extremely complex to model, and these advancements enable more accurate risk assessment as extreme weather events grow more frequent and severe. They also increase the complexity of cat models and the need for the models to be more transparent and flexible.

Cat Model Basics: Cat models are designed to quantify the financial impact of a range of potential future disasters. They are intended to inform users about where future events are likely to occur and how intense they are likely to be. Based on these estimated event probabilities, they then estimate a range of direct, indirect and residual losses. Direct losses refer to losses from things such as damage to physical structures and contents, deaths and injuries. Indirect losses refer to things such as loss of use, additional living expenses and business interruption. Sources of residual loss include demand surge, labor delays and inflation in materials costs.

There are four basic modules to all cat models, regardless of what peril is being modeled. These modules are: event, intensity, vulnerability and financial.

  • Event Module: The event module generates thousands of possible catastrophic event scenarios based on a database of historical parameter data.
  • Intensity Module: The intensity module determines the level of physical hazard specific to geographical locations using the location-specific risk characteristics for each simulated event.
  • Vulnerability Module: The vulnerability module quantifies the expected damage from an event taking into account the exposure characteristics and event intensity.
  • Financial Module: The financial module measures monetary loss from the damage estimates. Insured loss estimates are generated for different policy conditions, such as deductibles, limits and attachment points. Varying financial perspectives, such as primary insurance or reinsurance treaties, are also provided.
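The flow through these four modules can be illustrated with a toy sketch. Everything below is a hypothetical stand-in — the simulated event catalog, the distance-decay intensity function, the damage curve and the policy terms are illustrative placeholders, not the proprietary components of any vendor model:

```python
import random

random.seed(0)

# Event module: generate a catalog of simulated events, each described by
# a single (hypothetical) intensity parameter.
def event_module(n_events):
    return [{"intensity_param": random.uniform(0.5, 2.0)} for _ in range(n_events)]

# Intensity module: hazard at a specific location (toy distance-decay form).
def intensity_module(event, site_distance_km):
    return event["intensity_param"] / (1.0 + site_distance_km / 50.0)

# Vulnerability module: expected damage ratio given the hazard and an
# exposure characteristic (here a single illustrative fragility factor).
def vulnerability_module(hazard, fragility=0.4):
    return min(1.0, fragility * hazard)

# Financial module: translate the damage estimate into an insured loss by
# applying policy conditions (deductible and limit).
def financial_module(damage_ratio, insured_value, deductible, limit):
    ground_up = damage_ratio * insured_value
    return max(0.0, min(ground_up - deductible, limit))

# Run the pipeline for one insured location.
events = event_module(1000)
losses = []
for ev in events:
    hazard = intensity_module(ev, site_distance_km=20.0)
    damage = vulnerability_module(hazard)
    losses.append(financial_module(damage, insured_value=500_000,
                                   deductible=10_000, limit=400_000))
```

A real model would replace each function with peril-specific science (e.g., wind-field footprints, engineering-based damage functions) and would apply the financial terms policy by policy, including reinsurance layers.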

Key metrics provided by a probabilistic catastrophe model include the Exceedance Probability (EP) curve, the Probable Maximum Loss (PML) and the average annual loss (AAL). The EP curve gives the annual probability that losses exceed a given threshold. The PML is the loss amount corresponding to a specified exceedance probability, or return period. For example, the 250-year PML represents the 99.6th percentile of the annual loss distribution (an annual exceedance probability of 0.4%). The AAL is the mean of the entire loss distribution and is represented as the area under the EP curve. It is frequently used in pricing and ratemaking to evaluate the catastrophe load.
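These metrics can be computed directly from a model's simulated annual losses. The sketch below uses NumPy and a hypothetical lognormal loss distribution purely for illustration; a real model would supply the annual losses from its event catalog:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual losses from 100,000 simulated years (illustrative
# heavy-tailed distribution standing in for real model output).
annual_losses = rng.lognormal(mean=15.0, sigma=1.5, size=100_000)

# AAL: the mean of the annual loss distribution (the area under the EP curve).
aal = annual_losses.mean()

# PML at a given return period T: the loss exceeded with probability 1/T.
def pml(losses, return_period):
    exceedance_prob = 1.0 / return_period           # e.g., 250 years -> 0.4%
    return np.quantile(losses, 1.0 - exceedance_prob)  # 99.6th percentile

pml_250 = pml(annual_losses, 250)

# Empirical EP curve: sort losses from largest to smallest; the i-th loss is
# exceeded (or equaled) in i out of N simulated years.
sorted_losses = np.sort(annual_losses)[::-1]
exceedance_probs = np.arange(1, len(sorted_losses) + 1) / len(sorted_losses)
```

Plotting `sorted_losses` against `exceedance_probs` yields the EP curve; the 250-year PML is simply the point on that curve where the exceedance probability equals 0.004.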

Background: Hurricane Andrew in 1992 brought unprecedented losses after two decades of little hurricane activity. This changed the way the insurance industry viewed hurricane risk and led to the industry's adoption of catastrophe modeling. Since then, cat models have continued to evolve to reflect improvements in the understanding of the science of a peril and its loss drivers. Model advancements are frequently driven by events revealing deficiencies in a model.

The 2004 and 2005 Atlantic hurricane seasons had a substantial impact on modeling assumptions. Two consecutive years of record activity and losses brought a new focus on the impact of aggregate losses from multiple hurricanes. Unlike prior hurricanes, Katrina in 2005 resulted in more losses from secondary flooding than from the original wind-generated catastrophe. As such, modelers also began to incorporate the impact of secondary losses from super catastrophes into their models.

Overall, hurricanes since Katrina have highlighted the impact additional factors such as demand surge, evacuation, sociological risks and political influence can have on losses. Models are increasingly using combinations of economic and sociological modeling to incorporate loss amplification from these additional factors.
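One simple way such loss amplification is represented is as a multiplier on repair costs that grows with the size of the industry-wide event. The surge curve below is a hypothetical illustration, not any modeler's actual function:

```python
# Toy demand-surge adjustment: unit repair costs inflate once the
# industry-wide event loss passes a threshold, up to a cap.
# All parameter values here are illustrative assumptions.
def demand_surge_factor(industry_loss_bn, threshold_bn=10.0, slope=0.02, cap=1.5):
    if industry_loss_bn <= threshold_bn:
        return 1.0
    return min(cap, 1.0 + slope * (industry_loss_bn - threshold_bn))

# A $250,000 ground-up loss amplified during a $40 billion industry event.
amplified_loss = 250_000 * demand_surge_factor(industry_loss_bn=40.0)
```

Sociological and economic factors such as evacuation behavior or political influence are harder to reduce to a single curve, which is part of why modelers combine multiple economic and sociological sub-models.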

Modeling platforms have also been advancing. Modeling vendors began to use physically based modeling technology in 2000 to more accurately simulate the actual physics of events. Today, more powerful computers and mobile communications have enabled physically based models to reach the high resolution needed to provide location-specific forecasts. However, higher resolutions bring higher uncertainty and sensitivity in modeled results. This has led to a growing movement toward open models. Open models allow components to be more visible and accessible to model users. They also allow for more efficient model validation and verification.

Status: Modelers' current focus is on reaching resolutions high enough to price insurance for an individual property's specific characteristics. However, the resolution of some perils, such as flood, is still evolving. Continued advances in computing capabilities and data collection instruments have the potential to fill model gaps and provide real-time modeling. This could bring modelers' abilities to understand and quantify catastrophe risk to new levels. It is also likely to further increase the complexity of models and propel demand for more transparent and flexible models.

The NAIC Catastrophe Insurance (C) Working Group of the Property and Casualty Insurance (C) Committee serves as a forum for discussing issues and solutions related to catastrophe models. The Working Group also maintains the NAIC Catastrophe Computer Model Handbook. The Handbook explores catastrophe computer models and issues that have arisen or can be expected to arise from their use. It provides guidance on areas and concepts to consider and explore in order to become well informed about catastrophe computer models.