The use of risk models and maps to predict the likelihood and intensity of flooding has an inherent, although not immediately apparent, flaw: a lack of dependability. Flood maps and models are relatively static, updated every few years at best, while weather and climate are extremely dynamic. So how can static models dependably predict dynamic events? A recent study suggests they really can't.
One of the things that make natural catastrophes inherently unpredictable is the ever-changing nature of the natural world. This is a trivial observation, but it is fundamental to understanding how peril models work and where their limits lie. This post discusses flood specifically, but the same holds for all catastrophic natural phenomena.
Cat models (like those built by AIR Worldwide, RMS, EQECAT, and others) do their best to account for the unsteady and unpredictable natural processes that cause flooding. They create huge event sets of possible scenarios, where a scenario is essentially a simulated year of possible events, and analyze the results statistically. These event sets are generated from a set of assumptions and underlying trends, including changing global climatic conditions. The thousands of scenarios are then used to generate potential catastrophic flood events, which can be expressed as the probabilities underwriters need to understand.
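To make the mechanics concrete, here is a minimal sketch of how an event-set approach reduces to the probabilities underwriters work with. Everything in it is illustrative: the Poisson event frequency, the lognormal severities, and all parameter values are assumptions for the sketch, not any vendor's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

N_YEARS = 10_000  # number of simulated years in the event set

# Each simulated year draws a random number of flood events (Poisson
# frequency) with lognormal loss severities. Both distributions and
# their parameters are illustrative, not calibrated to anything real.
annual_losses = np.empty(N_YEARS)
for i in range(N_YEARS):
    n_events = rng.poisson(lam=0.3)  # on average 0.3 floods per year
    losses = rng.lognormal(mean=15, sigma=1.5, size=n_events)
    annual_losses[i] = losses.sum()

# Exceedance view: the 1-in-T-year aggregate loss is the level that a
# simulated year's total loss exceeds with probability 1/T.
for return_period in (10, 50, 100, 250):
    ep = 1.0 / return_period
    loss = np.quantile(annual_losses, 1.0 - ep)
    print(f"1-in-{return_period}-year aggregate loss: {loss:,.0f}")
```

The statistical summary at the end is the point: once thousands of simulated years exist, a loss level at any return period is just a quantile of the simulated annual losses.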
Less sophisticated are flood models and flood maps, which offer a snapshot of flood risk based on a set of assumptions and as much empirical data as possible (elevation data, river gauges, rain gauges, historical catalogs, etc.). While flood maps and models are updated regularly, it is difficult to capture every driver of flood at a given time, let alone in the future. Updates usually respond to changes in the built environment, technological improvements, better empirical data, or different assumptions about how risk should be measured. The only way flood maps and models can account for a changing climate is by projecting the risk into an unknowable future based on unknowable conditions.
A more subtle problem, inherent to all these approaches, is that the historical record of flood, and of all natural catastrophes, is short relative to the return periods of the events that can occur. There is no way around it: we cannot know what is likely to happen every 1,000 years or more, because we lack the historical record to support such an estimate. One interesting exception to this shortcoming is European earthquake, for which records from the ancient world are used. For flood, though, there is no such remedy, and all models and maps are constrained by the short history.
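The short-record point can be made quantitative. Below is a hedged sketch: assume annual flood maxima truly follow a Gumbel distribution (a common but here purely illustrative choice), fit it to many simulated 50-year records by the method of moments, and compare the implied 1,000-year levels against the known truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" annual-maximum river stage follows a Gumbel distribution
# (location and scale are illustrative). The true 1,000-year level is
# known analytically here; a real analyst only sees a short record.
LOC, SCALE = 5.0, 1.2
true_1000yr = LOC - SCALE * np.log(-np.log(1 - 1 / 1000))

estimates = []
for _ in range(1_000):
    record = rng.gumbel(loc=LOC, scale=SCALE, size=50)  # a 50-year record
    # Method-of-moments fit: scale from the standard deviation,
    # location from the mean (0.5772 is the Euler-Mascheroni constant).
    scale_hat = np.std(record) * np.sqrt(6) / np.pi
    loc_hat = np.mean(record) - 0.5772 * scale_hat
    estimates.append(loc_hat - scale_hat * np.log(-np.log(1 - 1 / 1000)))

print(f"true 1,000-year level: {true_1000yr:.2f}")
print(f"estimates from 50-year records (5th-95th pct): "
      f"{np.percentile(estimates, 5):.2f} to {np.percentile(estimates, 95):.2f}")
```

Every simulated record comes from the same "true" river, yet the estimated 1,000-year levels scatter widely, which is exactly why tail return levels inferred from short histories deserve suspicion.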
One way to handle these shortcomings of flood (and other peril) models is to combine raw empirical information with the model output. Adding elevation data, loss history, or independent river gauge readings to a flood risk analysis can reduce the uncertainty caused by our narrow temporal view of the hazard. Happily, some inputs change very little, such as elevation and other geospatial, geological, and geographical aspects of the hazard. Combining as much information as is available is a sound approach to understanding risks that change faster than our models do.
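As one illustration of blending model output with slowly changing empirical data, the sketch below dampens a modeled flood probability using elevation above the nearest river. The function name, the 30 m cutoff, and the linear weighting are all hypothetical choices for the sketch, not an established method.

```python
# Hypothetical blend: start from a model's annual flood probability for
# a location and shrink it using elevation above the nearest river, a
# stable empirical input. All thresholds here are illustrative only.
def blended_flood_prob(model_prob: float, elev_above_river_m: float) -> float:
    """Adjust a modeled annual flood probability with elevation data."""
    if elev_above_river_m >= 30.0:
        # High ground: the empirical input overrides the model almost entirely.
        return min(model_prob, 0.001)
    # Linearly shrink the modeled probability as elevation rises.
    weight = 1.0 - elev_above_river_m / 30.0
    return model_prob * weight

for prob, elev in [(0.05, 2.0), (0.05, 15.0), (0.05, 40.0)]:
    print(f"model p={prob:.2f}, elevation={elev:>4.1f} m "
          f"-> blended p={blended_flood_prob(prob, elev):.4f}")
```

The design idea is that a stable empirical input (elevation) acts as a sanity check on a more volatile modeled one, which is the spirit of the combination suggested above.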