If everyone uses the same risk model, is it still useful?

Posted on Apr 9, 2015, by Ivan P. Maddox


One of the recurrent themes of this blog is to explore the usefulness and limitations of risk models. This post explores the implications of the widespread — in some cases, universal — use of these models. Is there a limit to a model’s usefulness if everyone is using it? How can a model’s limitations be overcome?

When a peril is well modeled, and that model is comprehensively applied throughout a market by both carriers and reinsurers, it becomes very difficult to differentiate coverages because everyone has priced the risk similarly. The implications of this blanket usage begin to manifest when nothing happens for a while; i.e., when no significant catastrophe fulfills the model's predictions. The capacity to cover the expected loss is collected by everyone, and with no claims to release the capital, the market gets soft. Competition tightens, and carriers must look for new markets, or entirely new activities, to maintain a constant level of premium.

This recent article from Intelligent Insurer explores this phenomenon in the current reinsurance market. The big players are moving into specialty reinsurance and even primary insurance amid a very soft market. Naïve capital accumulates, and the only outlet is a catastrophe that is unexpected (i.e., unmodeled), releasing the excess capacity through claims that exceed predictions.

One way to reduce the impact of uniform application of the same models is to introduce variety. If carriers apply the same models differently, this problem begins to disappear because diverse views of risk become possible, with competition blossoming from differing experience, expertise, choice of analytics, risk appetite, and supplementary datasets. The market as a whole becomes much more resilient to capacity overflow because each carrier has collected premium on different risks based on its focused efforts.
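The resilience argument can be made concrete with a toy sketch. All figures below are hypothetical, invented purely for illustration: three carriers hold premium-relative exposures across three made-up risk segments, and an "unmodeled" shock hits one segment. When every carrier's model produces the same exposure profile, the shock breaches every capital buffer at once; with diverse views of risk, only the carrier concentrated in that segment is hit.

```python
# Illustrative sketch (all figures hypothetical): how an unmodeled shock
# propagates through a market where every carrier uses the same risk model
# versus one where views of risk differ.

# Each carrier's exposure is the share of its bound premium in three
# (invented) risk segments; capital buffers are identical for comparability.
uniform_market = [
    {"coastal": 0.6, "inland": 0.3, "commercial": 0.1},  # same model,
    {"coastal": 0.6, "inland": 0.3, "commercial": 0.1},  # same exposures,
    {"coastal": 0.6, "inland": 0.3, "commercial": 0.1},  # everywhere
]
diverse_market = [
    {"coastal": 0.6, "inland": 0.3, "commercial": 0.1},  # coastal specialist
    {"coastal": 0.2, "inland": 0.6, "commercial": 0.2},  # inland focus
    {"coastal": 0.1, "inland": 0.2, "commercial": 0.7},  # commercial focus
]

def insolvencies(market, shock_segment, severity, capital=0.5):
    """Count carriers whose loss from a shock to one segment exceeds
    their capital buffer (all quantities are premium-relative)."""
    return sum(
        1 for carrier in market
        if carrier[shock_segment] * severity > capital
    )

# An unmodeled coastal event wiping out 100% of premium bound to that segment:
print(insolvencies(uniform_market, "coastal", 1.0))  # every carrier fails together
print(insolvencies(diverse_market, "coastal", 1.0))  # only the specialist fails
```

The numbers are arbitrary; the point is structural. In the uniform market the shock is perfectly correlated across carriers, while in the diverse market the same event strands only the capital that was knowingly bound to that risk.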

Flood in the U.S. offers an example of this effect. Almost the entire market bases its flood policies, partially or completely, on FEMA data. If carriers began to apply alternative flood models, extra data (meteorological, topographical, hydrographical), and unique analytics, the market would become more dynamic and resilient almost overnight. Capacity would be bound to different risks and different events, and reinsurers could compete for the accumulated risks based on their own interpretation of what the carriers have done.

A dynamic underwriting environment, with a unique view of risk from each carrier, is a much saner way to keep market forces firm than hoping for an unimagined catastrophe to wreak enough havoc to shed excess capital. Standard application of standard models leads to stagnation. Introducing variety into how models are used, with results that can be understood and applied to solid actuarial work, is a recipe for success for carriers who can use the available tools and information intelligently.

Topics: Risk Management, Insurance Underwriting, Other Risk Models
