Computer technology has revolutionized the collection and use of massive amounts of data, and yet, somewhat ironically, it’s often harder to find what one actually needs. Aphorisms abound: “big data,” “the devil’s in the details,” “garbage in, garbage out.” These are shorthand references to the problem, however, and they contribute little to the efforts of those tasked with understanding the increasingly complex world that ever more sophisticated computers and programs create.
Brokers, insurers and reinsurers aren’t alone in trying to make sense of all the technical data they can obtain, but it’s become clear that distinguishing what’s important, perhaps vital, from what’s not (“separating the wheat from the chaff,” to use another aphorism) has become increasingly urgent.
Pricing risks requires estimating potential losses, and the P&C re/insurance industry has come to rely on catastrophe models to do so. The problem is that the devil’s in the data: there’s either too much of it to be analyzed adequately, or not enough of it to assess the loss potential of any given risk with certainty.