A concept of enduring utility rarely emerges from the market research business, but the Gartner hype cycle is an exception that proves the rule. It is a graph that describes the life cycle of a technological innovation in five phases. First, there’s the “trigger” that kicks off the feverish excitement and leads to a rapid escalation in public interest, which eventually leads to a “peak of inflated expectations” (phase two), after which there’s a steep decline as further experimentation reveals that the innovation fails to deliver on the original – extravagant – claims that were made for it. The curve then bottoms out in a “trough of disillusionment” (phase three), after which there’s a slow but steady rise in interest (the “slope of enlightenment” – phase four) as companies discover applications that really do work. The final phase is the “plateau of productivity” – the phase where useful applications of the idea finally become mainstream. The time between phases one and five varies between technologies and can be several decades long.
As the “big data” bandwagon rolls on, it’s appropriate to ask where it currently sits on the hype cycle. The answer depends on which domain of application we’re talking about. If it’s the application of large-scale data analytics for commercial purposes, then many of the big corporations, especially the internet giants, are already in phase four. The same holds if the domain consists of the data-intensive sciences such as genomics, astrophysics and particle physics: the torrents of data being generated in these fields lie far beyond the processing capabilities of mere humans.