A combination of the emergence of big data, artificial intelligence and influential voices, such as Byron Sharp, calling for more scientific rigour has promoted the use of data to support marketing decision-making. But if the industry is to progress, we need more than just data. We need understanding.
Before I stumbled into market research, I was an aspiring physicist. Many years after giving up hope that I could contribute anything to physics, I have been heartened to see an increasing emphasis on scientific values and empirical testing in the marketing industry. The industry is moving away from blindly following tradition, myth and instinct, to using data to support decisions.
However, it’s becoming increasingly clear to me that while data is being used far more, it is not being used scientifically. Marketing decision-making is as blind as ever; marketers now blindly follow data instead of tradition, myth and instinct. Unfortunately, data doesn’t know where you’re trying to get to. It’s dumb, and if you follow it blindly, you will get lost.
Machine learning, AI and most “marketing science” are no more than pattern matching. There is a pervasive industry misconception that this pattern matching can identify the marketing activities that work, and that to be successful we needn’t care about why or how they work. This is completely wrong and it is not how science progresses. Science is the pursuit of explanation and understanding. Pattern matching can sometimes help in that task, but to mistake pattern matching for an end in itself has dangerous consequences.
Did you know that pirates prevent climate change? Apparently, the number of pirates globally is inversely correlated with annual average global temperatures. If we blindly followed this correlation, we would train up pirates to ward off global warming. CO2 emissions also correlate with global temperature, but that connection comes from an explanation of climate change, not just from blind pattern matching.
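The pirate example is easy to reproduce. The sketch below uses entirely invented pirate counts and temperatures (the real historical figures are not to hand), but it shows how a strong correlation coefficient emerges from two series that merely trend in opposite directions:

```python
import numpy as np

# Toy, invented data: a declining pirate population and rising average
# global temperatures over the same years. Neither series is real.
pirates = np.array([50000, 45000, 20000, 5000, 400, 17])  # hypothetical counts
temps = np.array([14.2, 14.3, 14.5, 14.6, 14.8, 15.1])    # hypothetical deg C

# Pearson correlation: strongly negative, yet obviously not causal.
r = np.corrcoef(pirates, temps)[0, 1]
print(f"correlation: {r:.2f}")
```

The correlation comes out strongly negative simply because both series are monotonic in time; any two such series will correlate, which is exactly why a correlation on its own explains nothing.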
Another, more sensible, example is navigation. A simple explanatory theory (or “common sense”) tells us that we can use patterns of stars to help with navigation, but not patterns of clouds. We know this because we have some understanding of how these variables behave. We know that stars provide fixed reference points but that clouds don’t stick around.
To a machine learning algorithm, a pattern of clouds might seem just as important as a pattern of stars. It can see the patterns, but not understand them. And without training, testing and data sets designed with that understanding in mind, there is no way the system will differentiate reliable from unreliable patterns.
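This failure mode can be demonstrated in a few lines. The sketch below is a toy with invented “star” and “cloud” features: the cloud feature happens to track the target in the training data, so an ordinary least-squares fit loads weight onto it, and the model falls apart when the cloud pattern drifts at test time:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# "Star" feature: genuinely predictive of the target in train AND test.
star = rng.normal(size=n)
target = 2.0 * star + rng.normal(scale=0.1, size=n)

# "Cloud" feature: aligned with the target only in the training data.
cloud_train = target + rng.normal(scale=0.1, size=n)  # spurious alignment
cloud_test = rng.normal(size=n)                       # the pattern has drifted

X_train = np.column_stack([star, cloud_train])
X_test = np.column_stack([star, cloud_test])

# Ordinary least squares cannot tell the two features apart,
# so it happily puts weight on the cloud feature.
w, *_ = np.linalg.lstsq(X_train, target, rcond=None)

train_err = np.mean((X_train @ w - target) ** 2)
test_err = np.mean((X_test @ w - target) ** 2)
print(f"train MSE: {train_err:.4f}, test MSE: {test_err:.4f}")
```

The test error is orders of magnitude worse than the training error: the fit was excellent, the pattern was real in the data it saw, and it still fails, because nothing in the fitting procedure knows that clouds don’t stick around.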
Data patterns are not explanatory. They have no common sense. They may be completely spurious, or they may hold only in very specific circumstances. As Rory Sutherland said in his address at Cannes, “All big data comes from the same place…the past.” You can’t blindly expect patterns from the past to hold in the future. True science is about understanding and explaining those patterns and testing those explanations. Once you have a good explanation that passes your best testing, only then can you begin to navigate with confidence.
Data patterns without explanation are reactive; they do not move us forward. So why do so many people think they will?