Technology’s crazy. In just 30 years we’ve gone from keeping handwritten sales records on actual paper to massive data sets detailing our customers’ transactions and behaviours, all available at the touch of a button. We’re all excited to see more numbers, because more numbers mean more information, right? They mean we have access to everything going on in the market, every consumer trend. In the case of Google, we can even predict the next flu outbreak based on what people type into the search engine! Big data’s amazing! But is it perfect?
As previously discussed in The Key Difference between Market Research and Big Data, big data has its limitations. Relying on big data alone to provide the insights needed to advance profit, revenue, and customer satisfaction is risky business. While its usefulness in identifying patterns is unquestionable, it doesn’t answer the ‘why’ question…
The Perfect Big Data Scenario
Let’s assume for a moment that the brain is a perfectly engineered rational mechanism, one that isn’t swayed by personal expectations or historic events (what a simple world that would be). In this ideal scenario, big data could be analysed objectively, and buying patterns and behavioural trends identified, from a completely neutral and actionable point of view. Organisations would know everything their customers are doing and the way in which they are doing it… but still not why.
Big Data Downfall No. 1
Now let’s head back to the real world, where our brain constantly makes use of cognitive tools such as heuristics and schemas to understand what’s going on. One of the issues big data brings is the lack of people who can accurately read and interpret it. There’s so much information that the brain can be tricked into confirmation bias – that is, seeing the patterns and correlations that are consistent with what you expect or want to see, and neglecting those that contradict your views. An experienced analyst can combat this, but there is still no guarantee that the game-changing, critical piece of insight is even there, let alone that it will be extracted. There’s just too much data!
"The big data analysis risk: Data overload can result in confirmation bias"
Big Data Downfall No. 2
If we have learnt anything from statistical studies, or from the controversial vaccines dispute, it’s that correlation does not imply causation. According to good old stats, there are a number of different reasons why variables correlate. Even assuming that the analyst identifies the critical correlation, big data is still subject to severe misinterpretation.
Let’s focus on the two most salient examples. First, the analysis may suggest that thing A causes thing B, when in fact thing B is causing thing A. Second, it may imply that thing A causes thing B, when in reality both thing A and thing B are caused by another factor, thing C. These are only two of the various possibilities that can affect the accuracy of big data on a causal level.
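The confounder case (thing C) is easy to demonstrate. Below is a minimal simulation sketch, with invented coefficients rather than real market data, in which a hidden factor drives both A and B and makes them correlate strongly even though neither influences the other:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# Hidden factor C (e.g. seasonality) drives both A and B independently.
c = rng.normal(size=n)
a = 2.0 * c + rng.normal(size=n)   # A depends on C, never on B
b = -1.5 * c + rng.normal(size=n)  # B depends on C, never on A

print(f"corr(A, B) = {np.corrcoef(a, b)[0, 1]:.2f}")
# Prints roughly -0.74: a strong correlation with no causal link
# between A and B at all. Mining the A/B data alone would suggest one.
```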
Small Data in Action
In 2002 LEGO fell foul of both big data downfalls. Based on big data alone, they forecast that their product would be killed off by the new instant-gratification generation, and so shifted from producing small bricks to large blocks. It hit their bottom line hard. Then the company decided to go into consumers’ homes to examine the deeper drivers of this generation’s behaviours and attitudes. In doing so they discovered a status motivation in child play: a value placed on hobby mastery. The small bricks returned, coupled with more complex building projects and a movie. Fast-forward 10 years and they are the world’s largest toy maker, surpassing Mattel for the first time… all thanks to small data.
The Big Data - Small Data Solution
Big data led LEGO to believe that A would cause B, when in fact A led to something else entirely. What they discovered just in time was that people are not numbers. We do not think in absolute terms, and it would be a stretch to say that we are always rational. Truth is, we rarely are. We cannot assume causation based on numerical data mining alone; it rarely provides the emotional, motivational or aspirational insights that we require, and when it does, the nod towards them can be so subtle that it is buried in the overload.
Simply put: big data identifies correlation, small data identifies causation. The solution: a big data - small data combination… with a little quant thrown in for good measure.
"Big data identifies correlation - small data identifies causation"
Step 1: Big Data - Identify a correlation or trend. Create a hypothesis (a rough sketch of this step in code follows the steps below).
Step 2: Small Data - Test the hypothesis. Qualitative (small data) research techniques, including interviews, focus groups and ethnography, are perfect for testing big data hypotheses. By examining a hypothesis in a qualitative research setting we can uncover causation: feelings, habits, perceptions, culture, etc.
Step 3: Quantitative Research - Scale the results. Once you have your small data outcomes, calibrate them with dedicated quant research to ensure large-scale relevance.
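For the mechanically minded, here is a rough sketch of what Step 1 could look like in code. The file name and columns are hypothetical, and this is just one possible approach for surfacing candidate correlations to hand over to small data research:

```python
import numpy as np
import pandas as pd

# Hypothetical customer metrics file; the name and columns are invented.
df = pd.read_csv("customer_metrics.csv")

# Correlate every numeric metric against every other.
corr = df.corr(numeric_only=True)

# Keep each pair once (upper triangle) and rank by correlation strength.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
candidates = (
    corr.where(mask)
        .stack()                               # (metric_a, metric_b) -> r
        .sort_values(key=abs, ascending=False)
        .head(10)
)

# Each strong pair is a hypothesis to test with interviews, focus
# groups or ethnography - not a conclusion in its own right.
for (metric_a, metric_b), r in candidates.items():
    print(f"Hypothesis: '{metric_a}' relates to '{metric_b}' (r = {r:+.2f})")
```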
By following this process, only the profitably proven big data hypotheses will make their way into your business strategy.
In Conclusion
Even in the perfect big data scenario, analysis can’t explain why consumers do what they do. The key to avoiding the overload is to give big data meaning by using small data, perfectly balancing real-time numbers with real people’s feedback. In this way, correlations can be understood and either dismissed or pursued for maximum return.
I think it’s also important to note the huge rewards that can come from jumping straight into the land of small data with no preconceived ideas. You will inevitably identify those deep-rooted emotive insights missed or clouded by the data masses. And as LEGO found, these are simply invaluable.
“If you take the top 100 biggest innovations of our time, perhaps around 60% to 65% are really based on small data.” – Martin Lindstrom