Looking at the industry landscape, there is a broad range of topics under discussion, but agility, efficiency, automation and machine learning account for a large proportion of the debate. Underlying it all is a pursuit of speed. We talk about efficiency and we talk about automation, but it is actually the pursuit of speed that drives most of this; machine learning and automation are simply far sexier technological discussion points. Agile research also carries a resounding echo of speed; in a recent LinkedIn poll by my colleague, 20% of respondents voted that Agile Research meant ‘quick and fast’.
There is, of course, a resoundingly good reason for needing to speed up research processes: consumers are fickle and habits change fast, so businesses need to speed up their decision making. ‘Fast fashion’, for example, has been a significant winner in recent times, showing how a culture of ‘fail fast’ can accelerate the process of finding winning consumer trends. Research must, surely, follow this trend to remain relevant.
But in the pursuit of speed, we must weigh the cost to quality. To make sound business decisions, we all need accurate, up-to-date data and high-quality, error-free analysis. If we don’t have that, then there really is no point to the data at all; we might as well have made the decision on gut instinct. It is the quality-speed trade-off that must be understood. And yet I find little analysis of this topic; why? Is it an uncomfortable question? Is it just ‘fake news’? Or do we all take it for granted?
The Need for Speed and Impact on Research
Speeding up research isn’t a new phenomenon, either, which may also explain why there is so little discussion on the topic; we have been pursuing it for twenty years and more. Ever since research was invented, we have been speeding up its processes. The most recent change was the major shift from telephone interviewing to online, which cut at least seven days from a fieldwork cycle. But there was a cost to quality, just one that seems an acceptable trade-off for the cut in price and time. Samples are less random than they once were, and we do have more professional respondents in our samples. We seem to have accepted those trade-offs.
|The insights industry has accepted trade-offs between cost, quality and speed in the past - but priorities are shifting over time.|
This is a personal battle, not just an industry analysis. Here at FlexMR we’ve been on a mission for the last 13 years to bring agility and efficiency into research processes. We’ve done this by building the unique InsightHub research platform, but this hasn’t been in pursuit of pure speed. Our driving philosophy has been to make it easier for brands to get closer to their customers: making it easier to iterate designs and development, making it easier to bring qualitative research into projects, and improving efficiency so that more decisions can be informed by insight. Our philosophy hasn’t been to make research as quick as possible. We’ve approached AI and machine learning with caution and only added automation to some manual data processing tasks.
But the pressure is building to go further, do more and be more radical in pursuing speed. The drive towards technological automation and machine learning has, seemingly, brought with it a new zeal for cutting timescales further. So, what are the costs to quality? Let’s look at the main steps in a research project:
- Sample design and recruitment. Who are we talking to, and how easy are they to reach? This is a tricky step because it is the foundation for the whole project. In a recent pitch for ‘agile research processes’, the client outlined their problem: “whenever an agile provider provides nat-rep assumptions, the speed of the fieldwork is fine; it’s only when we apply our sample frame that the problems occur”. They wanted us to ‘solve’ this. However, there is a simple truth behind it: the smaller a population is, the harder it will be to locate, and therefore to recruit and sample. If we ignore the quality question and only seek to ‘solve’ the speed part of the equation, then we will cut corners, because at the core, the client wants us to talk to humans who are in a minority group. Making them available at speed will mean paying them more, keeping them on a retainer or slackening the sample criteria. All are solutions, but all bring compromises to the quality of the output. They might be acceptable, but all should be acknowledged when setting the brief, so that agencies are free to respond to the trade-off openly.
- Research design. Another critical step: getting the right questions. In recent times I have witnessed very good research design, but also poor questioning styles and overly zealous questioning. Getting the right questions can take time; not a deliberately slow process, but simply a deliberative one that should go through stages of iteration to refine. Speed solutions include libraries of past questions, template scripts and crash training courses. All improve both the speed and the quality, but none will deliver as high a quality as a full deliberative process.
- Data checking/quality assurance. I nearly skipped this step, but it is one that any agency researcher will tell you is critical if the sample is to be ‘valid’: weeding out the people who chased the incentive, the disingenuous and the professional participants. Here there is a direct ‘fight’ between automation, using rules and AI, and the human eye. Both are needed, but the amount of time and effort invested in each will affect just how clean your data is. Data cleaning steps range from simple rule-based cleaning through to visually checking each response. How far down the list of cleaning steps you go depends on the question you are asking. But a choice is being made.
- Analysis and reporting. The final step, but the one we all think about most. In pursuit of automated speed, these stages can be ‘standardised’. We can create standard outputs and templates. We can apply standard analysis frames and models, all of which will speed up the process and reduce the amount of human time invested. Here, though, it is important to understand the difference between a template, a standardised test and a custom project. A custom project is just that: a tailored analysis against a bespoke, one-off objective. A template is an aid to repeating something, making the process a little less bespoke but still allowing the analysis to be unique. If you standardise the analysis and the output, however, then you must also standardise the questions (the input) so that you can control the output. Standardisation is not right for every project; standardised tests are projects where a score is required. By standardising the input, the throughput and the output, you control the test and get a score, which is not the same as finding insight.
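The recruitment trade-off in the first step can be made concrete with some back-of-the-envelope arithmetic. This is a minimal sketch; the incidence and response rates are purely hypothetical figures chosen for illustration, not industry benchmarks:

```python
# Rough sketch: how population incidence drives recruitment effort.
# All rates below are hypothetical assumptions, for illustration only.

def contacts_needed(target_completes, incidence, response_rate):
    """Expected number of people to contact to hit a sample target."""
    return target_completes / (incidence * response_rate)

# Broad, nat-rep style audience: 95% qualify, 20% respond
broad = contacts_needed(400, incidence=0.95, response_rate=0.20)

# Niche audience: only 3% qualify, same response rate
niche = contacts_needed(400, incidence=0.03, response_rate=0.20)

print(round(broad))  # 2105 contacts
print(round(niche))  # 66667 contacts
```

The same 400 completes require roughly thirty times more outreach for the niche audience, which is exactly why ‘solving’ fieldwork speed for minority groups forces a choice between higher incentives, a retained panel or looser criteria.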
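The rule-based end of the data cleaning spectrum described above can also be sketched in a few lines. The field names and thresholds here are illustrative assumptions, and in practice anything flagged by rules would still go to a human reviewer rather than being deleted automatically:

```python
# Minimal sketch of rule-based survey data cleaning.
# Field names and thresholds are illustrative assumptions only.

MIN_SECONDS = 120  # flag "speeders" who finish implausibly fast

def is_speeder(response):
    return response["duration_seconds"] < MIN_SECONDS

def is_straightliner(response):
    # The same answer for every item in a rating grid suggests disengagement.
    return len(set(response["grid_answers"])) == 1

responses = [
    {"id": 1, "duration_seconds": 480, "grid_answers": [4, 2, 5, 3]},
    {"id": 2, "duration_seconds": 45,  "grid_answers": [3, 3, 3, 3]},
    {"id": 3, "duration_seconds": 300, "grid_answers": [5, 5, 5, 5]},
]

flagged = [r["id"] for r in responses if is_speeder(r) or is_straightliner(r)]
print(flagged)  # [2, 3] - held back for human review, not auto-deleted
```

Rules like these are fast and consistent, but they only catch the patterns someone thought to encode; the human eye remains the backstop for everything else.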
|There are opportunities to improve research processes not just in fieldwork stages, but across recruitment, design, quality assurance and reporting.|
In each of the steps above there are clear ways to speed up that stage. But each decision also has an impact on quality. To be clear, the pursuit of speed has always been necessary, and businesses do need to make decisions faster. But we must pursue speed not just for its own sake, but in balance with the need for quality, with an open and conscious effort to understand the trade-off at each step.
This is incumbent on all of us: not just the technologists who create new solutions or the agencies challenged in the RFI pitch, but also the clients asking for the solutions in the first place. We must understand what it is we are seeking and the potential trade-offs it brings. We all need to dig deeper and lift the lid on the pursuit of speed in order to move forward and find the most efficient – and highest quality – solutions.