Man vs. Machine: An Insight Community Analysis Challenge (IIeX Europe 2018)
Those of you who attended IIeX Europe earlier this month may have had the opportunity to hear me present the outcomes of our recent qualitative analysis exploration, ‘Man Vs Machine’. At FlexMR we very much believe in innovation and education, so I was delighted to win the Best New Speaker Award for this presentation in particular.
For those of you who weren’t able to make it to Amsterdam, I have condensed my 20-minute presentation into the blog that follows. It includes some fascinating learning for qual researchers, both agency and client side, that I am keen to share widely.
Man Vs Machine: The Insight Community Analysis Challenge
If you have read any of my previous blogs you’ll know that I have sat very much on the ‘man’ side of the debate – the side that believes qual (or certain types of qual at least) needs a human to analyse it accurately and effectively, and that AI cannot completely replace the human touch. But what if? What if now, as we sit here today, I’m wrong? And so the challenge was born...
Methodology
Rather than using an open text question in a survey (which we know works for text and sentiment analysis software), we wanted to see whether text and sentiment analysis can work for true scaled qual.
By true scaled qual, we mean an online insight community where there is collaboration between participants, where the moderator gets involved, prompting and probing to unearth feelings and emotions. Here, participants aren’t just responding to the initial research question, they are discussing the wider issues surrounding the topic and building the context. Answers are reflective and considered.
Our insight community centred around the home and living category, from home styles to product advertising, inspiration, browsing and purchasing. It ran over 5 days with 50 participants, and in those 5 days alone our participants generated 62,000 words.
Analysis
On completion of the insight community, we conducted two analyses, one via text and sentiment technology and one in the ‘traditional way’, via a qual researcher. The text and sentiment analysis output was then passed on to me to review and conduct a third combined analysis, independent of the others. All three approaches were timed and the outcomes cross-examined.
To give you some context, the text and sentiment analysis software we used for this experiment was a very basic, entry-level package. Our aim: to see whether machine analysis of this rich data offered any added value in principle (before taking it further).
Outcome #1 - Qual Analysis Speed
So, let’s get to it – the first question you are probably all asking: who was faster?
Compared to the human only approach, the machine was a very clear winner.
However, speed isn’t everything. Despite the ever-increasing need for responsiveness at the forefront of everyone’s minds, there’s no point in being quicker if you’re not providing quality insight. Insight must remain accurate and, above all, actionable.
The main positive that came from the combined approach: whilst it was slower than the machine, it wasn’t anywhere near as slow as the human-only analysis.
Outcome #2 - Qual Analysis Focus
One of the main contributors to analysis speed is the structure associated with each of the three approaches.
The human analyst has no structure initially; they literally have pages and pages of transcript to read through and make sense of. This reading and sifting, however, does ensure an understanding of the context.
The machine, with its pre-programmed algorithms, provides structure in its output, but it doesn’t provide the context behind the charts – severely detrimental to insight quality.
The machine output is an excellent starting point for the combined approach. It provides an immediate focus for in depth exploration and chart contextualisation. Without doubt this is one of the biggest time saving aspects of text and sentiment analysis for insight community outcomes.
Outcome #3 - Qual Analysis Objectivity
Even the most neutral researcher will ‘go in’ with some element of confirmation bias. Because they are expecting (however subconsciously) to see or not see something, they may sift through the data with a focus on proving or disproving their expectations. The home and living ‘Amazon surprise’ revealed by the machine provided us with a great example of this.
The qual researcher didn’t expect Amazon to play a significant part in vintage shopping experiences. Their analysis did highlight Amazon as being heavily featured in generic home and living shopping and purchasing but it didn’t pick up on vintage specifics.
The text and sentiment analysis highlighted Amazon’s role (the 4th most mentioned brand) in vintage shopping. It didn’t explain why, however. The machine has no confirmation bias but, as previously mentioned, it doesn’t give any context at all.
With the benefit of the machined charts, I examined the vintage Amazon association in a human transcript deep dive, pulling this and the reasons for it out as key learning. For manufacturers and retailers dealing in handmade, vintage or bespoke home and living items, this finding, as well as a true understanding of its basis, would be fundamental.
Outcome #4 – Qual Analysis Sentiment
Text and sentiment analysis software can produce a sentiment analysis for any and every community discussion output you choose to run through it. Again though, there is no context to explain the sentiment chart reasoning, so it’s impossible to understand what’s driving it. Indeed, my human transcript deep dive demonstrated that on many occasions the machine-based sentiment analysis was totally incorrect.
The big learning here: when using automated sentiment analysis on insight community qual, use it with caution! Results should never be presented as definitive sentiment until you have done a human sense check.
Why doesn’t the machine always provide accurate sentiment? Quite simply, insight community discussions often expand into and flow between different subjects, and as sentiment analysis doesn’t read context, it doesn’t know which subject the sentiment relates to. It will also conclude a negative sentiment based on a high number of negative words within the comment stream, where the contextual meaning is actually much more light-hearted and positive.
Because of this, I found that sentiment analysis worked best with topics that are more concept orientated, or discussions that are ring-fenced by their nature. In these circumstances it was very effective.
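To illustrate the word-counting problem (this is a hypothetical sketch, not the software we used in the challenge – the word lists and comment are invented for demonstration):

```python
# Minimal illustration of lexicon-based sentiment scoring (hypothetical
# word lists; not the actual software used in the challenge).
NEGATIVE = {"awful", "terrible", "hate", "worst", "pain"}
POSITIVE = {"love", "great", "brilliant", "perfect", "cosy"}

def naive_sentiment(comment: str) -> str:
    """Score a comment by counting lexicon hits, ignoring all context."""
    words = [w.strip(".,!?'").lower() for w in comment.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A light-hearted, ultimately positive remark about home furnishing...
comment = "Assembling flat-pack is awful, the worst pain ever, but I love the result!"
print(naive_sentiment(comment))  # three negative hits outweigh one positive -> "negative"
```

The comment is clearly upbeat to a human reader, yet the negative word count drags the score down – exactly the kind of misclassification my deep dive kept uncovering.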
Outcome #5 - Qual Analysis Empathy
Having no statistical validation to back up their interpretations, qual researchers have a tendency to sit on the fence and not commit to a sentiment in their analysis unless it is especially clear cut.
Perfectly highlighting this point, our traditional qual analyst didn’t commit to an overall sentiment for the community discussions surrounding brand charitable promotions. This, on the other hand, was one of the few machined sentiment charts that I did pull out in the combined approach. Having sense checked the sentiment for accuracy, my deep dive unearthed some fundamental home and living brand associations that would take our insights to the next level. My analysis established the ‘why’, and I was then in a position to run my own text analysis to statistically validate it (or otherwise).
Without doubt the machine added value here. It took me down a path our traditional qual analyst didn’t go down, towards a deeper understanding. The combined approach established empathy supported by statistics. On the client side, the additional insight provided would carry incredible weight.
Man Vs Machine: The Insight Community Analysis Conclusion
As has likely become apparent when reading our outcomes above, a traditional insight community analysis approach will give you ‘the what’ and ‘the why’ but it lacks the statistical validation that many clients and stakeholders look for before making a business decision.
The machine gives you ‘the what’ and the statistical validation, but there’s no ‘why’. In reality, you may be misguided by relying solely on the machine analysis.
The combined approach, using text and sentiment analysis followed by a human community qual deep dive, ticks every box - the ‘what’, the ‘why’ and statistical validation. It takes the empathic positives of the traditional approach and builds on this with the focus, objectivity, sentiment and statistical validation provided by the machine. Collectively they elicit a deeper and more confident interpretation.
So, we proved that including a machined analysis element does in fact aid scaled qual insight. To help those of you interested in following in our footsteps, we’ve put together a 6-step ‘man and machine’ analysis process for success. It is naturally slightly different to the challenge process. Based on my experience as the combined-approach insight community analyst, the real-world process that follows is the optimum in terms of agility.
Step 1. Assuming the analyst is different from the moderator, the first step is to immerse them in the topic. They need to familiarise themselves with the topic, the discussion guide and the research objectives.
Step 2. The analyst should sanitise the transcript, making sure they eliminate moderator comments – vitally important, as these will bias the machine analysis, leading to inaccurate interpretation.
Step 3. Run the text and sentiment analysis software.
Step 4. The analyst should then review the text and sentiment output, identify emerging themes and run any additional charts required based on their knowledge of the research objectives.
Step 5. At this point the analyst and moderator should confer. The moderator will know if there are any themes that haven’t been picked up by the machine analysis. If there are, the analyst can run extra charts as necessary.
Step 6. Finally, the analyst conducts the deep dive, using the text and sentiment analysis charts as a guide for focus and objectivity.
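As a rough sketch of the sanitising in step 2 – assuming a simple hypothetical transcript export of one "Speaker: comment" pair per line, which is not any particular platform's actual format:

```python
# Sketch of step 2: strip moderator lines from a transcript before it is
# fed to the text and sentiment software. Assumes a hypothetical
# "Speaker: comment" line format, not a real platform export.
MODERATORS = {"Moderator"}  # assumed moderator label in the export

def sanitise(transcript: str) -> str:
    """Keep only participant comments; drop moderator prompts and probes."""
    kept = []
    for line in transcript.splitlines():
        speaker, _, comment = line.partition(":")
        if speaker.strip() not in MODERATORS and comment.strip():
            kept.append(comment.strip())
    return "\n".join(kept)

raw = (
    "Moderator: What drew you to that lamp?\n"
    "P01: The vintage look, and I found it on Amazon of all places.\n"
    "Moderator: Tell me more about that.\n"
    "P02: Same here, I expected Etsy but Amazon had more choice."
)
print(sanitise(raw))  # only the P01 and P02 comments remain
```

Removing the prompts matters because a moderator who repeatedly probes "how did that make you feel about Amazon?" would otherwise inflate the machine's brand-mention and sentiment counts.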
By doing this, not only will you see a huge time saving compared to traditional qual analysis (we saved roughly 50% of analysis time on a 5-day community), you will also add value client side with improved insight quality. On top of that, your clients or stakeholders will have the statistical validation required to confidently make a business decision off the back of your insight community, rather than needing to run any follow-up quant. Now, that’s impact!
Charlotte’s research and communication experience is invaluable when working with brand and insight managers. Her determination and inquisitiveness enable her to provide crucial support to our clients, delivering accurate insight to ensure client research success.