This is a guest post from Susan Bell - a qualitative research specialist and director of Susan Bell Research. Sue loves to conduct all forms of qualitative research, including new ways such as qualitative social media research.
She writes about and teaches best practice in qualitative research and qualitative analysis. Originally trained in quantitative research, she is always happy to design and conduct all forms of research for a broad range of industries including financial services, food & drink, government and the arts - helping her clients use research to develop better products and processes, and to communicate in the language of their customers.
I have been observing the Artificial Intelligence (AI) community's very impressive and clever development of Bots and expert systems for some time now. Some of the biggest tech companies such as Google, Amazon and Apple have been putting serious money into this kind of AI. Bots can do many amazing things, some of which seem to have a research application. Some can convert text to speech, some speech to text, and some can summarise text data in a fraction of a second.
Some people in the research industry have suggested that qualitative research moderators will soon lose their jobs to Chat Bots. So, I thought I would help things along a little by delineating exactly what it means to be an online moderator on a qualitative research project, so that we can reasonably assess which Bots would be best for the role.
I have based some of this on the thoughts of AI pioneer Marvin Minsky, especially his concept of resourcefulness. Minsky argued that humans are resourceful thinkers because we have many ways to think. In The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, he argued that humans have limited the capacity of machines to think by limiting what they can do. He was critical of machines built to suit a particular purpose, arguing that they should be more like humans and capable of divergent ways of thinking.
Clearly, the Bot would have to be taught the topic, the brief and the project background, but that should be a relatively simple matter. I am focussing here on the data collection side. These are some of the things that the best* online moderators and interviewers regularly do as a normal part of their role, and that a Bot would need to learn.
In other words, Chat Bots will have to learn to adapt, quite autonomously, to the things that people do and say. They will also need to be taught how to comply with the Code of Professional Behaviour (COPB) – no turning into a potty-mouthed racist like Microsoft's Tay!
They must behave ethically and must not do anything that might damage the reputation of market, social and organisational research. Surely that must also mean telling the truth? If we are using a Bot to moderate, then we must tell people what we are doing.
Which of these do you think will happen if research participants know they are talking to a Bot (multiple choice)?
Our moderator Bots will need to be prepared for any of these.
One of the wonders of human conversation is how fragile it is. As Paul Grice showed, for human conversations to get past 'hello' we have to trust the people we are talking to. We start off with the implicit premise that the other person is being truthful, succinct, polite and relevant. Because all of this is implicit, we are naturally highly sensitive to any hint of a breach: any hint that the person is lying, being economical with the truth, or for some reason telling us more than we need to know.
Mashable reported an exchange between an irate customer and a customer service rep (most likely a Bot) who failed to answer the question the customer had asked (failing the relevance test), denied being a Bot (failing the truthfulness test), and kept saying the same thing even when the context changed (failing the succinctness test). All of these were hints that 'Danielle' and 'Sophie' were not human. And then, of course, humans being humans, it went viral.
So, following Minsky, if our industry wants to use Chat Bots then we will have to develop Bots that can think divergently, act autonomously and truthfully within the confines of the Code, and be human enough that people will trust them enough to tell them things.
*For me, the best moderators are those who actively engage participants online. I do know that some people have chosen to automate their own moderating process: they input all their questions into the system ahead of time and then pretty much exit the building. Perhaps those are the best jobs for the Bots?