At the recent ASC International Conference, I had the pleasure of running a workshop about the challenges of improving participant experience. We all know what a good survey is, right? However, if we all know that, how do we keep designing poor ones?
When I have posted about this subject in the past (in particular, when I outlined the need for a global benchmark, the Survey Quality Score), the counterpoint has often been that it isn't needed because we all know what good design is.
One of the key reasons, I believe, behind this continued prevalence of poor-quality surveys is the absence of the participant from the 'design table'. Whenever a survey is drafted, there is a negotiation between the Researcher and the Stakeholder who commissioned it. Whilst both understand the principles of good participant experience, at that point of design, neither is motivated to seek the best outcome for the consumer.
Trade-offs come into any decision and survey design is no different. When under pressure to balance the research needs, the business objectives and the participant experience, inevitably trade-offs get made. This was the basis of the workshop: to evaluate the impact of that negotiation, the type of trade-offs that get made, and whether the outcome would improve if the participant were, somehow, represented during that design phase.
Tweet This
Survey design involves making trade-offs between stakeholder, researcher and participant preferences. But what effect is a lack of participant representation having on research?
The Damage of Poor Value Exchange
According to data from the GRBN Global Trust Survey, 67% of people believe surveys benefit organisations, yet only 54% say that surveys benefit consumers. That is a 13-point value gap in favour of organisations. In the UK, the gap is widest at -21%; it is -12% in the USA and -19% in Australia. Only in South Korea is the perception positive, with surveys seen as benefitting consumers more than organisations.
Our own FlexMR data shows that many people agree surveys are both an 'essential part of business' and 'the way for organisations to find out about how to sell their products', and only 12% think they are just a 'tickbox checking' exercise. So consumers can absolutely see the need organisations have for surveys; they just see little value for themselves and cannot evaluate the end benefit they derive.
Perhaps the most damning statistic in the Global Trust Survey, though, is that perception of survey enjoyment is falling: globally it has dropped from 35% in 2022 to 31% in 2024, meaning fewer than a third now find surveys an enjoyable experience.
Commercial Risks
Poor experiences matter a great deal, as evidence shows that a poor experience influences perception of the next survey; even where the next survey is a very positive one, the overall perception of surveys remains negative, illustrating a form of negativity echo that keeps reverberating. This is likely to be a major barrier when it comes to clicking on the next survey link.
There is also a serious commercial risk to poor survey experience: in 2024, 67% of customers said they drop out of surveys that are too long, and 19% stated they have stopped doing business with a company because its satisfaction surveys are too long. What is more, almost a quarter say they have stopped doing business with a company because it sends out too many surveys.
The Trade-Off Game
The workshop I conducted at the ASC Conference simulated the effect that negotiation and trade-off have during the survey design process.
Each person was assigned a role to play – a Researcher, a Participant or a Stakeholder. All were given the same list of 15 survey attributes, an equally balanced set of design objectives: the 'building blocks' of every survey. Five were weighted towards the needs of the Researcher (balanced question wording, good participant screening, detailed answer options, etc.), five were weighted towards the Participant (short, simple-to-understand questions, engaging design, short length, etc.) and five towards the Stakeholder (answers pressing business challenges, answers the objectives, etc.).
There were three 'rounds' to the negotiation. In the first, each individual picked the eight elements they wanted in the survey. Then they formed groups of like-minded 'role personas' to compare notes, discuss and select the five they could all agree were most important to them as a group. The final round was the actual negotiation, with groups formed differently: some had no participant, some had more researchers, some had more stakeholders and some were equally balanced.
The Results Unveiled the Negotiation Effect
There was a difference: the only group without a participant present produced a different output. Whilst not radically different, it was the only group not to value any aspect of participant experience (or even reward) as necessary; all had been traded away throughout the process. Ultimately, the three things that mattered most to them were that the survey 'can be used to solve pressing business challenges', 'has been tested thoroughly' and 'promotes security and authenticity'.
All the other groups included some form of participant balance, reflecting a negotiation that required the participant to 'be given something' from the process. The element participants were most often given was 'there is some personal benefit or reward'. That was the strongest winner, as it was something all roles could agree benefited them, over and above whether the experience was enjoyable. Even this, though, was left out by the group without participants.
Tweet This
When participants aren't included in the survey design process, it can be easy to favour attributes or choices that lead to a poor experience.
The group did suggest they might be 'over-playing' the role rewards play 'as a way of compensating for rubbish'. However, it does reflect the prevalence of rewards in our industry and, worryingly, suggests we are still not rewarding participants enough, given that large negative perception gap. Surely, if we were compensating them adequately, that gap would not still exist? All added the caveat that the negotiation assumed the participant was ultimately adequately rewarded, which the group concluded probably isn't the case with current reward structures.
Another interesting point was that survey length was rarely discussed, except in the very early stages, and those who did mention it quickly dropped the topic during early negotiations. This does reflect reality, I think, and partly explains why we still end up with so many surveys that are too long, even though we know they can create poor experiences and even poor-quality data.
It is worth reinforcing that the group without any 'participants' ignored both rewards and length in their final selection. Clearly, there is a 'trade-off' effect at work, and it partly explains why we end up with longer, poorly rewarded surveys.
This workshop has shown that, despite us knowing what good survey design might look like, the participant experience can often get 'pushed out' amid the pressures and trade-offs of the survey design process. This is something that requires more thought and research going forward; it is not enough to simply reiterate what good looks like, or to assume people always act in the best interests of the participant.
There is a wider debate to be had around incentive and reward levels, but there is also an argument for creating a Survey Quality Score benchmark purely as a means of bringing the participant into the design discussion; if such a benchmark existed, it could act as an arbiter and proxy for the participant being present at the design table.