Don’t Get Led Astray: Know These 4 Survey Biases to Find the Truth
The truth is hard to find, especially when you’re asking someone else for it. Sometimes your audience won’t even respond to your pleas for feedback. And those who do respond often shroud their statements in ambiguity to avoid being hurtful or disliked. To separate fiction from reality, it helps to apply a few filters when reviewing responses. Fortunately, most of these filters are well documented as simple psychological biases. By knowing just a few of them, you’ll be able to construct surveys correctly and analyze feedback so your decisions aren’t distorted and are firmly rooted in fact.
Although the following biases are predominantly known in scientific research, they’re relevant to any professional who makes decisions based on feedback.
Selection Bias
One of the better-known survey biases, selection bias occurs when respondents systematically differ from non-respondents. As a consequence of those differences, the feedback doesn’t reflect the full population whose experience you’re trying to learn about. If you draw conclusions without considering the non-responsive part of that population, those conclusions will be skewed.
To illustrate the point, let’s take an example from my career as a product manager:
When redesigning our B2B product, my designer and I thought it would be a great idea to test our prototype with a few users. Since we had already completed a few interviews before building the prototype, we simply returned to the original interviewees for the test itself. They obliged, which was great for us. But in the back of my mind, I knew we were succumbing to selection bias. Consequently, we discounted the feedback from our user test session. Still, we got some validation and were happy with what we had learned.
To avoid this in the future, I’d likely consider:
- Knowing the makeup of our entire clientele and targeting each group to get equal representation in interviews and tests.
- Taking more time with interviews and tests to get as close to equal representation as possible.
- Reaching out to a broad group of external, yet related, users if my target users are unavailable or unable to provide feedback.
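To see how badly a skewed respondent pool can distort results, here’s a toy simulation (the satisfaction scores and response probabilities are invented for illustration): if happier users are more likely to answer your survey, the respondents’ average will overstate true satisfaction.

```python
import random

random.seed(0)

# Illustrative population: satisfaction scores from 1 to 10 for 10,000 users.
population = [random.randint(1, 10) for _ in range(10000)]

# Assumption for the sketch: a user's chance of responding grows with
# their satisfaction (score of 10 -> always responds, 1 -> rarely does).
respondents = [s for s in population if random.random() < s / 10]

mean_pop = sum(population) / len(population)
mean_resp = sum(respondents) / len(respondents)

# The respondents' average sits well above the true population average.
print(f"true average satisfaction:     {mean_pop:.2f}")
print(f"respondents' average (biased): {mean_resp:.2f}")
```

The gap between the two numbers is exactly the error you’d bake into a decision made from the survey alone.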
Survivorship Bias
Like selection bias, survivorship bias is rooted in who ends up giving you feedback. But where selection bias stems from respondents raising their hands, survivorship bias stems from the fact that only those who survived were considered for feedback in the first place. ‘Survive’ can mean many things in this context: is the company dead, or has the customer simply stopped paying? Either way, it means only those still around get analyzed, while those who are gone are excluded, significantly skewing results toward the survivors.
A popular way to illustrate this point is with successful hedge funds:
When analysts or publications calculate and report fund returns, they often focus only on funds that are still active. They therefore miss the large swath of funds that are no longer in business, which would likely pull the reported numbers down yet paint a more realistic picture. The mix of live and dead funds has lower returns, albeit only slightly, but investors who make decisions based solely on live funds will envision an unrealistically rosy outcome.
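The effect is easy to demonstrate with a toy simulation (all parameters below are invented, not real fund data): funds that hit one catastrophic year shut down, so averaging only the survivors overstates what an investor could expect.

```python
import random

random.seed(42)

# Simulate 1,000 funds over 10 years. Illustrative assumptions:
# yearly returns ~ N(5%, 15%), and a fund closes ("dies") if any
# single year loses more than 20%.
funds = []
for _ in range(1000):
    yearly = [random.gauss(0.05, 0.15) for _ in range(10)]
    survived = all(r > -0.20 for r in yearly)
    funds.append((sum(yearly) / len(yearly), survived))

live = [avg for avg, s in funds if s]
everything = [avg for avg, _ in funds]

mean_live = sum(live) / len(live)
mean_all = sum(everything) / len(everything)

# Reporting only the surviving funds overstates the average return.
print(f"survivors only: {mean_live:.2%}")
print(f"all funds:      {mean_all:.2%}")
```

Because dead funds are precisely the ones with the worst years, dropping them from the denominator can only push the average up.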
The simple way to avoid this bias is to include lost customers in your analysis. If they were active recently, your results will be richer because you’re considering the full spectrum of behavior. Splitting them out can be equally helpful: it shows how lost users behaved, and with that, you may be able to build mechanisms to avoid losing more of them.
Acquiescence Bias
When’s the last time you asked for feedback and received a simple ‘yes’ as a response? Unfortunately, this likely happens more than you’d like. There’s no single explanation, but one driver is that people without a strong opinion default to ‘yes-saying,’ a pattern known as acquiescence bias. Thanks to these undecided respondents, your feedback skews more positive than reality.
As an example, compare a question like ‘Did you like the new feature?’ with ‘On a scale of 1 to 5, how much did you like the new feature?’
The first example leads to fewer data points and less nuance, whereas the second allows the respondent to be more granular and gives you more information. To avoid this type of bias, simply avoid yes/no questions altogether unless they’re truly necessary. They may be easy to add, but that ease comes at the cost of accuracy.
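As a sketch of why the scaled version is more honest, take a hypothetical batch of responses on a 1-to-5 scale: if undecided respondents (the midpoint, 3) default to ‘yes’ when forced into a binary, the yes/no framing inflates the positive share.

```python
# Hypothetical responses on a 1-5 scale (1 = strongly dislike, 5 = strongly like).
likert = [3, 3, 4, 2, 3, 5, 3, 4, 3, 2]

# Forced into yes/no, the undecided midpoint (3) acquiesces to "yes".
yes_no = ["yes" if r >= 3 else "no" for r in likert]

share_yes = yes_no.count("yes") / len(yes_no)          # binary framing
share_agree = sum(1 for r in likert if r >= 4) / len(likert)  # genuine 4s and 5s

print(f"yes/no question:       {share_yes:.0%} positive")
print(f"1-5 scale (4 and up):  {share_agree:.0%} positive")
```

With these (made-up) numbers, the binary question reports 80% positive while only 30% of respondents genuinely liked the feature; the scale keeps the lukewarm middle visible instead of folding it into ‘yes.’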
Social Desirability Bias
Evolutionarily, we’re hard-wired to want to be liked. That need surfaces when we ask for feedback as the social desirability bias. It’s especially prevalent when we’re trying to understand behavior: asked whether they did or didn’t do something, respondents tend to scan for the most likable response so the interviewer won’t judge them unfavorably.
Just imagine the last time you asked a friend if they liked your groovy print button-down. In 99% of cases, even your friend wouldn’t tell you it was hideous. Now apply that to strangers. Why would a stranger be brutally honest with you? Even if you paid them, the human instinct to be liked will likely overpower their goal of telling you the truth.
Applying these biases as filters, both before a survey in its design and afterwards in the analysis, is critical to uncovering the truth. We all shroud things in mystery to protect ourselves, meet the expectations of others, or simply move past answering hard questions. Being aware that humans succumb to these basic instincts helps you overcome such realities and build strategies to get to the core of the situation. Use these and other biases to plan your feedback and analysis mechanisms, and you’ll spend more time in reality rather than in fiction.