Bias is the most common cause of poor research. Eliminating it entirely is almost impossible, so it is important to plan strategies to minimise it. There is a lot to learn about bias, but getting to know the most common sources and the strategies to minimise them is a great starting point.
Bias (or error) falls into two distinct camps: sampling bias and non-sampling bias. Sampling bias concerns whether your sample is representative of the population you are researching. Non-sampling bias covers everything we do (or fail to do) during data collection that can distort the research and its results.
Here are our top 10 common examples of non-sampling bias or error in questionnaires.
Unclear or no instructions to respondents
It’s important that you give clear instructions to respondents when writing a questionnaire. Start with some broad instructions on how to fill in the survey. This should include instructions on how to navigate (e.g. “Use the Next button and not your browser’s Back button”), but also simple instructions for each question such as “select one response only” or “select as many as apply”.
Try to keep your instructions clear and succinct, otherwise respondents won’t read them. Also give respondents a heads-up on what’s going to happen next, such as “please watch this video and then answer a few questions”. Use common sense to get this right and check the instructions when you test the live survey.
The use of jargon or sophisticated words
Don’t assume that people understand technical or marketing words, and don’t use sophisticated words when a simpler alternative is available. Try to write with the audience in mind. If it’s a business survey, then you may be able to use words that you would expect your entire target to understand, but be careful. If your audience is general consumers, then don’t assume anything – write your survey for the lowest common denominator. Let common sense prevail and guide you to the best use of language.
Examples of business jargon to avoid:
"Paradigm shift", "synergistic", "thought leader", "deliverables", "end user", "bottleneck".
The use of leading questions
Leading questions create bias. By phrasing your questions in a neutral manner and having the right scale to accommodate, you can minimise this sort of bias. For example, don’t use statements that might include a strong opinion, or a statement that suggests you should agree (socially acceptable), and don’t state something questionable as a fact. If you want to state an opinion, then tell respondents that some people think “XYZ”, and then ask to what extent they agree or disagree using a balanced scale. It is also good practice to remind respondents that there is no right or wrong answer and it’s only their opinion you are interested in.
Example of a leading question:
Leading: "Are you more likely to purchase Brand X over others because of your experience?" ("Yes", "No", "Unsure")
Better: "How likely are you to purchase Brand X over others because of your experience?" ("Not at all likely" to "Extremely likely")
The use of double-barrelled questions
Double-barrelled questions are a common source of bias and should be avoided at all costs. A double-barrelled question links two ideas together so the respondent can only respond to them as a single idea rather than separately. An example is:
Should the government spend less money on the military and more on education?
This touches upon more than one issue yet allows for only one answer: one might agree that too much is spent on the military without wanting more spent on education. In this case, you should ask two separate questions instead.
Topic order affecting later questions
Topic order is important and should be thought out in the planning session, as we have touched on before, but it’s certainly worth another mention. If earlier questions on a topic will influence responses to later questions, you need to reorganise your questionnaire. For example, if respondents are shown a TV ad for brand X and later asked:
Which brands in category Y can you think of?
…then clearly brand X will come to mind more readily because we have just shown it to them. There are many instances where topic order can affect responses, so take care and audit your questionnaire for topic-based bias before launching.
Order effects in questionnaires
There is an effect in psychology known as the serial position effect, which suggests that people tend to remember the first few and last few items in a list and are more likely to forget those in the middle. To counter the bias this introduces, make sure that all answer options are either randomised or rotated so that each option spends the same amount of time at the top, middle and end of the list. This doesn’t eliminate the bias altogether, but it does distribute it evenly across the sample. For the same reason, it is also good practice to rotate entire questionnaire sections in cases where you are showing a series of ads or new product concepts.
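As a rough sketch of what randomisation and rotation mean in practice, here are the two approaches side by side in Python. The function names and the brand list are hypothetical, not taken from any survey platform:

```python
import random

def randomised(options, rng=random):
    """Return the options in a fresh random order for one respondent."""
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

def rotated(options, respondent_index):
    """Rotate the list so that, across the sample, each option spends
    the same amount of time at the top, middle and end."""
    offset = respondent_index % len(options)
    return options[offset:] + options[:offset]

brands = ["Brand A", "Brand B", "Brand C", "Brand D"]
# Respondent 0 sees the original order; respondent 1 starts at Brand B, etc.
print(rotated(brands, 1))  # ['Brand B', 'Brand C', 'Brand D', 'Brand A']
```

Randomisation scatters the order unpredictably, while rotation guarantees an exactly even distribution of positions across respondents; either way the serial position bias is spread over the whole sample rather than concentrated on one option.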
The use of unbalanced scales
The use of unbalanced scales can create a bias towards the side which contains more options. For example, let’s say you want respondents to indicate their strength of agreement or disagreement with a proposition. A bad example would be the use of the following scale:
Strongly agree
Agree
Slightly agree
Disagree
A good example would be:
Strongly agree
Agree
Neither agree nor disagree
Disagree
Strongly disagree
The latter scale is balanced and includes a midpoint, so we can capture which side respondents err on, even if only partly.
Unmasked screening questions
Masked screening questions are vital when using third-party permission-based panels, or any time you intend to incentivise respondents. A masked screener makes it impossible for respondents to know what you are looking for, removing the incentive to give misleading answers just to get into the survey and earn the reward.
Let’s say you are interested in recruiting people to your survey who have drunk a certain type of beverage in the last month or so (e.g. a cola). You could ask:
Have you drunk a cola in the past 30 days or so?
Of course, the respondent would know that they need to answer “Yes” to continue. Instead, you should ask:
Which of these drinks have you personally consumed in the last 30 days?
Bottled water
Fruit juice
Cola
Energy drinks
Other soft drinks
None of these
Having a (randomised) list of options masks the “right” answer and encourages respondents to respond honestly, greatly reducing the bias that non-qualifying respondents would otherwise introduce to your survey.
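A minimal sketch of how such a masked screener might be built and scored. The drink labels, function names and the “keep the exclusive option last” convention are all illustrative assumptions, not a real platform’s API:

```python
import random

# Hypothetical option list; "Cola" is the qualifying answer the
# respondent must not be able to single out.
DRINKS = ["Bottled water", "Fruit juice", "Cola", "Energy drinks",
          "Other soft drinks", "None of these"]

def screener_options(rng=random):
    """Shuffle the substantive options so the target is masked,
    keeping the exclusive 'None of these' option anchored last."""
    items = DRINKS[:-1]
    rng.shuffle(items)
    return items + DRINKS[-1:]

def qualifies(selected):
    """A respondent qualifies only if they picked the target drink."""
    return "Cola" in selected

print(qualifies(["Fruit juice", "Cola"]))  # True
print(qualifies(["None of these"]))        # False
```

Because every respondent sees the same plausible alternatives in a different order, there is no visible pattern revealing which answer unlocks the survey.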
Answer options that don’t match the question
This is a common mistake that can be easily avoided, and it is best demonstrated by example. You would be surprised by the number of times we have seen questions like this:
When was the last time you visited a cinema to watch a new release movie?
…with answer options that measure frequency rather than recency, such as “Once a week”, “Once a month” or “Rarely”.
Another bad example is when the responses don’t cover all of the possible options, such as:
In the last week
1 to 2 weeks ago
3 to 4 weeks ago
Here, an event that has occurred over 4 weeks ago is not accommodated. A quick fix would be to add a code that reads “More than 4 weeks ago”.
A further pitfall when working with time or number ranges is overlap, where more than one answer may hold true for the respondent:
In the last week
In the last fortnight
In the last month
All of these are easy to avoid and just require careful attention to your code frame. If respondents aren’t given the right responses to choose from, you will likely get biased or even spurious results.
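Gaps and overlaps in numeric code frames are mechanical enough to check automatically. Here is a small sketch of such an audit, assuming each option is expressed as an inclusive (low, high) range of days; the function name and range encoding are illustrative:

```python
def check_code_frame(ranges, maximum):
    """Flag gaps and overlaps in a list of inclusive (low, high)
    ranges, which should tile 0..maximum exactly."""
    problems = []
    expected = 0  # first day not yet covered
    for low, high in sorted(ranges):
        if low > expected:
            problems.append(f"gap: days {expected}-{low - 1} not covered")
        elif low < expected:
            problems.append(f"overlap at day {low}")
        expected = max(expected, high + 1)
    if expected <= maximum:
        problems.append(f"gap: days {expected}-{maximum} not covered")
    return problems

# "In the last week / 1 to 2 weeks ago / 3 to 4 weeks ago" misses
# both the 2-to-3-week window and everything beyond 4 weeks:
print(check_code_frame([(0, 7), (8, 14), (21, 28)], maximum=60))
```

Running the overlapping fortnight/month example through the same check would flag the overlaps instead, making it a handy pre-launch audit step for any time or number ranges.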
Not optimising the survey for mobile devices
Finally, it’s more important than ever to build your surveys to be optimised for completion on mobile devices. There is nothing worse than building a survey for a PC that doesn’t then scale up or down for easy completion on a tablet or even a phone. This leads to dropouts and even wrong answers. Grids (banks of statements sharing the same code frame) should be separated into single questions and buttons should be larger with question text wrapping or scaling accordingly. The only way to really avoid this is to use questionnaire software that knows which device type is being used and rescales and optimises accordingly.
There are many more sources of non-sampling bias, but these are some of the common ones. To avoid this type of bias, just use common sense and take time to audit your questionnaire and address anything that might lead to incorrect or biased answers.