How Label Analytics Solves the
Seven Nemeses of Opinion Research

Don White – Chief Research Officer – Label Analytics

Great care must be taken to design opinion research to elicit unbiased responses and reactions.

Here are seven human-nature tendencies that, in my experience, are the most pervasive pitfalls for survey designers.

The challenge is to overcome these tendencies to collect information that reflects the true opinions of respondents.

I call these the Seven Nemeses of Opinion Research.

1. Pleasing the Researcher
2. Intellectualizing Answers
3. Yea-sayers and Nay-sayers
4. Order Bias
5. Respondent Fatigue
6. Inattentive Respondents
7. Outliers


1. Pleasing the Researcher:
It is human nature: respondents in a survey want to do a good job. If they can detect which interest or client the researcher represents, whether from the wording, tone, or content of the questions or from the researcher’s body language, many respondents will unconsciously give the answer they believe the researcher wants.

Label Analytics’ choice-based ranking overcomes the tendency to please the researcher by avoiding any hint of which answer is “correct” or who is sponsoring the research. Surveys are conducted online to eliminate body-language cues. The URL of the survey site (your-opinions-matter.net) is neutral.

Each ranking choice poses just one question, which applies equally to all stimuli: “Which of the following grabs your attention first?”
All stimuli are presented equally, revealing no information about which are the subjects of the research.
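
To make the mechanics concrete, here is a minimal sketch of how successive answers to that single question can be reduced to a rank order. It is an illustration only, not Label Analytics’ actual implementation; the function names and the stand-in respondent are hypothetical.

    def rank_by_successive_choice(stimuli, pick_first):
        """Build a rank order from one repeated question:
        "Which of the following grabs your attention first?"

        `pick_first` stands in for the respondent: given the stimuli
        still on the screen, it returns the one they choose.
        """
        remaining = list(stimuli)
        ranking = []
        while remaining:
            choice = pick_first(remaining)  # respondent clicks one stimulus
            ranking.append(choice)          # it takes the next rank position
            remaining.remove(choice)        # and it leaves the screen
        return ranking

    # Example with a stand-in respondent who always picks the shortest label.
    print(rank_by_successive_choice(
        ["Label A", "B", "Label CC"],
        pick_first=lambda options: min(options, key=len)))
    # -> ['B', 'Label A', 'Label CC']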

2. Intellectualizing Answers:
Perhaps as a result of our years in structured classrooms, many respondents seem to believe there is a “right” answer to every question in a survey. Given the opportunity, those respondents will deliberate and construct a rational(ized) answer rather than giving an immediate response. Most academic researchers agree that immediate, gut-level responses provide the truest insight into how a respondent feels and best predict their future choices.

The game-like feel of the choice screen, along with the presence of several options on the screen, relieves the respondent of the belief that there is one “right” answer.

Because the respondent answers the same question across five to fifteen screens, answers generally come quickly, leaving little time to over-analyze. The resulting measures tend to be immediate, gut-level responses.

3. Yea-sayers and Nay-sayers:
When given the opportunity in a rating-scale question, some respondents tend to answer at the extremes, high or low, across the rating questions. This practice masks the respondent’s opinion through a lack of discrimination among options.
The rank-ordering structure of Label Analytics forces respondents to discriminate among stimuli; it forces them to expose their opinions.

4. Order Bias:
In any list of options, some respondents tend to favor options because of their position in the sequence. Options at the beginning and end of a list are easier to choose or seem to have more importance than those occurring in the middle. Such order bias suggests a lack of engagement by the respondent.

The Label Analytics ranking system is structured to guard against order bias by randomizing the order of stimuli before each set of screens is presented, so that each stimulus has an equal chance of appearing in each position.
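
As an illustration of the randomization step, a uniform shuffle is one standard way to achieve this equal-chance property. This is a sketch under that assumption; the exact mechanism is not specified in this piece.

    import random

    def randomize_screen_order(stimuli, rng=random):
        """Return the stimuli in a fresh random order for one screen.

        A uniform shuffle (random.shuffle implements Fisher-Yates)
        gives every stimulus the same chance of landing in every
        screen position, which is the property described above.
        """
        order = list(stimuli)  # copy so the master list is untouched
        rng.shuffle(order)
        return order

    # Each set of screens gets its own independent ordering.
    screens = [randomize_screen_order(["A", "B", "C", "D"]) for _ in range(3)]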

The screen position of each selection is tracked, and the likelihood that order bias is skewing an individual respondent’s results is calculated from probability statistics, as sketched below.

Order-biased answer sets are removed from the data as part of data validation.
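
To illustrate how such a probability check might work, the sketch below applies a binomial model: it asks how likely it is that a respondent’s most-favored screen position would be chosen this often if, after randomization, every position were equally likely. The binomial model and the cutoff are assumptions for illustration; the exact statistic Label Analytics uses is not described here.

    from math import comb

    def position_bias_pvalue(picks, n_positions):
        """Probability of the most-favored screen position being chosen
        at least this often if the respondent were choosing on content
        alone (so, after randomization, each position is equally likely).

        `picks` lists the screen position (0..n_positions-1) of each
        first choice, one entry per screen. Taking the modal position
        is a simplification; a production check would correct for
        testing every position.
        """
        n = len(picks)
        k = max(picks.count(p) for p in range(n_positions))
        p = 1.0 / n_positions
        # Upper-tail binomial probability P(X >= k), X ~ Binomial(n, p).
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # A respondent who clicked position 0 on 9 of 10 four-option screens:
    if position_bias_pvalue([0] * 9 + [2], n_positions=4) < 0.01:
        print("flag answer set as order-biased for removal in validation")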

5. Respondent Fatigue:
All respondents are subject to becoming fatigued as they progress through a survey. This can affect the quality of their attention and response.

Rating scales in particular can fatigue respondents; too many can cause even the most diligent respondent to “glaze over”. Numerous academic papers have been written on rating-scale fatigue, often expressed as “response drift”: responses tend to creep higher or lower as the respondent moves through a set of rating-scale questions.

The first defense against respondent fatigue in any survey is to keep the survey short. A Label Analytics ranking offers the benefit that the game-like feel of successively choosing the most appropriate options on each screen is engaging and less fatiguing than rating scales.

Further, respondents are asked to rank only those stimuli they consider important on each screen. This shortens the survey and eliminates its most fatiguing part: assigning values to stimuli the respondent judges unimportant.
Depending on the nature of the stimuli, respondents can often quickly rank 50, 60, or even 100 stimuli without showing signs of fatigue.

6. Inattentive Respondents:
For whatever reason, some respondents become distracted or inattentive as they move through a survey. Some quit in the middle; others simply “click through” to the end. Those who quit are obvious, but those who “click through” without giving attentive responses are a problem: their surveys look complete, but their answers do not reflect their opinions.

In a Label Analytics ranking, this lack of attention shows up as inconsistency in the responses as the respondent moves through the ranking exercise. Every answer set is automatically reviewed during data compilation and removed from the database if its inconsistency exceeds validity thresholds.
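
One classic way to quantify that inconsistency, offered here as an illustrative assumption rather than as Label Analytics’ actual measure, is to count circular triads in the respondent’s implied pairwise preferences: an attentive respondent who prefers A over B and B over C should rarely prefer C over A, while random clicking produces many such cycles.

    from itertools import combinations

    def circular_triad_count(prefers):
        """Count intransitive triads in a respondent's pairwise choices.

        `prefers` maps an (a, b) pair to True if the respondent chose
        a over b; every pair of stimuli is assumed to appear under one
        of its two orderings. More circular triads (a > b, b > c, yet
        c > a) means a less consistent, likely inattentive respondent.
        """
        items = sorted({x for pair in prefers for x in pair})

        def beats(a, b):
            return prefers[(a, b)] if (a, b) in prefers else not prefers[(b, a)]

        return sum(
            1
            for a, b, c in combinations(items, 3)
            if (beats(a, b) and beats(b, c) and beats(c, a))
            or (beats(b, a) and beats(c, b) and beats(a, c))
        )

    # A respondent whose choices form a cycle A > B > C > A:
    choices = {("A", "B"): True, ("B", "C"): True, ("A", "C"): False}
    assert circular_triad_count(choices) == 1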

The goal is that all answer sets that remain in the data set will be true measures of the respondents’ reactions.

7. Outliers:
Once a survey’s results are in, researchers occasionally find an answer set that is clearly outside the norm. Either the respondent has a distinctively different point of view on the subject matter, or the answer set is flawed in some way and does not accurately represent the respondent’s opinion. But which?

The respondent may represent the leading edge of a new trend the marketing manager will want to understand; or, if the answer set is flawed, including the outlier could pointlessly skew the measures in the report. Without a method to confirm the validity of an answer set, traditional research methods often call for excluding the outlier.

Label Analytics tests the validity of each answer set with two unobtrusive measures: a position order bias measure and an answer consistency measure. Respondents who are just “clicking through” the ranking exercise will be detected.
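
Combining the two, a validity gate could look like the following sketch, which reuses the position_bias_pvalue and circular_triad_count functions from the earlier sections. The thresholds are illustrative placeholders, not Label Analytics’ actual values.

    def answer_set_is_valid(picks, n_positions, prefers,
                            bias_alpha=0.01, max_triads=2):
        """Gate an answer set on the two unobtrusive measures.

        `picks` and `n_positions` feed the order-bias check (nemesis 4);
        `prefers` feeds the consistency check (nemesis 6). An answer
        set passes only if it is neither position-biased nor internally
        inconsistent.
        """
        unbiased = position_bias_pvalue(picks, n_positions) >= bias_alpha
        consistent = circular_triad_count(prefers) <= max_triads
        return unbiased and consistent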

So, if an outlier answer set tests as valid, we know the respondent answered the questions from a consistent point of view, and their answer set has value. We do not exclude valid outliers.

Questions? LabelAnalytics.com