The Market Research Society recently released a three-part podcast series on data quality, centered around the question of whether there is a data quality crisis in the market research industry.
Like many on the podcast, I wouldn’t describe data quality in B2B market research as a crisis, but it is certainly a significant challenge, particularly for online quantitative research. Tighter budgets and the demand for quicker and cheaper data are major drivers of this problem. Additionally, organized fraud from survey click farms and AI bots is especially prevalent in B2B research, where the incentives are often higher.
Niche B2B audiences are also much less likely to be on online panels. There are very few reputable B2B panels; most are consumer panels that have profiled the employment information of their panelists. It is very unlikely that you will find 200 CFOs of FTSE 100 companies on a panel, firstly because there are only 100 of them in total, and secondly because they are very likely to have better things to do than complete online surveys for low incentives. Establishing the feasibility of finding B2B audiences, however, is a topic for another day.
So, what can researchers do to ensure that we are collecting robust, high-quality data online?
Rigorous Quality Assurance Practices
A range of standard data quality checks can easily be built into surveys. It’s not unusual for us to reject 20-30% of completed surveys due to poor quality. We use Forsta as our preferred survey platform, and within it we check a range of things, including the following (a rough sketch of how such checks might be automated is shown after the list):
- Time taken to complete the survey or sections of the survey: This helps identify respondents who rush through the survey without giving thoughtful answers.
- Flatlining: This occurs when respondents give the same answer throughout lists or grid questions, indicating a lack of engagement.
- Duplicate text: Identifying text that has been copied and pasted into open answers throughout the survey helps spot non-genuine responses.
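As a rough illustration of how these checks might be automated once the data is exported, the Python sketch below flags speeders, flatliners, and repeated open-ended text in a pandas DataFrame. The column names (a hypothetical “duration_secs” plus grid and open-end columns) and the thresholds are assumptions that would need to be set per survey; Forsta and similar platforms also offer their own built-in versions of these checks.

```python
import pandas as pd


def flag_quality_issues(df: pd.DataFrame,
                        grid_cols: list[str],
                        open_end_cols: list[str],
                        min_duration_secs: int = 300) -> pd.DataFrame:
    """Return one boolean flag column per check, plus an overall flag."""
    flags = pd.DataFrame(index=df.index)

    # Speeders: completed the survey faster than a plausible minimum time.
    flags["speeder"] = df["duration_secs"] < min_duration_secs

    # Flatlining: identical answers across every item in a grid question.
    flags["flatliner"] = df[grid_cols].nunique(axis=1) == 1

    # Normalise open-ended text before comparing it.
    normalised = df[open_end_cols].apply(lambda s: s.str.strip().str.lower())

    # The same text pasted into several open-ended questions by one respondent.
    if len(open_end_cols) > 1:
        flags["repeated_open_ends"] = normalised.nunique(axis=1) == 1

    # Open-ended text that also appears verbatim in other respondents' data.
    flags["copied_from_others"] = normalised.apply(
        lambda col: col.duplicated(keep=False) & col.notna()
    ).any(axis=1)

    flags["any_flag"] = flags.any(axis=1)
    return flags


# Usage: review flagged cases rather than rejecting them automatically, e.g.
# checks = flag_quality_issues(responses, ["q5_1", "q5_2", "q5_3"], ["oe_1", "oe_2"])
# suspect = responses[checks["any_flag"]]
```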
The research team sense-checks the data collected a couple of times a week and flags open-ended answers that are of poor quality. We also often design questionnaires to include attention checks (e.g., asking what shape is shown on the screen), questions that identify illogical responses, and tests of respondents’ knowledge of the category.
In addition to these checks, we also use Research Defender, a tool designed to eliminate fraudsters and bad actors. The tool identifies bots and fraudulent respondents through a range of features, such as IP address checks, an indication of the volume of surveys taken by the respondent in the last 24 hours, and country/location checks.
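Research Defender is integrated through its own tooling, so the snippet below is not its API. It is only a generic sketch, in the same pandas style as above, of how the kinds of signals it reports (shared IP addresses, implausible survey volume, country mismatches) could be flagged in exported respondent metadata; all column names and thresholds are hypothetical.

```python
import pandas as pd


def flag_fraud_signals(meta: pd.DataFrame,
                       expected_country: str = "GB",
                       max_surveys_24h: int = 20) -> pd.DataFrame:
    """Flag rows in hypothetical respondent metadata that warrant review."""
    flags = pd.DataFrame(index=meta.index)

    # Several completes arriving from the same IP address.
    flags["shared_ip"] = meta["ip_address"].duplicated(keep=False)

    # An implausible number of surveys taken in the last 24 hours.
    flags["high_volume"] = meta["surveys_last_24h"] > max_surveys_24h

    # Geolocated country does not match the market being surveyed.
    flags["country_mismatch"] = meta["geo_country"] != expected_country

    flags["any_flag"] = flags.any(axis=1)
    return flags
```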
However, data quality checks alone will not overcome poor screening or survey design.
Avoid Revealing the Topic in the Survey Screener
The whole purpose of a screener is to ensure the person qualifies to participate. To do this fairly, the golden rule is not to give the game away by revealing the topic too early. Screeners should begin with general qualification questions and become more specific, while only including questions that determine someone’s eligibility to participate.
Caution around revealing the survey topic begins before the questionnaire and extends to the email invites. For example, last week I received an email inviting me to “participate in an online survey for Fleet Managers.” Because I already knew the survey was targeting Fleet Managers, any screening would have been easy to pass. Sure enough, the first question asked about my involvement in the decision to choose vans for my business, and anyone could have guessed which answer options would qualify them for the study. This is a good example of how not to do it.
Three things to bear in mind for screening:
- Don’t reveal the topic of the survey too early.
- Include general screening questions first (e.g., employed/unemployed, country, and size of organization).
- Use dynamic screening, where questions change order, making it harder for bots to keep submitting responses (a simple sketch of this follows the list).
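Most survey platforms, including Forsta, support question randomization natively, so the Python sketch below is only a platform-agnostic illustration of the idea: the general screener questions are shuffled per session, so a bot replaying a fixed answer pattern no longer lines up with the questions being asked. The question IDs and wording are hypothetical.

```python
import random

# General qualification questions that can safely appear in any order.
GENERAL_SCREENERS = [
    ("S1", "Which of the following best describes your employment status?"),
    ("S2", "In which country are you currently based?"),
    ("S3", "Approximately how many employees work at your organisation?"),
]

# More specific qualification questions, kept until the end of the screener.
SPECIFIC_SCREENERS = [
    ("S4", "Which of these areas, if any, do you have decision-making responsibility for?"),
]


def build_screener(session_seed: int) -> list[tuple[str, str]]:
    """Return the screener for one session: general questions in a shuffled
    order, followed by the more specific qualification questions."""
    rng = random.Random(session_seed)
    general = GENERAL_SCREENERS[:]
    rng.shuffle(general)
    return general + SPECIFIC_SCREENERS


# Each new session gets a different ordering of the general questions, e.g.
# for qid, text in build_screener(session_seed=42):
#     print(qid, text)
```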
Respondent-Centered Survey Design
We all know that attention spans are getting shorter, so surveys should be kept as short as possible and be written to suit the interview method. A good rule of thumb is to think like a copywriter and remember that survey respondents are humans.
Some principles to keep in mind:
- It’s estimated that less than half of respondents will properly read questions with more than 35 words. Remove unnecessary words (a quick word-count check is sketched after this list).
- Only use question instructions when you really need to. For example, if you’re using a ranking scale and need the answers to be ordered from the most to the least important, include instructions. However, don’t explain how to answer a 1-10 scale question if the scale is already labeled.
- Avoid using marketing or research language as much as possible. Words like “concept,” “touchpoint,” and “attribute” are not widely used in everyday language.
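As a trivial illustration of that 35-word rule of thumb, the sketch below flags any question wording that runs long; the question texts here are made up for the example, and the threshold is the guideline quoted above rather than a hard rule.

```python
# Hypothetical question texts keyed by question ID.
QUESTIONS = {
    "Q1": "How satisfied are you with the service you received?",
    "Q2": ("Thinking about all of the different interactions you may have had "
           "with our organisation over the course of the last twelve months, "
           "and taking into account every channel you might have used, how "
           "would you rate your overall level of satisfaction with the "
           "service that you personally received?"),
}

MAX_WORDS = 35  # rule-of-thumb threshold from the guideline above

for qid, text in QUESTIONS.items():
    n_words = len(text.split())
    if n_words > MAX_WORDS:
        print(f"{qid}: {n_words} words - consider trimming")
```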
Conclusion
While data quality in online research is a growing problem, there are steps that can be taken to ensure high-quality data is collected. However, this may bring additional cost and time to your project—a small price to pay when conducting research to make large strategic decisions.
To discuss how our tailored insights programs can help solve your specific business challenges, get in touch and one of the team will be happy to help.