My morning routine today included one of my favorite industry podcasts. It was great to hear an episode focus on a topic close to my heart – data quality in online surveys. The guest, representing a major global sample firm, made many salient points. But he also went ‘all in’ on a common industry crutch to explain why data quality is a growing problem. Essentially, he said the survey “user experience” isn’t great.
This isn’t a new argument; I’ve heard it made countless times over the years by most major sample vendors. It is also why, despite calls from those same vendors to join hands and fight the current challenges, a true collaboration will never occur.
For my sample friends, I want to explain why I make this claim in a way that represents the passion of the many researchers who roll their eyes every time they hear a panel company say this.
When a sample provider says, “the surveys are poor consumer experiences,” the researcher thinks, “this is a complete cop-out by the sample firms, who have destroyed the panels that were once producing really good data.”
The researcher will argue that sample firms own two other data quality drivers: diminished panel health and incentive degradation.
Like many of my industry colleagues, I’ve been in survey work for a very long time. I have seen surveys come through my inbox with many attributes that respondents don’t love: too many questions, painful grids, complex quotas, etc. I have also seen extensive researcher-driven innovation to improve the respondent experience, including mobile-first design, hot spots, card sorting, gamification, in-survey chat pop-ins, video open ends, and so forth.
Despite various experience improvements, the structure of survey design has been relatively constant. There’s a good reason for this stability – these structures drive the analytics and the backend insights that keep our industry thriving and growing.
For example, segmentation research is more valuable now than ever, and those surveys can’t effectively be turned into quick five-question polls. Constantly telling market researchers to gut their surveys if they want better data is akin to telling an eCommerce site to remove the payments page to generate better conversions.
It can’t happen, so look elsewhere.
Meanwhile, the business of online sample has transitioned into an auction-like model built on programmatic respondent loops modeled after AdTech. This is true whether the technology is public-facing or simply back-office within sample firms. While these highly automated processes have introduced incredible efficiencies and helped solve the supply/demand imbalance, the “disruption” has also accelerated a commoditization effect, creating a “race to the bottom” on pricing.
It’s the ultimate chicken-or-egg debate over who is to blame for the price war. No matter whom you blame, the consequence is a thrashing of incentives and a break in the respondent value exchange.
Here’s the problem with today’s model: respondents aren’t digital ad real estate. Today’s sampling technologies too often treat people as commodities. Respondents are bounced around in a terrible experience while sample company algorithms continually search for a “monetization event” (also known as a qualified complete CPI). People get just enough exposure to realize the exercise isn’t worth their while, given the lack of any real compensation. The macro byproduct is a massive drop in feasibility as real people opt out, plus hours wasted by researchers cleaning garbage out of the data.
It’s past time we accepted that our business of surveys will never be as appealing as TikTok or YouTube. But if we want to start analyzing the user experience for respondents – and I agree we should – let’s begin with the programmatic sample platforms. While we’re at it, let’s revisit the value proposition with a currency of higher value.