It's hard to believe how different it was, a mere 15 years ago, to conduct secondary market research. There was no Internet (at least not a commercial one), no Yahoo portal, no Google search, no Web-accessible databases to tap. Almost every effort required a phone call, a trip to the library, a subscription to a third-party source, or a read-through of hardcopy reference material.
How times have changed.
But, not always for the better. The seemingly bottomless pit of content that makes up today's Web poses some distinct challenges to marketers looking for precise, credible facts on which to build a strategy.
This article points out some of the online research traps that can leave your marketing strategy sitting atop the proverbial house of cards.
Misleading Definitions
It is not uncommon for a data source to use imprecise terminology. My favorite example is the oft-quoted number of cell phone users in the US. You constantly see or hear 255 million, since that number is prominently displayed on the home-page "ticker" of the wireless trade association CTIA.org. It seemed high to me, and when I contacted CTIA's head researcher, he admitted the number counts subscriptions, not subscribers; one person can hold more than one subscription.
When I later saw the 255 million number used, once again, in The New York Times April 13 "Week in Review" section, I requested a correction. The Times did its own research and two weeks later published a correction, with a new number of 226 million.
Other data definitions to double-check include households vs. individuals, visits vs. visitors, and total population vs. Internet users. These types of mistakes can really throw off your work in sizing market opportunities.
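To see how much damage a definition slip can do, here's a quick back-of-the-envelope check, using the CTIA numbers above (the script itself is just my illustration, not part of any published methodology):

```python
# Market sizing with the wrong definition, using the CTIA example above.
subscriptions = 255_000_000   # the number on the ticker
subscribers = 226_000_000     # the corrected count of actual people

overstatement = subscriptions / subscribers - 1
print(f"Subscriptions overstate the audience by {overstatement:.1%}")
# -> about 12.8%, enough to skew any per-person revenue model
```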
Bad Math
Most mistakes in this category come from analysts interpreting data, comparing one set of numbers with another.
For example, research reports frequently reference market growth. Be careful: an analysis might say that a company went from 10% to 11% market share, achieving 10% growth (1 divided by 10). Technically true, but misleading; the company gained only one percentage point of share, and the claim is easily misinterpreted by your researcher.
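To keep the two measures straight, here's a tiny sketch (illustrative only) that separates percentage-point change from relative growth:

```python
def share_change(old_share, new_share):
    """Separate absolute share gain (points) from relative growth."""
    points = (new_share - old_share) * 100          # percentage points
    relative = (new_share - old_share) / old_share  # growth rate
    return points, relative

points, relative = share_change(0.10, 0.11)
print(f"Gain: {points:.0f} percentage point(s); relative growth: {relative:.0%}")
# -> Gain: 1 percentage point(s); relative growth: 10%
```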
A more dangerous form arises when interpretations are built on "scaled" qualitative information. For example, if 60% of people rated factor A a "5," and 30% rated factor B a "5," the analyst might conclude people are twice as likely to choose A over B. Nonsense. Don't be swayed by this kind of math. (The scale may use numbers, but it's ordinal, not quantitative; it could just as easily use "OK, good, better, awesome, and way cool...")
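Here's a hypothetical illustration of why that ratio doesn't hold up. Even setting aside that averaging an ordinal scale is itself shaky, two made-up rating distributions can show a 2:1 top-box lead for A while B is rated higher overall:

```python
# Hypothetical 1-5 rating distributions (share of respondents per score).
a = {5: 0.60, 1: 0.40}   # 60% rate A a "5" -- but 40% rate it a "1"
b = {5: 0.30, 4: 0.70}   # only 30% rate B a "5" -- yet nobody rates it low

def mean_rating(dist):
    return sum(score * share for score, share in dist.items())

print(f"Top-box ratio A:B = {0.60 / 0.30:.0f}:1")                  # -> 2:1
print(f"Mean rating: A = {mean_rating(a):.1f}, B = {mean_rating(b):.1f}")
# -> A = 3.4, B = 4.3; the "twice as likely" conclusion collapses
```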
Unclear Percentages
A common example in this category relates to surveys that allow respondents to check off multiple answers, where the total response can be well over 100%. If the source is not clear, or your researcher isn't paying close attention, some ludicrous conclusions might find their way into your marketing strategy.
Would you believe there are also surveys out there that add up to less than 100%? Last fall I came across a report on DVR usage, with key data shown in two bar charts. One mapped 36% of total TV viewing time as "real time"; the other showed 32% as "time shifted."
I contacted the research company to find out about the remaining 32%—what other definition of time had they come up with? The answer led me to conclude that this particular study was completely unusable.
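A simple sanity check like the following sketch (my own, with a made-up tolerance) can catch both the over-100% and under-100% cases before they reach your strategy deck:

```python
def check_total(label, shares, tolerance=0.02):
    """Flag a category breakdown whose shares should sum to 100%."""
    total = sum(shares.values())
    if total > 1 + tolerance:
        print(f"{label}: sums to {total:.0%} -- likely a multi-select question")
    elif total < 1 - tolerance:
        print(f"{label}: sums to {total:.0%} -- ask the source what the "
              f"missing {1 - total:.0%} represents")

# The DVR study above: 36% "real time" + 32% "time shifted" = 68%
check_total("TV viewing time", {"real time": 0.36, "time shifted": 0.32})
```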
No Check on Reality
This trap is the most challenging, as you need some experience and a healthy dose of skepticism to spot it. It is most common with emerging topics (where almost no prior research exists) and with surveys of customer intent (what people say they will do versus how they might actually behave).
The first example comes from a story about genealogy that I read earlier this year. It cited a study claiming 75% of the US "was interested in genealogy." The skeptic in me didn't believe that 75% of the population could even define genealogy. But, genealogy being an emerging topic, the claim was hard to refute factually.
For these kinds of traps, I use my "rule of 20." I survey 5 friends, 5 work colleagues, 5 family members, and 5 fellow citizens (bus driver, deli clerk, lobby security guard, etc.) to see whether the data is in the ballpark. Try it yourself using this genealogy example; see whether you get 75%. (I didn't come close.)
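For what it's worth, a little statistics explains why a 20-person spot check can expose a wildly inflated claim even though it proves nothing precise. This sketch (my addition, not part of the original "rule of 20") computes a Wilson score interval for a small sample:

```python
import math

def wilson_interval(hits, n, z=1.96):
    """95% Wilson score interval for a proportion from a small sample."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Suppose 4 of my 20 informal respondents express interest in genealogy.
low, high = wilson_interval(4, 20)
print(f"Observed 20%; 95% interval roughly {low:.0%} to {high:.0%}")
# -> roughly 8% to 42%: wide, but nowhere near the claimed 75%
```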
A second example comes from a research company that claimed, some months ago, that over 40% of US Internet users had watched a full episode of a TV show online. This time I had seen plenty of other data on full-episode viewing that didn't square with the claim, so I contacted the source directly (see a pattern here?). It turns out the 40% was an amalgamation of several inputs, only half of them based on true behavioral tracking.
Conclusion
To be sure, this is not an exhaustive list of the traps awaiting us as we mine the depths of online research sources. For now, keep these simple guidelines in mind:
- Ensure data definitions are precise.
- Check the math, especially on data comparisons.
- Watch those percentages (totaling more than or less than 100%).
- If it doesn't sound right, it probably isn't: Do a reality check.
- When all else fails, don't be afraid to contact the source.
Unfortunately, there is no quality control or Good Housekeeping seal of approval for online research. Which means, of course, that's one more task the marketing department has to take on.