A wise but anonymous marketer once said that a market research report that gets described as "interesting" has failed. It's only when it's "useful" that it gets the pass mark. After all, what's the point of interesting research if it can't be put to use?

The sad truth is that most market research is not very useful and more often than not ends up as a doorstop in the marketing manager's office. Not to mention that large-scale research with qual and quant phases is damned expensive.

It's also easy to forget the hassle it puts customers through. Rarely does the committed customer who responds to the survey ever hear anything back or see any tangible differences. No matter how loyal they are, next time they're likely to say no when a researcher comes knocking/ringing/emailing.

We put poor usefulness down to two factors:

  1. Poor explanation: Survey research doesn't explain much, but to be actionable the research must explain why things are happening.
  2. "Dragnetting": This is where the users of the research are part of the problem. Too little work is done before and after the research to get ducks in a row to ensure something gets done. The common dragnetting attitude is "let's do some research, see what it tells us, and then decide what to do." In other words, fuzzy prep leads to fuzzy results.

Poor Explanation

So often, companies find that market research results do not align with frontline reality and financial results. Sampling error, poor response rates, and poor questionnaire design combine to produce results that may fluctuate wildly and leave the client with no explanation of why.

With no explanation for their results, the hapless research company is left floundering, trying to justify itself and its numbers. Survey results get "taken with a grain of salt" by managers (that is, managers accept the results that suit them and ignore those that don't).

Despite this, many companies are addicted to expensive large-scale sample surveys, valiantly trying to use the results to measure the success of their efforts and to guide decision-making.

The question is, Can research users wean themselves off those volumes of statistics and graphs? Doing so requires letting go of market research as a substitute for common sense, a focus on what the research is meant to achieve, and a commitment to acting on the voice of the customer.

A "longitudinal" approach to research provides far greater explanation than the traditional "cross-sectional" big-sample method. The concept is to look along a customer relationship or experience (hence "longitudinal") by re-contacting the same respondents to see how they think things have changed, if they have, and their explanation for why.

In contrast, typical market research depends on samples (cross-sections) and so talks to different people each time. The opportunity to explain change by asking the same customers once and then again later is lost, so complicated statistics are pressed into service to paper over the gap.
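To make the contrast concrete, here is a minimal sketch in Python (using pandas; the respondent IDs, scores, and column names are illustrative, not drawn from any real study) of what the longitudinal approach buys you: because the same people answer twice, their responses can be joined person by person, change can be measured directly, and the customer's own explanation rides along with the number.

```python
import pandas as pd

# Two survey waves answered by the SAME respondents, keyed by ID.
# All names and numbers here are hypothetical.
wave1 = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "satisfaction":  [7, 5, 8],
})
wave2 = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "satisfaction":  [8, 4, 8],
    "reason": ["staff more knowledgeable",
               "constant contact becoming irritating",
               "no change noticed"],
})

# The person-level join is only possible because the same customers
# were re-contacted; a cross-sectional sample has no matching IDs.
panel = wave1.merge(wave2, on="respondent_id", suffixes=("_t1", "_t2"))

# Real change per customer, with the customer's own explanation attached.
panel["change"] = panel["satisfaction_t2"] - panel["satisfaction_t1"]
print(panel[["respondent_id", "change", "reason"]])
```

With a cross-sectional design, only the two wave averages could be compared; the person-level "change" and "reason" columns simply would not exist.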

Take the example of a service business that has a strategy to differentiate itself by having staff provide insights and added value at every customer encounter. Rather than commissioning a quarterly random survey of customers and waiting to see whether the customer satisfaction scores move, it recruits a rolling panel of customers who agree to be re-contacted (the rolling panel is one of many tools in the longitudinal research armory):

  • Through re-contacting the same customers and comparing their answers this time and last time, real changes in their experience are measured and validated and reasons understood—what changes have they noticed in the staff and what do they like or not like about what they experience?
  • Emerging issues are immediately communicated to decision makers in actionable language, enabling rapid response—what can staff do now to add value?
  • Intentions are tracked against reality and changes in attitude correlated with individual customer profitability—is the strategy actually contributing to winning more business and growing the bottom line?
  • Customers have the opportunity to provide feedback, rather than being forced to respond to batteries of rating statements, many of which may be irrelevant to them—maybe they do think staff have become more knowledgeable, but is the constant contact becoming irritating?

Since useful research explains patterns in a simple fashion in addition to describing them, another benefit of this approach is short, focused questionnaires: lean and smart. Imagine a five-question survey that literally takes five minutes and delivers more than the typical questionnaire 10 times its size! It costs less, it is easier on customers, and it shows them they are being listened to.

Dragnetting

In the absence of robust processes before and after the research itself, many companies end up with unwieldy and poorly focused results. The telltale sign is a massive "drag-net" questionnaire—that is, one designed to dredge out any information that might be there.

It's unfair to criticize market research managers who do the best job they can to put a good brief together for the research company, but both parties see themselves as information providers, not change makers. Even if they had the skills to address the dragnetting problem, they don't usually have the mandate.

The underlying problem is at the senior-management level, where there's a lack of "strategic logic" in the area to be researched.

If there is little consensus around deeper beliefs about the market and customer dynamics, then how can the research team decide what to leave out? As the old adage goes, "Good strategy is as much about what doesn't get done as what does get done."

The "two birds with one stone" strategic logic is a good example (see the marketing profs.com article by Price and Schultz for more details). A senior management team needs to be clear about its customer management beliefs. Two birds with one stone makes the clear argument that a company must simultaneously attend to the basics that annoy customers and focus on one point of difference (a "spike") to address customer ambivalence—simple, compelling, and powerful.

If that is what senior managers believe—and that's confirmed in a definite project process step before the questionnaire is developed—leaner and smarter research configured around those beliefs becomes possible. A strategically coherent questionnaire with fewer questions is less onerous on the customers who have to answer it and tells a much more focused and compelling story to the users of the research inside the company.

To avoid dragnetting, managers commissioning market research should also "write the report first." This is a simple pre-survey process that encourages involved managers to imagine (and record) what they think the research is going to say and what they would do to respond to the most likely research outcome scenarios.

There are two big benefits from doing this in a more structured fashion than usual:

  • First, it deals with the "confirms everything we already knew" syndrome. Ever been to a research presentation where people in the audience say the results confirm everything they knew already, when it's pretty clear that there's precious little agreement among the group about what they expected? If the audience is simply asked to second-guess the results beforehand, then it is very "useful" to see how the actual results map against what was expected (see the sketch after this list). The typical result: The expectations of the group as a whole are wildly disparate, and more people are wrong than right.
  • Second, some preplanning can go into implementation. Most market research fails because nothing gets done, despite best intentions. This happens because senior managers as a group don't discuss and agree early enough how they would respond to each of the likely research result outcomes, should they arise. There are two parts to this preplanning:

    1. Figuring out what to do. A work process that uses the top-down, bottom-up principle is usually best. Senior management provides a guiding brief for those charged with figuring out what to do. That provides top-down leadership but doesn't go so far as to say what to do. For the purposes of getting buy-in and to ensure recommendations are in touch with the real world, that "what to do" is best done bottom-up, as long as the boundaries are clear. For example, a work process that adheres to two birds with one stone is very effective when one group concentrates on fixing the basics that are not being delivered (the "basics" group) while the "spikes" group looks only at strategies to combat customer ambivalence by building a "spiky experience."
    2. Figuring out how to do it. Most companies really struggle with this because they have not spent time building a shared view of how to make things happen—an "implementation model." The best-laid plans will fall over if it's not clear from the outset which implementation model is to be deployed. Is it best to "blitz" it, or does the "viral" approach work best? Is a pilot viable? Who should champion it? Who should sponsor it? What resources are actually available, especially for project management?
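For the "second-guess the results" step, a minimal sketch (plain Python; the manager names, themes, and scores are invented for illustration) shows how predictions recorded before fieldwork can be mapped against the actual results, and how quickly the "confirms everything we already knew" claim falls apart when the group's expectations are spread out:

```python
# Hypothetical pre-survey step: each manager records the score (0-10)
# they expect the research to report for each theme.
predictions = {
    "Manager A": {"staff_knowledge": 8, "contact_frequency": 7},
    "Manager B": {"staff_knowledge": 5, "contact_frequency": 8},
    "Manager C": {"staff_knowledge": 9, "contact_frequency": 4},
}

# Actual results once the survey comes back (also illustrative).
actuals = {"staff_knowledge": 6, "contact_frequency": 5}

for theme, actual in actuals.items():
    guesses = [p[theme] for p in predictions.values()]
    spread = max(guesses) - min(guesses)
    print(f"{theme}: expected {sorted(guesses)} "
          f"(spread {spread}), actual {actual}")
# A wide spread shows the group never agreed on what it "already knew,"
# which is exactly what writing the report first makes visible.
```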

Finally, all customer research relies on the goodwill of customers, yet usually only a weak effort is made to make research a positive experience for them. For most customers, the bar set by other research they have been involved in is so low that it is quite easy to create a dialogue with them—show them how you value their input and update them on progress.

One of the biggest unspoken errors in research is non-response bias. If 80% of all customers refuse to be involved in the research process, what might they have said and what does their non-response say about how useful they think the research is?
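A back-of-the-envelope calculation (the figures are illustrative assumptions, not survey data) shows how hard non-response bias can bite at an 80% refusal rate: even a modest gap between responders and the silent majority moves the true score a long way from the reported one.

```python
# Illustrative non-response bias arithmetic: 20% respond, 80% refuse.
response_rate = 0.20
responder_mean = 8.0       # score reported by the 20% who answered

# Assume the silent 80% are only somewhat less satisfied.
# This figure is a hypothetical assumption; by definition it is unobserved.
nonresponder_mean = 6.0

true_mean = (response_rate * responder_mean
             + (1 - response_rate) * nonresponder_mean)

print(f"Reported score: {responder_mean:.1f}")   # 8.0
print(f"Plausible true score: {true_mean:.1f}")  # 6.4
# The survey overstates satisfaction because the least engaged
# customers never appear in the sample.
```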

Useful research gives managers the explanation they crave—what's happening and why—but rarely get. To deliver that explanation, market research needs to look along a relationship, not cut across it.

It's not just about providing information; it's about providing impetus. Senior managers need to see the research process as merely a tool within a wider change process. A little more structure before and after the questionnaire pays big dividends—in the form of research that gets put to use!




ABOUT THE AUTHOR

Reg Price is coauthor of the book on the emerging discipline of Promises Management titled Building Dependability Inc., published by Racom Books. He can be contacted at reg.price@managepromises.com.
Neil, a relationship and customer experience research consultant, is principal of SRD Group (Neil.S@srd-grp.com).
Katie is head of strategy and projects, Institutional and Corporate & Commercial Banking, at a major bank in Asia/Pacific.