Editor's note: Sean Campbell is CEO of Cascade Insights, a Portland, Ore., research firm.

In an ideal world, all B2B research would result in a tidy heap of statistics, graphs and charts, all pointing to a clear course of action. Company leaders would glimpse the dramatic numbers and approve strategy adjustments right away.

But as we know, fantasy is rarely reality.

This holds true with B2B research. Many B2B tech companies would love to commission sweeping surveys that yield conclusive insights about the market. Unfortunately, given that B2B tech is a rather niche field, it is often extremely difficult to get an appropriate sample for mathematically responsible conclusions.

Let’s walk through some of the unique challenges of conducting quantitative research for B2B tech.

Market research for B2B companies is very different from B2C. B2B deals are more complex. A CEB study found that an average of 5.4 buyers had to formally approve each B2B purchase, whereas a typical B2C purchase involves far fewer decision-makers. That squares with conventional wisdom: a B2B purchase is usually a significant spend for the business, so roughly five people weigh in before it goes through. Hence, there is much more wooing involved in pushing through a B2B sale than a B2C one. Often, months or even years are spent cultivating relationships before that big B2B buy.

Small B2B target markets make for poor survey samples. In many cases, B2B companies target much narrower markets than B2C companies do. A bottle of ketchup can be marketed and sold all over the world, whereas a B2B solution, in our experience, may have a total addressable market of 10,000 to 100,000 companies, or fewer. That is a huge difference from a B2C product that could legitimately be sold to any U.S. consumer who walks into a grocery store.

Smaller target markets are one of the main reasons why quantitative studies aren’t always the best approach for understanding B2B business problems. To prove this out, let’s just consider the math. Say your client gives you a list of 5,000 people they’d like to hear from. According to one sample calculator, you’d need 537 responses for a confidence level of 95 percent and a margin of error of 4 percent. Response rates being what they are, you’re not likely to achieve that number.
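If you want to sanity-check that figure, here is a minimal Python sketch of how most sample-size calculators arrive at it, assuming the standard formula with a finite population correction, a 95 percent confidence level and worst-case 50/50 variability (the specific calculator cited above may differ slightly in its rounding):

```python
import math

def required_sample(population, margin_of_error, z=1.96, p=0.5):
    """Rough required-sample estimate: standard formula plus a finite population
    correction. Defaults assume 95 percent confidence (z = 1.96) and maximum
    variability (p = 0.5), which is what most online calculators assume too."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # adjust for the finite list

print(required_sample(5000, 0.04))  # ~537 responses for +/-4% at 95% confidence
```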

B2B experts know to anticipate low responses. For instance, B2B marketing platform Kapost wrote of a survey effort, “First, we were ashamed of our response rate. I’m talking a tail-between-the-legs, oh-goodness-that’s-bad kind of reaction. But after speaking with others in our industry, we now know that 1.1-to-2.6 percent is actually quite good.”

Sure, your response rate could be higher than that, but it would have to be much higher to yield an appropriate sample.

So, back to our example. Let’s say you get a 2.7 percent response rate. That means you hear back from only 135 people on the initial 5,000-person list, well short of the 537 you need. It gets worse: even with a 10 percent response rate, you’d only collect 500 responses, still shy of a reasonable sample for the study.

Surveys are limited tools for understanding the B2B buyer’s journey. Typically, B2B stakeholders have a lot of questions about the buyers who filled out their surveys. However, they may not realize that surveys aren’t always the best way to understand the B2B buyer’s journey.

Remember, the average B2B sale involves about five buyers. To understand how they reached the decision to purchase, you’d have to hear from each of those five buyers and learn their role in that choice.

Back to our example. Let’s say you get a 2.7 percent response rate on your 5,000-person sample. That means you have 135 responses. Now say those responses are somehow magically divided equally across five buyer roles in the purchase decision. That leaves just 27 responses per persona, far too few for solid analysis.
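For clarity, here is the back-of-the-envelope arithmetic from this example in Python; the 2.7 and 10 percent response rates and the even five-way persona split are simply the illustrative assumptions used above:

```python
list_size = 5000
needed = 537  # target sample from the earlier calculation
personas = 5  # assumed number of buyer roles in the purchase decision

for rate in (0.027, 0.10):
    responses = int(list_size * rate)
    per_persona = responses // personas  # assumes responses split evenly across roles
    print(f"{rate:.1%} response rate -> {responses} responses total, "
          f"about {per_persona} per persona (vs. {needed} needed overall)")
```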

Use quantitative surveys effectively

There are some ways to use quantitative surveys effectively in B2B. They’re not without compromise, though.

Up the sample size. First, you could start from a larger list and generate a larger number of responses. Frankly, this isn’t always possible. Perhaps the individuals the study targets are in an extremely niche market, or the research focuses on a country where the solution in question isn’t well known. There are a million reasons why it can be difficult to assemble an appropriately sized sample for a decent quantitative study.

Ask the client to give you a list of people to survey. While this option can save the research firm some legwork, it carries an inherent challenge: any list a client provides is likely to be biased, especially if it’s built from a mix of marketing and sales leads. Those leads are already predisposed to consider or buy a solution from the client, so they alone won’t give an accurate portrayal of the client’s position in the marketplace. For that, you’d also have to hear from competitors’ customers and from prospects who decided not to buy at all.

Concentrate on getting more responses. You could also pour effort into initial and follow-up outreach to survey respondents to increase the response rate. While these efforts may get you more responses from a small list of targets, they don’t change the fact that the list was small to begin with.

Improve the quality of your sample. Take a page from qualitative research: accept that you’re not going to get more responses and instead learn more about who did respond. This won’t let you project your findings across a broad population in every way you might like, but you can have more confidence in the findings you do report.

Stop hitting divide. Finally, you might have to limit your quant goals. For example, with the 5,000 targets in our example, if you relax the acceptable margin of error to 5 percent, you only need 357 responses. You also might have to stop hitting divide on your calculator. Say a stakeholder wants to project the research findings onto the U.K., France, Germany, Russia and the U.S. If you don’t have enough responses to do that meaningfully, don’t try. Seriously, don’t take 357 responses, slice them across five countries and make projections for each individual country. That’s just bad math.
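To see why the per-country cuts fall apart, here is a rough Python sketch of the margin of error you would actually end up with, again assuming the standard formula with a finite population correction and, purely for illustration, an even split of the 5,000 targets and the 357 responses across five countries:

```python
import math

def achieved_margin(n, population, z=1.96, p=0.5):
    """Approximate margin of error from n responses drawn from a population of
    the given size (95 percent confidence, worst-case p = 0.5)."""
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

# 357 responses across the whole 5,000-person list:
print(f"{achieved_margin(357, 5000):.1%}")       # roughly +/-5%

# the same 357 responses sliced into five country-level cuts of ~71 responses,
# each drawn from ~1,000 targets (an assumed even split, for illustration):
print(f"{achieved_margin(357 // 5, 1000):.1%}")  # roughly +/-11%
```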

Also, you may have to limit the level of analysis you do on a persona, title or industry basis. For example, if you’re doing research on key buying criteria, you might be able to figure out how an organization buys and who’s part of a typical buying committee. But you won’t be able to drill down into how each individual buyer buys or is influenced.

Don’t ship an illusion. Make every effort to avoid presenting faulty data as a firm conclusion. If the sample for your survey can’t provide an accurate representation of the populations you’re trying to study, you may have to make the tough choice to ditch quant altogether. Otherwise, you’ll be inviting a bunch of bad decisions based on flawed statistics.

Can’t be argued with

Ditching quant can be challenging to propose because senior stakeholders often want information that they know can’t be argued with. They want insights that come with lots of numbers, graphs and charts. However, you’re not helping that senior stakeholder if you design a survey that can’t possibly get a sufficient number of responses for good analysis. That’s worse than not giving them quant in the first place. Luckily, ditching quant doesn’t mean you can’t study the problem. Perhaps you just need to use a qualitative method instead.