Editor's note: Natalia Elsner is director of research strategy at HSM Group, a Scottsdale, Ariz., research firm.

When McKinsey & Company released its employer study of health care benefits in June 2011, controversy quickly erupted because the findings were so at odds with the projections from the Congressional Budget Office, Urban Institute and RAND Corporation. While critics attacked the polling nature of the study and the fact that the questionnaire “educated” respondents, I kept thinking, “What if it’s the sample?”

Even after McKinsey released the methodology and crosstabs of the survey data, I still wondered whether I could entirely trust an online panel to deliver 1,300 qualified respondents in the health care benefits space for a self-administered survey. The high incidence of “Don’t know” answers on some fairly basic questions indicated potential response-quality issues.

It was not the first time I wondered about the quality of the online sample for B2B research.

A lot has been written about online panels, with most authors focusing on issues pertinent to consumer studies and public opinion polling. B2B is in many ways a different animal: the size of the universe may or may not be measurable; the universe may be quite small (e.g., in managed care research); fielding costs are considerably higher; and study participants must be in positions of influence (such as buying or influencing insurance coverage decisions) and/or positions of knowledge (e.g., specific training or expertise enabling them to evaluate the merits of new products or technologies).

The first time I had misgivings about the quality of B2B panel respondents was when I was overseeing the fielding and performing the data analysis of an online survey of dentists. I’ll refer to this panel company as Panel A. A programming glitch allowed panel members to enter the survey multiple times; this led to a discovery that some respon...