Editor’s note: Vince Migliore is a market research consultant, doing business as AccuStat, Santa Clara, Calif.

Nowadays even large firms often forgo an in-house research department, choosing instead to farm the function out to vendors. But how much do you know about these research firms? What kinds of questions should you ask to find out more about the company that’s processing your data? What follows is a list of 10 industry secrets, with tips on how to handle each one.

1. You may not need primary research.

Very often there is no need at all for primary research. Much of the information you require is readily available from secondary sources. It’s usually free, or can be purchased for a fraction of the cost of conducting a survey.

An example: a small software company was enjoying rapid growth in a narrow niche market with only four competitors. The company had succeeded even without a thorough understanding of its position in the industry, and now it wanted market share and growth-trend information. It was prepared to spend over $20,000 on a telephone survey. Instead, we downloaded the sales and investor information of the four competitors from the Web, gathered data from the library, and made a call to an industry analyst at a major stock brokerage firm. The result: we had just about everything we needed for less than 10 hours of work.

The fix: Do your homework! In the Information Age, just about anything you need to know is available if you know where to look. Start by surfing the Internet. Get in touch with a good research librarian; they are worth their weight in gold. Many firms, such as DataQuest, Standard & Poor’s, or Dun & Bradstreet, have huge resources that you can tap into for a relatively small fee. (Mention of firms and brand names should not be construed as an endorsement of their products or services.)

2. Random selection? I don’t think so!

The whole idea of conducting market research is to gather data that is representative of the entire population you are targeting. This requires a random sample, which by definition means every person has an equal chance of being selected. All too often the sample is composed of people who happen to be home when you call, people who happened to supply an E-mail address, or some other convenience group. Studies show, for example, that the first round of daytime calling of a random telephone list yields mostly retired people, students, and the unemployed. Is that your target audience?

There is also a popular trend called panel research, where the sample is composed of volunteers who agree to be called and surveyed over and over again. They are enticed to participate in surveys by the lure of cash awards, prizes, and a chance to express their opinions. There are many instances where a panel sample is adequate and appropriate, but this selection method does not constitute a random sample.

The fix: Know how your sample is being drawn. If it’s a telephone survey, where did the list come from? Learn how many attempts are made to contact each person on the list; the more, the better. Selecting every nth name from a master list (systematic sampling) is a good way to approximate a random sample.
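
To make that concrete, here is a minimal sketch of nth-name selection in Python; the list contents, the 10,000-name list size, and the 400-name target are all illustrative.

```python
import random

def nth_name_sample(master_list, sample_size):
    """Draw an 'every nth name' (systematic) sample from a master list."""
    interval = len(master_list) // sample_size  # the 'n' in 'every nth name'
    start = random.randrange(interval)          # random start keeps every name equally likely
    return master_list[start::interval][:sample_size]

# Illustrative use: draw 400 names from a 10,000-name master list.
master = [f"person_{i}" for i in range(10_000)]
sample = nth_name_sample(master, 400)
print(len(sample))  # 400
```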

3. You don’t always get a representative sample.

Another tenet of research is that you want to be able to project the findings from the survey sample to the entire population. To accomplish this you need a representative sample, something you don’t always achieve, even with a random sample. For some types of research, broad, ballpark measures are sufficient. The industry standard for most surveys, however, is to achieve a reliability of ±5 percent at the 95 percent level of confidence. This is a technical way of saying that if you did the same survey 100 times, using the same sampling method, then 95 times out of 100 the results would be within 5 percent of the "true" findings, which are those you would get if you surveyed everybody in the target audience.

The fix: Have a plan. Define your objectives. First, decide whether you need a high level of accuracy. If you are simply trying to poll the general sentiments of your retail customers, a small sample will often be adequate. On the other hand, if the purpose of your research is to make a multi-million dollar decision on corporate strategy, then you’d better have accurate results that can be projected to the entire population.

To accomplish this, you must start with a large and representative sample. Sampling is a complex subject, and the laws of probability dictate very specific minimum sample sizes. A rule of thumb, though, is that for target populations of 10,000 or more, you need a sample of at least 400 people. Further, to reach the level of reliability mentioned above, you need a random sample, or what’s called a stratified probability sample.

Finally, you must include techniques for verifying the sample’s reliability. To do that, include demographic questions that establish multiple profiles of those responding to the survey. For example, if you’re conducting a general population survey, include age, gender, ethnicity, and ZIP code questions on the survey instrument, then compare your survey results to U.S. Census data. If you’re surveying customers, and you know from sales data that 15 percent are in the education field, then your survey findings should reflect that.
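
That 400-person rule of thumb falls straight out of the standard margin-of-error formula. A minimal sketch of the arithmetic, assuming the usual 95 percent confidence z-value of 1.96 and the worst-case 50/50 response split:

```python
import math

def required_sample_size(margin=0.05, z=1.96, p=0.5):
    """Minimum sample size for a given margin of error.

    p = 0.5 is the worst case (it maximizes p * (1 - p)), and
    z = 1.96 corresponds to a 95 percent confidence level.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(required_sample_size())      # 385 -- roughly the 400-person rule of thumb
print(required_sample_size(0.03))  # 1068 -- a tighter +/-3 percent costs far more interviews
```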

4. Your questionnaire may be flawed.

Everyone has good intentions, but even a well-designed, easy-flowing questionnaire will often contain useless questions. "How many times have you gone to a movie theater this year?" "How many times did you go last year?" Such questions are fraught with problems. By "this year" do you mean the calendar year, or the last 12 months? Can people really remember how many movie visits they made a year ago? And if you find an average attendance of 4.2 times a year, does that number suggest any action your company would take, or are you simply going to use it to classify your audience into high, medium, and low attenders? Even then, does a single average really convey the full picture?

The fix: Study your questionnaire. A good way to check it is to write in the percentages that you expect to find. Then ask yourself what would happen if the survey responses were significantly higher or lower than what you expect. If there is nothing you could do or would do about such surprise findings, then why ask the question?

Finally, give the survey to friends and relatives outside of work, and see if they can detect any biased or difficult questions. Keep an open mind!

5. Our interviewers are underqualified.

Due to competitive pressures, the interviewers who conduct your survey are likely to be the lowest-paid employees at the field service, and turnover in the business is generally high. For questionnaires with highly technical content, interviewers often will not know what the questions mean.

The fix: Demand an orientation meeting and follow-up visits. Use these meetings to educate the interviewers and to give them background material on the purpose of the survey. Ask that the same interviewers be assigned for the duration of the project. Ask to monitor calls and observe the interviewing process. Provide a glossary of terms and definitions, along with cheat sheets and reference material that answer the questions interviewers ask most frequently.

6. Our data entry is shaky.

As with interviewers, data entry clerks are often overworked. Besides keystroke errors, there are many transposition and missing-data errors: a questionnaire has 35 questions but only 34 entries, with the answer for question 21 placed in the slot reserved for question 20, and so on. Most researchers will tell you that data integrity is the most daunting task in the entire research process.

The fix: Ask for involvement with, and oversight of, the data entry process. If you can afford it, double data entry with documented conflict resolution is the best bet. One of the better schemes I’ve seen is to assign a code to every variable, whether or not a response is given. For example, use negative numbers for non-responses: 1=Yes, 2=No, 3=Don’t Know/Unsure, -1=Refusal, -2=No Response/Interviewer error, -3=No Response/Skip pattern, etc.
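
Here is a minimal sketch of how double data entry with that coding scheme might be reconciled; the question names and the keying error are invented for illustration.

```python
# Codes from the scheme above: positive codes are real answers,
# negative codes record *why* a value is missing.
YES, NO, DONT_KNOW = 1, 2, 3
REFUSAL, INTERVIEWER_ERROR, SKIP_PATTERN = -1, -2, -3

def reconcile(first_pass, second_pass):
    """List every question where two independent keyings disagree,
    so each conflict can be resolved against the paper original."""
    return [(q, first_pass[q], second_pass[q])
            for q in first_pass
            if first_pass[q] != second_pass[q]]

# Hypothetical records: operator B slipped one slot at question 20.
entry_a = {"q20": YES, "q21": NO, "q22": SKIP_PATTERN}
entry_b = {"q20": NO,  "q21": YES, "q22": SKIP_PATTERN}
print(reconcile(entry_a, entry_b))  # [('q20', 1, 2), ('q21', 2, 1)]
```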

7. Crosstabulation tables are deceptive.

Crosstabulations have to be viewed with caution. Let’s say you’re crossing an important yes/no question by age groups, and the smallest age group has only 14 respondents. If four of those 14 say yes, the corresponding percentage is 28.6 percent. Let’s assume further that 12.3 percent of all respondents say yes to that question. It’s tempting to conclude that this age group is more than twice as likely to say yes. Not so fast! First, reporting 28.6 percent to a tenth of a percentage point implies an accuracy that is simply not justified by the number of cases behind it. Second, the 28.6 percent is based on only four respondents, so you should suspect a reliability problem. Finally, many research firms supply crosstabulation and banner tables without the statistical tests that would tell you whether these percentage differences are "real" or simply due to chance.

The fix: Study the total population frequencies before you order crosstabulation tables. If there are only 14 people in the youngest age group, 18- to 24-year-olds, consider combining that group with the adjacent one, say 25- to 34-year-olds. By forcing larger numbers of respondents into fewer age groupings, you increase the reliability of the percentages in those groups. Also, ask for the appropriate statistical tests with crosstabs: for category questions, the chi-square test; for differences in averages on a scale, Student’s t-test or ANOVA. A good rule of thumb is that the chi-square test needs an expected count of at least five cases in the smallest cell to be accurate.

Last, use some common sense and good judgment when reviewing crosstabs. If the percentage of respondents saying yes goes up in stepwise fashion as the age groups get older, then most likely the trend is real. If the age groups show only minor variations with no apparent pattern, then the differences are probably due to chance.
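
To see how such a test works on the example above, here is a minimal sketch using scipy. The 14-person group with four yeses comes from the example; the overall sample of 400 respondents (so 49 yeses at 12.3 percent) is an assumption, since no total was stated.

```python
from scipy.stats import chi2_contingency

# Rows: youngest age group vs. everyone else. Columns: yes, no.
# Assumes a hypothetical total sample of 400 with 49 (12.3%) saying yes.
observed = [[4, 10],     # 14-person age group
            [45, 341]]   # remaining 386 respondents

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
print("smallest expected cell count:", round(expected.min(), 2))
# The smallest expected count lands well under 5, so even this test
# is suspect here -- exactly the rule of thumb mentioned above.
```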

Here is a crosstab technique I’ve found useful. Make a copy of all the crosstabs that you can mark up. Flag those pages that contain the survey’s crucial questions, like "Would you recommend our service to your friends?" Let’s say 80.5 percent of all respondents say yes to that question. Now scan across the subgroup categories in the crosstabs and see which subgroups are higher than that 80.5 percent. If any subgroup is substantially higher, and has a good number of respondents, highlight the percentage in yellow. These are your happy customers. If a subgroup shows a very high rating, say females at a 91.5 percent yes rating, and that 91.5 percent is higher than any of the other demographic subgroups (age, ZIP code, ethnicity, etc.), then highlight that percentage and also circle it with a red pen.

Repeat that for all the crucial survey questions. (Time consuming, yes, but this is why research analysts get the big bucks!) Now go back and count how many red-circled percentages you find under gender, age, etc. If there are 10 red circles under male/female, and only one under ZIP code, then you know gender is more important than geography. Meanwhile, as you’re busy highlighting, you can get a feel for how much variation there is in each subgroup, and how much is required to reach statistical significance in the chi-square tests (if you’ve run them).
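
The same highlight-and-circle pass is easy to automate once the crosstab percentages are in machine-readable form. A minimal sketch for one crucial question; the dimensions, subgroups, percentages, and base sizes are all hypothetical.

```python
# Percent saying yes and base size, by demographic dimension and subgroup.
crosstab = {
    "gender": {"male": (76.0, 210), "female": (91.5, 190)},
    "age":    {"18-34": (82.1, 120), "35-54": (79.8, 160), "55+": (80.0, 120)},
    "zip":    {"urban": (81.0, 250), "rural": (79.7, 150)},
}
overall_pct = 80.5
min_base = 30  # skip subgroups too small to trust

# "Yellow highlight": every adequately sized subgroup above the overall figure.
highlights = [(dim, grp, pct)
              for dim, groups in crosstab.items()
              for grp, (pct, base) in groups.items()
              if pct > overall_pct and base >= min_base]

# "Red circle": the single highest subgroup across all dimensions.
print("highlighted:", highlights)
print("red circle:", max(highlights, key=lambda h: h[2]))
# Repeat per crucial question, then tally red circles by dimension.
```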

8. Sorry, we don’t do that.

The standards in the market research industry are changing, and not everyone is keeping up. On-line and E-mail surveys are just two recent examples. Many research firms have relied on telephone and personal interviewing, and have not acquired the skills these new forms of research require. Likewise, there are powerful and important statistical methods that may be crucial to your project, but you won’t hear about them because the company you’re using doesn’t have the software, the hardware, or the intellectual know-how to perform them.

Conjoint analysis is a great example. It is a potent and decisive tool for deciding which new features your customers value most. Conjoint analysis, though, requires a dedicated software program, computer-assisted interviewing, and lots of brain power in the planning and analysis stages.
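
Serious conjoint work does take dedicated software, but the core idea, estimating a part-worth utility for each feature from ratings of feature combinations, fits in a few lines. A minimal sketch with invented features and ratings:

```python
import numpy as np

# Invented 1-10 ratings of four product profiles, each a combination of
# two binary features; the regression coefficients ("part-worths") tell
# you how many rating points each feature adds.
#                    cloud sync, dark mode
profiles = np.array([[0, 0],
                     [1, 0],
                     [0, 1],
                     [1, 1]])
ratings = np.array([4.0, 7.5, 5.0, 8.5])

# Ordinary least squares: rating ~ baseline + part-worths . features
X = np.column_stack([np.ones(len(profiles)), profiles])
(baseline, util_sync, util_dark), *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(f"cloud sync adds {util_sync:.2f} points; dark mode adds {util_dark:.2f}")
```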

The fix: Shop around, and again, do your homework. Read the trade journals for recent developments, and break out the old statistics text to brush up on some of the less well-known procedures. You should at least know when each of these methods is used: chi-square, Student’s t-test, analysis of variance, factor analysis, and conjoint analysis. For Internet surveys, you should be able to define the following: Spam, HTML, CGI-bin, radio-button, forms-retrieval, and Web hosting.

9. Survey analysis is a voodoo science.

There is no comprehensive, one-size-fits-all method of survey analysis. Much of it is based on crosstabulations that are not always trustworthy, as we’ve seen above. Meanwhile, research companies like to convey the impression that they are experienced in your industry, but a good research analyst is rarely a subject matter expert. In order to get a meaningful report, you need an analyst who is intimately familiar with the strengths and weaknesses of statistical procedures, and who also has the ability to recognize which findings are significant to the survey objectives. This is a difficult task.

The fix: Use teamwork to bridge the knowledge gap. It’s extremely rare that one person knows all the answers. Fortunately, most research projects are conducted in an atmosphere of cooperation and friendly interdependence. It may help to schedule a brainstorming session after the survey results are in, but before the report is written. As an example, the statistician may find that males over 40 rate your product significantly lower than other groups, but the industry analyst says, "We know that, it’s the nature of our product, and we don’t expect it to change." In other words, not all survey findings are important for strategic business decisions. Discernment in this area requires input from all players on the team.

10. Follow-up? Forget about it!

"Here’s your report. Good-bye and good luck!" How many times do we hear that? All too often thousands of dollars are spent on a research project only to have the report sit on a shelf without an implementation plan. Just as likely, there is little review of the survey process, and no evaluation of the benefits it has provided.

The fix: Integrate the presentation of findings with a plan for implementation that conforms to the survey objectives. Instead of one presentation event, plan on a multi-step process of disseminating and evangelizing the survey findings. Fortunately, there are usually several key players in your firm who will appreciate and champion the project’s suggestions. Use them. Meanwhile, mark your calendar for a day about six months down the road when you can take time for an objective review of the survey. Did it help business? Did it provide key insights? Would you use this research firm again?

These are just 10 of the many problems that arise in the research industry. I’m sure you can think of others.