Have a plan going in

Editor’s note: Art Jackson is survey measurement manager for Boston consulting firm Novations Group Inc.

Recently, I was involved in a project with many moving parts. I say moving parts because just when you think you have all the bases covered, something comes up that wasn’t considered.

First, some particulars about the project. It involved multimodal data collection: mail, telephone, focus groups and the Internet. To add to the complexity, the study included multiple languages and a host of cross-cultural aspects, spanning North America, Europe and South America.

Three phases were planned: setup, collection and analysis. Simple, right? Not so.

When beginning a project of this magnitude, there are many considerations, which this article will attempt to examine. It should be noted, however, that all projects, especially complex ones, will involve considerations beyond what is covered here.

Process flow chart

At the outset of the study, there should be a project meeting involving a person at every touchpoint of the project. This is to ensure everyone will be on the same page. From this meeting, clear expectations should be conveyed and discussed. This is the point at which a process flow chart should be created and distributed to each member of the project team.

The process flow chart should include not only the tasks but their order, the associated expectations and the person or persons responsible. This should be one of the guiding documents. Don’t be afraid to use visual elements such as boxes and graphs. But many people relate better to a written document, so remember to strike a balance between graphic elements and text.

Another very important aspect of a successful study is to have a contingency plan. Not everything will go as it should. In the event of a breakdown, you should have a Plan B to move to. This should be well understood by all on the team.

If there are any changes to the process, be sure to include a process change form to instruct all involved of the change. Too often I have seen studies fail due to a link in the chain not getting the relevant information.

Clear understanding

Have a clear understanding of your hypothesis. In many cases this will determine the types of analysis you will run on your data. If you are looking to run factor analysis, cluster analysis, etc., it’s important to design your survey instrument accordingly. There is nothing worse than getting to the end of collection only to discover your data is merely nominal (categorical) when the planned analysis requires higher-level measures.

An example illustrates the point. Early in my career I was involved in a study that was to use simple regression analysis. Due to a misunderstanding, however, it was later discovered that the dependent variable was never included in the survey instrument. As a result, the analysis could not be run for lack of data.
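A lightweight guard against this kind of gap is to reconcile the instrument’s variable list against the analysis plan before fielding. The sketch below is hypothetical (the variable names, measurement levels and plan are all invented for illustration), but it shows the idea:

```python
# Sketch: verify that every analysis planned for the study has the
# variables (and measurement levels) it needs before the survey fields.
# All variable names and the plan itself are hypothetical examples.

INSTRUMENT = {               # variables actually on the questionnaire
    "satisfaction": "interval",
    "brand": "nominal",
    "age": "ratio",
}

ANALYSIS_PLAN = [
    # (analysis, {required variable: minimum measurement level})
    ("simple regression", {"satisfaction": "interval", "spend": "ratio"}),
    ("crosstab", {"brand": "nominal", "age": "ratio"}),
]

LEVEL_ORDER = ["nominal", "ordinal", "interval", "ratio"]

def meets(actual, required):
    """True if the actual measurement level is at least the required one."""
    return LEVEL_ORDER.index(actual) >= LEVEL_ORDER.index(required)

def check_plan(instrument, plan):
    """Return a list of problems: missing variables or too-weak levels."""
    problems = []
    for analysis, requirements in plan:
        for var, level in requirements.items():
            if var not in instrument:
                problems.append(f"{analysis}: missing variable '{var}'")
            elif not meets(instrument[var], level):
                problems.append(
                    f"{analysis}: '{var}' is {instrument[var]}, needs {level}")
    return problems

print(check_plan(INSTRUMENT, ANALYSIS_PLAN))
# Flags the missing dependent variable 'spend' before any data is collected.
```

Run at the project-meeting stage, a check like this would have caught the missing dependent variable before a single survey went out.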

Who is the respondent?

Another consideration in a survey research project is who the respondent will be and which form of data collection best reaches that sample population. Oftentimes, clients jump to the Internet for its speed of collection and low cost. However, three things to consider here are proper sampling methodology, validity and reliability. Not small issues.

Many individuals within our target population may not have access to the Internet, thereby biasing our sampling methodology. Additionally, even if you do have near-complete coverage, are you sure the “proper” respondent completed the survey? I have consulted on many a project where, through a validation process, we were able to determine that a friend, sibling, spouse, coworker, secretary, child, etc., completed the survey rather than the intended respondent. You do not want to base decisions on a survey that was not properly validated with respect to the intended respondents.

If a blended methodology is to be used, give thought to the various biases associated with blending each mode of collection. Often a particular demographic might weigh in heavily on one type of collection methodology, thereby biasing results of your survey.

A brief example: We ran a study in which various automobile owners were to rate many automobile manufacturers. It was discovered that those with higher-end automobiles chose to complete the survey using the Web, while those with lower-end automobiles selected an alternative mode. Consequently, the data reflected this difference. Of course, the difference in the data stemmed from the demographics of the individuals who drove each type of car.

Another very important factor is what I call lack of cross-strength: the strengths of one method of collection can actually create bias relative to another. For instance, a strength of Web collection is the ability to use visual elements, something you can’t do with a phone survey. Yet with a phone survey, you can clarify responses and probe a qualitative open-end.

Again, an example: A client felt that a conjoint study (phone- and Web-based) would be a great way of mapping preferences for a particular product. Despite objections, the project went forward to the data collection phase. When it came to questions about color preferences, there was a statistically significant difference between those who took the survey by Web and those who took it by phone. On the Web, respondents saw the colors, while respondents who completed the survey by phone merely heard the names of the colors.
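How would you detect a mode effect like this? One simple approach is a two-proportion z-test comparing, say, the share of each mode’s respondents who chose a given color. The sketch below builds the test from the standard library; the counts are invented for illustration, not from the study described above:

```python
# Sketch: test whether the proportion choosing a given color differs
# between Web and phone respondents, using a two-proportion z-test.
# The counts below are hypothetical, purely for illustration.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p) for H0: the two proportions are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-approximation two-sided p-value via the error function.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical data: 120 of 400 Web respondents preferred the color,
# versus 60 of 400 phone respondents.
z, p = two_proportion_z(120, 400, 60, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With more than two modes or many color categories, the same question becomes a chi-square test of independence on the mode-by-choice table, but the logic is identical: a significant result here is a warning that the mode, not the product, may be driving the answers.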

Short is best

When designing the survey instrument, there are many things to consider before collecting the data. Short is always best. “The shortest questionnaire needed to answer the research questions” should be the mantra.

The types of scales should be considered when designing the instrument. Include all that are needed, but avoid too many different question types, which can confuse the respondent.

Another tip: use clear, simple language. I was involved in a study that was written for a particular target population. After collection, the client wanted to administer the same instrument to a closely related group of individuals. However, the first group had a higher degree of education than the second, and the survey was written for the first group. The client wanted to “save money” and not rewrite the survey. As a result, the second group didn’t fully understand the questions and two things happened: non-response increased and the data from those who did respond was weak at best.
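One rough, mechanical way to catch questions pitched above a target group’s reading level is a readability formula such as the Flesch-Kincaid grade. The sketch below uses a crude vowel-group syllable heuristic and invented example questions; a threshold of grade 8 is an assumption you would set per study, not a standard:

```python
# Sketch: flag survey questions whose estimated reading grade exceeds a
# target level, using the Flesch-Kincaid grade formula. The syllable
# counter is a rough heuristic and the questions are invented examples.
import re

def syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade level estimate for a question."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

questions = [
    "How satisfied are you with your purchase?",
    "To what extent do the organization's remuneration practices "
    "incentivize discretionary effort?",
]

for q in questions:
    flag = "REWRITE" if fk_grade(q) > 8 else "ok"
    print(f"{flag:7s} grade {fk_grade(q):5.1f}  {q}")
```

A score is no substitute for pretesting with people from the actual target group, but it is a cheap first screen before anything reaches the field.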

Two other major considerations: validity and reliability. In other words, am I getting the data expected to answer my hypothesis, and, if I were to repeat this study, would I get the same results? There are many books that address each of these specifically. I would recommend understanding the various levels of validity before conducting a study.
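For the reliability side, one common internal-consistency check on a multi-item scale is Cronbach’s alpha. The sketch below computes it with the standard library only; the five respondents’ ratings are invented for illustration:

```python
# Sketch: estimate internal-consistency reliability (Cronbach's alpha)
# for a multi-item scale. The ratings below are hypothetical data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per scale item, same respondent order."""
    k = len(items)
    item_var = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three items rated 1-5 by five respondents (invented for illustration).
items = [
    [4, 5, 3, 4, 2],   # item 1
    [4, 4, 3, 5, 2],   # item 2
    [5, 4, 2, 4, 3],   # item 3
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # values around 0.7 or above are commonly
                               # treated as acceptable, by convention
```

Alpha speaks only to reliability; a scale can be highly consistent and still fail to measure what the hypothesis needs, which is why the validity question has to be answered separately.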

Finally, TEST, TEST, TEST. It’s better to be slow and right than fast and wrong. Test the survey, test the process, test the collection, then export the data and test the analysis. Ask yourself throughout the entire process: Did I answer the questions posed in our hypothesis?

Next are multicultural/language considerations. With these topics you might think I’m speaking only of other countries, but I’m not. Short story: I’m from the West. When grocery shopping while on vacation, I was asked if I wanted a poke with my grocery item. I tried not to look shocked, but I must not have been too convincing, as the clerk felt inclined to explain that she meant a paper bag. This experience shows that there are many language and cultural differences just within our own country. Imagine how problematic it could be if you are doing a study in a foreign country in a different language.

Consider that languages often have strong regional differences. When translating the survey, be sure to translate it into something that will be understood by as many respondents as possible. In addition, attempt to use a translator who is from the particular region you are targeting. Also, take into account cultural differences when collecting data. Some things may be appropriate in one area while inappropriate in another. What can create richness in data can also cause erroneous data if left unchecked.

Begin the analysis

Once data collection has been completed, the next step is to begin the analysis. When I began my career, I was under the impression that analysis was quick. I have learned since that it takes time and plenty of it.

Like everything described in this article, there should be a process to the analytics. By no means is there only one way of doing it. Find what works best for you.

There are many very good programs for analysis. I personally like SPSS. It is both robust and popular. But again, use what you like.

Finally, data is only as good as the interpretation and presentation of actionable items. More likely than not, you need to be concise in what you present. Decision makers want to know the highlights. This generally means an executive summary with bullet points followed by detailed findings and any supporting documents/references.

Here again, practice. Know your data and how it addresses your hypothesis. Practice the delivery, practice the presentation, practice possible Q&As. Above all, PRACTICE.