My first experiences in questionnaire writing came in the 1960s when, as director of research at Miller Publishing Co., I was responsible for providing research for 18 trade publications. At that time WATS service was not available, and our audiences were too scattered for local field services to handle telephone or personal interviewing. We had to rely on mail surveys for most of our primary data collection.

At the time, we were conducting more than 50 surveys annually, ranging in size from double postcards to multi-page documents. Although we had considerable lead time for most of them, a number were sent out within a day or two of the decision that they were needed.
 
Although less emphasis is now placed on mail surveys, they remain an important part of the research mix and are often the most effective and efficient method of obtaining data. Based on my experience, I believe certain aspects of this type of research require special attention.

The most persistent criticism of mail survey techniques concerned non-response bias. It seemed that whenever the results of a study using this technique were presented, the first question asked was about the response rate. Even those who had no knowledge of or direct experience with research believed there was some specific response percentage that legitimized the results. It was a number that had been handed to them from "on high" and had to be equaled or exceeded for the results to be valid. It is the type of question that is rarely asked when results are obtained using other techniques.

Those responsible for presenting research findings do not want to get bogged down in controversy over the validity of the results. There are a number of ways in which response rates can be enhanced to the point that the question becomes moot, and each begins with careful list selection. The researcher must know as much as possible about the list he or she is using; that knowledge will have as much to do with questionnaire development as with delivery rates.

The most obvious questions about any list involve deliverability. It is critical that the list be current and that the provider make a sample available for testing. Some list houses promise a 95-plus percent delivery rate, but that alone is not sufficient: in the mail rooms of many companies, mail addressed to individuals no longer employed there is simply discarded, so a delivered piece does not guarantee a reachable respondent.

Other areas of consideration regarding lists include:

1. The updating of the list. The high mobility of our population makes it imperative that any list used be updated frequently.
2. The breakdown of categories in the list. The more information available about the people on a list, the easier it will be to target your mailing.
3. The availability of special programs. These allow custom selections by state, ZIP code, random selection, etc., as sketched below.
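To make item 3 concrete, here is a minimal sketch in Python of what such a custom selection might look like in modern terms. The record fields and the sample records are hypothetical, invented for illustration; they are not the format of any particular list house.

```python
import random

# Hypothetical mailing-list records; the field names are assumptions
# made for illustration only.
mailing_list = [
    {"name": "J. Smith", "state": "IA", "zip": "50010"},
    {"name": "R. Jones", "state": "NE", "zip": "68501"},
    {"name": "T. Brown", "state": "IA", "zip": "52240"},
]

def select(records, state=None, zip_prefix=None, n=None, seed=42):
    """Filter by state and/or ZIP prefix, then draw a random sample."""
    hits = [r for r in records
            if (state is None or r["state"] == state)
            and (zip_prefix is None or r["zip"].startswith(zip_prefix))]
    if n is not None and n < len(hits):
        # A fixed seed makes the random draw reproducible.
        hits = random.Random(seed).sample(hits, n)
    return hits

# Example: all Iowa addresses, then a random sample of one of them.
print(select(mailing_list, state="IA"))
print(select(mailing_list, state="IA", n=1))
```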

Specific information about the types of individuals on the list is also important for developing the questionnaire. To achieve a high response rate, every individual receiving the survey should see questions that apply directly to him or her; each recipient should be able to get into the questionnaire and feel it was designed with them in mind. If the researcher does not accomplish this, the response rate may suffer, and the non-respondents are likely to differ from the respondents. Projecting the results to the entire universe may not be valid in that situation.
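The danger is easy to demonstrate with a small simulation. The sketch below uses entirely invented numbers: a universe in which the survey topic applies to 30% of readers, and assumed reply propensities of 60% for those it applies to and 20% for everyone else.

```python
import random

random.seed(1)

# Hypothetical universe of 10,000 readers; the survey topic genuinely
# applies to 30% of them.  All figures here are invented.
universe = [{"applies": random.random() < 0.30} for _ in range(10_000)]

# Assumed propensities: 60% reply if the topic applies to them,
# only 20% reply if it does not.
def replies(reader):
    return random.random() < (0.60 if reader["applies"] else 0.20)

respondents = [r for r in universe if replies(r)]

true_share = sum(r["applies"] for r in universe) / len(universe)
observed = sum(r["applies"] for r in respondents) / len(respondents)

print(f"true share in the universe: {true_share:.0%}")   # about 30%
print(f"share among respondents:    {observed:.0%}")     # about 56%
```

Projected naively to the whole universe, the respondents' roughly 56% nearly doubles the true 30%; the mechanism, not the particular numbers, is the point.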

One example of this problem which I recall occurred when an editor of one of our publications, Feedlot Management, decided to conduct his own editorial survey on the use of horses in cattle feedlots. It was a postcard-sized questionnaire with three questions:

1) Do you have horses for use in your feedlot?
2) How many?
3) How are they used?

Over 80% of the respondents indicated they were using horses in conjunction with their cattle feedlot operations. The editor was excited because he believed he had discovered something about this type of operation that had never been mentioned in articles on management practices. He was ready to report the results by the time our research department became aware of his survey, and we were asked to review the data.

First, we found that the response rate had been less than 10%. We also discovered problems with the sampling technique. We were able to convince the editor to delay reporting the results until we had done a follow-up survey to determine whether the initial results were valid.

Our questionnaire was approximately one page in length. We started with some general questions applicable to all cattle feedlot managers; only after asking for this information did we present the questions relating to horses. The survey ended with open-ended questions concerning editorial interests and was mailed to a properly selected sample of the circulation.

The response rate for our questionnaire exceeded 50%. Less than 10% of our respondents indicated ownership of horses, compared with the 80-plus percent in the earlier survey. The initial survey had elicited responses primarily from those who saw the questionnaire as applicable to them; by providing a much more inclusive questionnaire, we obtained dramatically different results.
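A back-of-envelope calculation shows how little differential response it takes to produce such a reversal. Assume, as the follow-up suggested, that roughly 10% of managers truly used horses; the reply propensities below are purely illustrative, not figures from either survey.

```python
# If owners reply with probability a and non-owners with probability b,
# the share of respondents reporting horses is:
#     observed = t*a / (t*a + (1 - t)*b),  where t is the true rate.
def observed_share(a, b, t=0.10):
    return t * a / (t * a + (1 - t) * b)

# Setting observed = 0.80 with t = 0.10 and solving gives a = 36*b:
# owners would need to reply 36 times more readily than non-owners.
a = 0.50        # assumed owner reply propensity
b = a / 36      # ~1.4% non-owner reply propensity
print(f"share of respondents with horses: {observed_share(a, b):.0%}")  # 80%
print(f"overall response rate: {0.10 * a + 0.90 * b:.1%}")              # ~6%
```

Under these invented propensities, a postcard survey would show 80% horse use on a response rate near 6%, much like the pattern the editor's survey produced.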

There were many advantages to learning the questionnaire-writing process as I did. With mail surveys, preparation is especially critical, because once the commitment is made to the questionnaire and the sample, it is difficult to correct errors. Although custom mail questionnaires have declined in importance, they can still be valuable data collection tools when used properly.