Editor’s note: Rajan Sambandam is director of research at The Response Center, an Upper Darby, Pa., research firm.

While traditional data collection methods (such as mail and phone) continue to be widely used, other data collection methods based on new technologies have lately begun making their presence felt. These include survey methods using e-mail, Web site and interactive voice response (IVR) technologies. IVR has been used for some time in customer service by a number of companies, but has only recently been introduced as a tool for marketing researchers to collect data.

The introduction of any new method raises the question of how it compares with existing and established methods. In particular, a comparison of phone and IVR-based surveys is relevant because of their external similarity and the increasing use of the latter. In this article we will consider the relative operational and empirical merits of the two methods.

Phone surveys

Phone surveying has been around long enough to develop a good understanding of its advantages and disadvantages. The primary advantages of phone surveys are the representative nature of the sample, the ability to probe open-ended questions, the reduction of self-selection bias and the ability to efficiently handle quotas. The last is particularly useful when very specific (and small) segments of the population need to be adequately represented. But the advantages of phone surveys are also tempered by their biggest disadvantage: cost. Phone surveys are almost always more expensive than mail surveys and this creates budget problems for managers who may otherwise want to use this method.

Next, we will consider various ways of conducting IVR studies and then use a case study to examine the similarities and differences between the phone and IVR methods. The study was conducted in the financial services market and hence the results may not directly translate to other markets.

IVR surveys

IVR surveys can, in general, be administered in two ways. In the first method (pure IVR) there is no live component. Prospective respondents are transferred to an IVR system after a customer service call or are given a phone number to call (usually a toll-free number) which hooks them into an IVR system. In the second method (hybrid IVR), the respondent is screened by a live interviewer and then transferred to an IVR system. Let’s examine these two methods in more detail.

A. Pure IVR
The pure IVR method is particularly useful in measuring customer satisfaction with call centers and can be done either by direct transfer or by using a toll-free number. In the direct transfer method, a customer who contacts a call center to talk to a rep is transferred into the IVR system by the rep. This is usually done after the customer’s questions have been answered and his permission obtained for the transfer.

Once the customer is transferred, the automated voice takes over by welcoming the customer to the system. Pre-programmed questions are asked, which the customer answers using the keys on his telephone. Open-ended questions can also be included, with the responses recorded. A variation on this method occurs when the customer calls a company's interactive voice response system to obtain information such as account balances or stock quotes. At the end of the call, the customer is asked to take a survey (usually about the interactive system he has just used) and is transferred into the IVR survey.

This method has a significant cost advantage because live interviewers are never used. But the absence of a live interviewer also leads to a higher incidence of respondents terminating the interview by simply hanging up. Hanging up on a live interviewer is more difficult than on an IVR system. The varying termination points also mean that each question may have a different base size. To remedy this, the customers could be called back (if the system tracks their phone numbers), but this would obviously increase the cost of the study.

Another potential problem is selection bias. If reps who answer questions are responsible for transferring customers to an IVR system, there is no guarantee that they will transfer dissatisfied customers. This problem can be particularly thorny if the results of the study are in some way tied to compensation. A monitoring system or an automatic sample selection system needs to be in place to address this problem.

The second way of conducting a pure IVR survey is to give the customer a toll-free number to call after his questions have been answered (this can be done both when there is a live rep and when an automated voice is used). The toll-free number leads directly into the IVR system. This also has the same problems of hang-up and selection bias (in the case of a live rep) that we saw previously. There is also the problem of the onus being on the respondent to call the toll-free number. Hence there is a good chance that the response rate could be considerably lower than otherwise and that the sample may not be representative.

B. Hybrid IVR
The hybrid IVR method can be used for most types of studies. In the call center example, the customers are called back by a live interviewer who screens them to verify that they indeed had contact with the call center. They are then transferred to the IVR system to complete the interview. This method also works for other standard surveys which do not involve a call center. Prospective respondents are called, screened, asked for their permission and transferred to the IVR system.

The hybrid IVR method avoids the selection bias that could be introduced by the rep because the rep is not involved in the transfer process. The hang-up problem may be as severe as in the pure IVR method. However, this problem is tempered in the hybrid IVR method by the fact that respondents who are unavailable at a particular time can always be called back as in a regular phone interview.

The main disadvantage with the hybrid IVR method is that it is more expensive than the pure IVR method, because the live interviewer has to spend some amount of time on the phone to screen the respondent.

Case study

Recently we conducted a comparative test between the live phone and hybrid IVR methods for a financial services company. The company was interested in identifying the appropriate method to use for a tracking study with the same population. This test and its implications are discussed here in detail.

The client company provided us with a sample of customers who had recently liquidated all of their assets with the company. The sample was randomly divided into two halves, with one allocated for the phone portion of the study and the other for the hybrid IVR portion of the study. Using this sample, in the phone method 100 full interviews were obtained, while in the hybrid IVR method 104 full interviews were obtained.
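The random division described above can be sketched in a few lines. This is an illustrative sketch only, with hypothetical customer IDs rather than the client's actual sample file, and a fixed seed so the split is reproducible:

```python
import random

# Hypothetical customer IDs standing in for the client-supplied sample.
customers = [f"cust_{i:04d}" for i in range(1, 501)]

rng = random.Random(7)  # fixed seed: the same split can be re-created later
shuffled = list(customers)
rng.shuffle(shuffled)

# Split the shuffled list into two equal halves.
half = len(shuffled) // 2
phone_sample = shuffled[:half]  # allocated to the live phone interviews
ivr_sample = shuffled[half:]    # allocated to the hybrid IVR interviews
```

Shuffling once and slicing guarantees the two halves are disjoint and together exhaust the sample, which a per-customer coin flip would not.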

The most important difference between the two methods is the age of the respondents. Phone respondents (average age = 55 years) are significantly older than IVR respondents (average age = 50 years). This could be because older respondents are less comfortable with technology and may be less willing to answer an IVR-based survey.

There were also differences in the responses to some other questions, and it is possible that these arose from the age difference.

  • Phone respondents give higher satisfaction ratings (mean of 5.8 on a 7-point scale) than IVR respondents (mean = 5.0). Other research suggests that older respondents tend to be more satisfied, which could explain this gap.
  • IVR respondents are more likely to have withdrawn their money for some type of purchase (65 percent vs. 46 percent). Previous results from the same population indicate that older respondents are less likely to take money out for a purchase; younger respondents do so because of their life-cycle stage.
  • A much larger proportion of phone respondents say they are Very Unlikely to invest with the company in the future (20 percent vs. 5 percent). Previous tracking results indicate that older respondents are more likely to give this answer.
  • A larger proportion of phone respondents are retired (30 percent vs. 14 percent), which is clearly an age-related difference.

The difference in satisfaction ratings was particularly interesting because of the large number of customer satisfaction studies that are conducted by companies and the question of suitability of the hybrid IVR method for such studies. Therefore, to examine this issue further, we conducted an analysis of covariance (ANCOVA). An ANCOVA is usually conducted when it is necessary to study the relationship between two variables while controlling for the influence of a third variable. A simple example would be the positive relationship between the damage caused by a fire and the number of fire trucks at the scene. When we control for the size of the fire, the relationship disappears, indicating that the size of the fire was the true cause of the damage.

The ANCOVA was run with satisfaction as the dependent variable, method of data collection as the predictor variable and age as a covariate. If in fact age was the true cause of the difference in satisfaction scores, then this difference should become statistically non-significant in the ANCOVA. However, while the difference in satisfaction between the methods was less than before, it was still statistically significant. This implies that age alone cannot account for the significant satisfaction difference between the two methods. It may be that IVR respondents are inherently less satisfied or that phone respondents simply provide higher satisfaction ratings.

Other than the age-related differences, the data from the two methods are quite similar on issues like reasons for closing the account, talking to a rep before closing, investing in other financial products after closing, reasons for doing so and on other demographic variables. The one exception was the proportion of refusals on questions related to income and assets. In both cases the refusal rate on the phone study is higher, although the differences are not statistically significant. It is possible that the IVR respondents felt more comfortable revealing this sensitive information to an automated system rather than to a live interviewer. But, given that the finding is not statistically significant, further research is required before any firm statements can be made.

The phone study (average length = 6.7 min.) and the IVR study (average length = 6.4 min.) were essentially the same length. There were no differences in the number or type of open-ended responses, which were recorded using voice-digitizing software. This may be because we started with a somewhat small sample, and the open-ended question sat behind a skip pattern that further reduced the base size. With larger base sizes (say, a few hundred) it may be possible to examine differences in open-ended responses between the two methods in more detail.

A note on response rates

Response rates for IVR studies are hard to generalize at this early stage in the technology's evolution. They can vary with a number of factors, such as whether the study is pure or hybrid IVR, its length, the type of respondent, the nature of the study and the possible use of incentives. In this study we had many factors working against us: the respondents were older, had liquidated all of their accounts and were being asked questions of a sensitive nature (about their personal finances). Still, 90 percent agreed to be transferred to the IVR system, and 50 percent of those went on to complete the entire interview. We consider this the lower end of the response-rate spectrum and are exploring situational variations that could lead to higher rates.
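The two stages compound multiplicatively among respondents who were reached and screened, as the quick sketch below shows (the rates are the ones reported above; the compound figure is simply their product):

```python
# Stage rates among reached-and-screened respondents, as reported above.
transfer_rate = 0.90  # agreed to be transferred into the IVR system
finish_rate = 0.50    # completed the entire interview once transferred

# Overall completion is the product of the two sequential stages.
overall = transfer_rate * finish_rate
print(f"overall completion among screened respondents = {overall:.0%}")  # 45%
```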

Conclusions

The conclusions from this test have to be qualified by the fact that the industry used, financial services, is unique in some respects. Specifically, the average age of the survey respondent in this industry tends to be quite high. Further, this study was conducted on ex-customers of the company who may have less commitment to completing a survey that would benefit the company. Hence any generalization of the results should be undertaken with caution.

  • IVR respondents tend to be younger and this may affect the responses to some questions.
  • It appears that IVR respondents are less satisfied than phone respondents even after controlling for the effect of age. Thus, tracking studies that change methods from phone to IVR may need to account for this difference.

Other than this, there appear to be no significant differences between the two methods. Sample bias (other than the age bias) seems to have been largely averted because this was a hybrid IVR study, which could be why the samples are similar demographically and attitudinally. There might have been more differences had this been a pure IVR study.