More satisfying satisfaction research

Editor’s note: Demitry Estrin and Ted Chen are senior vice presidents, satisfaction and loyalty research, at Angus Reid Strategies, New York.

Relatively recent advances in technology have elevated the quality of research and the insights that can be garnered via the online medium. Today we have unprecedented flexibility for engaging the respondent in an innovative and interactive way. Through our work with interactive rich-media technology we have discovered that we can stimulate more thoughtful responses and collect better-quality data while at the same time providing an engaging and enjoyable experience for the respondent.

These advances set the stage for redefining how we view satisfaction research and satisfaction questionnaire design in the online medium.

Gained traction

With its roots in the total quality movement of the 1980s, satisfaction measurement gained traction with management over the last three decades as a tool for soliciting feedback from customers regarding their experience and expectations.

Customer satisfaction is typically defined as a process or an outcome. The process definition of satisfaction speaks to the evaluative and psychological process of comparing prior expectations to actual experience with a product. The outcome definition focuses on satisfaction as an end-state of customer experience. In other words, the outcome definition treats satisfaction as the cognitive and emotional state that results from experiencing a given product.

In general, there is a well-established consensus that satisfaction measures customer experience with a product or service relative to pre-purchase, pre-transaction expectations.

The explicit goal of collecting customer satisfaction feedback is to inform management on how customers perceive a service, product or delivery channel. The information is used to benchmark and track performance as well as to inform change and prioritize improvements.

Communication is the implicit and often overlooked goal of client satisfaction initiatives. Each satisfaction survey can and should be a branded message telling clients that the organization cares about their experience and values their feedback.

Given the information goals of satisfaction research, the survey vehicle design deserves careful attention and consideration. Achieving high data quality is the foremost objective in benchmarking and tracking satisfaction. A well-constructed questionnaire that yields thoughtful and meaningful responses sets the stage for a successful satisfaction initiative.

The challenge with satisfaction research is the need to measure perceptions across multiple service and product areas, which often necessitates a lengthy and repetitive questionnaire design. This by itself is a significant impediment to collecting high-quality data, as the survey experience often demands a substantial investment of time and patience. For many respondents, this translates into a negative experience. Given the underlying communication goal of each satisfaction engagement, this is highly counterproductive and should be avoided at all costs.

Unfortunately, most satisfaction surveys, even the good ones, have components that often result in a high percentage of break-offs or incompletes, year-over-year attrition (where a respondent is unlikely to fill out a satisfaction questionnaire after the initial survey experience) and, ultimately, lagging and perpetually decreasing participation rates. All of this impedes your ability to collect representative and reliable data. Given that one of the central objectives of most results-oriented satisfaction initiatives is to inform change by identifying actionable improvement opportunities, reliability of your measurement over time is essential to ensure the ROI and ultimately the long-term viability of your satisfaction program.

In this article we will focus on three core components of satisfaction questionnaire design that impact the respondent experience and the resulting quality of collected data. Specifically, we discuss questionnaire flow, the scale of the satisfaction metric and the visual presentation of the question for eliciting a more accurate response.

Often formulaic

The design of satisfaction surveys is often formulaic. Questions about major aspects of service, also known as superordinates, gauge performance across overarching service categories such as product availability or execution. Superordinates are typically followed by a battery of more specific questions that measure performance across the components that define the overall experience with a specific superordinate. For instance, the overall satisfaction with a salesperson superordinate can be followed by a battery of attributes such as satisfaction with frequency of contact, product knowledge, responsiveness, etc. Typically there are also additional measures that get at the loyalty construct such as likelihood to recommend, first choice, likelihood to repurchase or likelihood to attrite.
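For illustration, here is a minimal sketch in Python of how this formulaic structure might be represented; the section names and attributes are hypothetical examples, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class Superordinate:
    """An overarching service area: one overall question plus an attribute battery."""
    name: str                      # e.g., "Salesperson"
    overall_question: str          # the superordinate-level satisfaction question
    attributes: list[str] = field(default_factory=list)  # the follow-up battery

# Hypothetical example of the classic formulaic layout
salesperson = Superordinate(
    name="Salesperson",
    overall_question="Overall, how satisfied are you with your salesperson?",
    attributes=[
        "Frequency of contact",
        "Product knowledge",
        "Responsiveness",
    ],
)

# Loyalty-construct measures typically appended after the superordinate sections
loyalty_measures = [
    "Likelihood to recommend",
    "First choice",
    "Likelihood to repurchase",
    "Likelihood to attrite",
]
```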

The online medium allows us to break away from the rigidity of the traditional satisfaction survey vehicle. While we still want to gauge satisfaction across all of our service and product areas, we have the option of granting the respondent the flexibility to control what they see and when they see it. While good research design dictates that superordinates should precede attribute-level questions, the relative sequence of what areas we survey first is more subjective. At best, the questionnaire is structured to reflect the actual life cycle of the customer interaction with a company or a product. Usually, however, the sections are ordered by some subjective level of importance to the customer. Unfortunately this level of importance is determined a priori by the company and hence often reflects the hierarchy that the company considers to be valid.

For instance, the execution or salesperson section may precede the back-office or documentation satisfaction section. This subjective measure of importance in structuring our questionnaire may inadvertently impact our results. While product availability may be of foremost importance to the customer, we may first ask them to fill out three sections that deal with everything from salesperson satisfaction to Web site satisfaction to satisfaction with overall execution. By the time customers get to the section that represents the most important component of their experience, they may be fatigued and may answer the questions through the lens of the previous sections they were forced to review first.

Flexibility equals engagement and better-quality data. By allowing respondents to choose which section they fill out first, we allow them to personalize their survey experience. With more control, respondents answer the questions more thoughtfully. Through their selected path, customers give us extra data points that help define how they perceive their interaction with the company, its service and its products. Our flexible design allows us to answer questions without explicitly asking them. For instance, is it product first and salesperson relationship second, or vice versa?
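One lightweight way to capture that path is sketched below in Python; the session class and section names are hypothetical, intended only to show how a respondent's chosen order can be recorded as data in its own right.

```python
from datetime import datetime, timezone

class FlexibleSurveySession:
    """Tracks which sections a respondent chooses, and in what order."""

    def __init__(self, sections: list[str]):
        self.remaining = set(sections)
        self.path: list[tuple[str, datetime]] = []   # (section, time opened)

    def open_section(self, section: str) -> None:
        if section not in self.remaining:
            raise ValueError(f"Section already completed or unknown: {section}")
        self.remaining.discard(section)
        self.path.append((section, datetime.now(timezone.utc)))

    def chosen_order(self) -> list[str]:
        """The implicit data point: what did this customer care about first?"""
        return [name for name, _ in self.path]

# Hypothetical usage
session = FlexibleSurveySession(
    ["Product availability", "Salesperson", "Web site", "Execution"]
)
session.open_section("Product availability")
session.open_section("Salesperson")
print(session.chosen_order())  # ['Product availability', 'Salesperson']
```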

By granting more control to the respondent, we also reduce the perception of how long it takes to complete the survey, hence improving their overall experience with the process.

First points

When we think about satisfaction survey design, the satisfaction metric itself is one of the first points of consideration.

There are dozens of ways to ask the same simple satisfaction question. As researchers, we know that how we design and word the question will impact the responses we receive. First there is the question of scale, and second there is the question of how we display that scale.

When we think about scale we must remember the underlying premise of all satisfaction initiatives: the need to benchmark and track change. The simple truth is that offering more points on the satisfaction scale allows for more sensitivity to change. Of course, another reason for using more points on our satisfaction scale is that scales with at least seven points are more likely to generate normal response distributions. Given that we mostly use parametric tests for our analysis, a normal distribution is a definite plus. The authors prefer the traditional, anchored 10-point scale, where 1 represents “completely dissatisfied” and 10 represents “extremely satisfied.”
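To illustrate why the shape of the distribution matters for analysis, here is a minimal sketch using Python and SciPy that checks 10-point ratings for normality before running parametric tests; the data are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated ratings on the anchored 1-10 scale (illustrative only)
ratings = np.clip(np.round(rng.normal(loc=7.5, scale=1.8, size=500)), 1, 10)

# D'Agostino-Pearson test: a common normality check before parametric analysis
stat, p_value = stats.normaltest(ratings)
print(f"normality statistic = {stat:.2f}, p = {p_value:.3f}")

# With a coarse 5-point scale, responses clump on a handful of values,
# making an approximately normal distribution much harder to achieve;
# more scale points give the distribution room to spread out.
```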

Having established that a scale with more points is preferable for measuring satisfaction, it is important to note that the way we present this scale will often impact the distribution of our responses. Based on our research across both online and mail modalities, two versions of the seemingly identical overall satisfaction question, both using a 10-point scale, yield very different response distributions.

As Examples 1 and 2 show, it is clear that full, horizontal depiction of the scale is preferable for achieving optimal distribution of responses. However, while a seven-to-10-point scale allows for more discrimination from the respondent, it also requires more effort and has the potential of making the survey experience more tedious.
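One way to quantify “very different response distributions” is a simple two-sample comparison; the sketch below, again with invented data, contrasts hypothetical responses from two presentations of the same 10-point question.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated responses to two presentations of the same 10-point question
horizontal = np.clip(np.round(rng.normal(7.2, 1.9, 400)), 1, 10)  # full horizontal scale
compressed = np.clip(np.round(rng.normal(8.1, 1.2, 400)), 1, 10)  # compressed presentation

# Two-sample Kolmogorov-Smirnov test: do the two distributions differ?
stat, p_value = stats.ks_2samp(horizontal, compressed)
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
```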

Survey fatigue and subsequent indifference can result in the common problem of straightlining. This problem is not unique to satisfaction research. Typically it translates into neutral tendency, with respondents consistently picking midpoints on the scale. With satisfaction data, straightlining often takes on a positive skew as well, especially when customers are asked to rate a specific individual, such as their salesperson or advisor. This is a particularly painful phenomenon in the automotive industry, where survey gaming is often part of the status quo.
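As an illustrative check (not a prescribed method), the sketch below flags potential straightliners in a rating grid by looking at within-respondent variance; the threshold is an arbitrary placeholder.

```python
import numpy as np

def flag_straightliners(grid: np.ndarray, var_threshold: float = 0.25) -> np.ndarray:
    """Flag respondents whose ratings across a grid barely vary.

    grid: respondents x items matrix of 1-10 ratings.
    Returns a boolean mask of suspected straightliners.
    """
    return grid.var(axis=1) < var_threshold

# Hypothetical grid: three respondents rating five attributes
grid = np.array([
    [8, 8, 8, 8, 8],    # classic straightliner (often positively skewed, too)
    [5, 5, 6, 5, 5],    # neutral tendency
    [3, 9, 6, 7, 4],    # engaged, differentiated responses
])

print(flag_straightliners(grid))  # [ True  True False]
```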

So how do we reconcile the needs and benefits of our satisfaction scales with the fatigue and straightlining that limits the reliability of our data? The answer is enhanced visual engagement.

In the online medium, we have the unprecedented flexibility to redefine our customer’s survey experience. We can step away from the flat, two-dimensional format to a visually-assisted interface.

While visual scales are not new, they have not been used extensively in survey research. To date, the biggest limitation has been the extra effort required to measure and record the answers provided. Recent advances in technology, however, allow us to liberate the respondent from the traditional survey interface.

Figure 1 is an example of how we can turn a traditional question into a fun and engaging exercise.

Our own research-on-research, which we presented at CASRO, shows that infusing online surveys with visual questions impacts respondent satisfaction with the survey process and their perception of the time it takes to complete a survey. To be more specific, respondents who complete a visual questionnaire feel that it takes them less time to go through the survey than those who complete the flat version of the same instrument.

We have also found that visual questions impact the quality of the data we collect. In our test, visual questions resulted in broader use of the attitudinal scale than the flat version of the same survey. Visual questions tend to move respondents away from moderate positions to stronger ones, so we are less likely to see neutral and “don’t know”-type responses.

Other researchers have also noted the benefits of visually-assisted scales. Research conducted by Couper et al. examined the difference between traditional radio-button scales and visually-assisted slider scales, where the respondent drags a slider to indicate agreement with a statement. Couper et al. found that visually-assisted sliders were significantly less likely to result in respondents using extreme values of the scale.

The higher response variability of the visual exercise may be explained by the drag-and-drop interaction that is embedded within each activity (Figure 2). The design of the question draws the eye and the respondent’s attention to the statement, putting the scale into a different visual context. The respondent also has to put more thought into their response as they drag the slider from the start position to their designated level of satisfaction. In our view, this results in a more accurate representation of their true feelings and perceptions.
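To make the mechanics concrete, here is a minimal sketch of mapping a slider’s continuous drag position onto the anchored 1-10 scale; the normalized position and the rounding rule are assumptions for illustration, and in practice the raw continuous position could also be stored for finer-grained analysis.

```python
def slider_to_rating(position: float, scale_max: int = 10) -> int:
    """Map a normalized slider position (0.0 = far left, 1.0 = far right)
    onto the anchored 1..scale_max satisfaction scale."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("position must be between 0.0 and 1.0")
    # Linear mapping, then round to the nearest scale point
    return max(1, min(scale_max, round(1 + position * (scale_max - 1))))

# Hypothetical drag: respondent leaves the slider just past the middle
print(slider_to_rating(0.56))  # 6
```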

While visual scales may actually take longer to complete, as respondents dedicate more thought and action to answer each question, our research indicates that the perception of the entire experience is more positive than what we typically have with often endless and repetitive traditional satisfaction grids.

All of these findings support the notion that visually-assisted questions and scales reduce respondent fatigue as well as limit neutral tendency and positive skew. The result is a better experience for the respondent and better-quality data for the researcher.

Disengaging components

The validity and long-term viability of a satisfaction program often hinges on the quality of the questionnaire design. Even the best satisfaction surveys have disengaging components that result in poor-quality data, a higher number of dropouts and low participation rates. Given that satisfaction surveys are branded messages that communicate the values of your organization and the importance of your client relationships, you don’t want customers to endure a questionnaire experience that elicits negative feelings.

Today’s technology allows us to break away from the traditional survey design. Satisfaction research stands to benefit from these recent advances as we give respondents more control and more satisfaction from their survey experience.

Our approach and our research show that you can achieve better-quality data with interactive visual design. You can now boost the power of your research by creating a survey that truly reflects how much you value your customer relationships.

References

“Maximizing Respondent Engagement Through Survey Design,” CASRO Panel Conference 2008, Prepared by Vision Critical and Angus Reid Strategies.

Couper, M.P., Tourangeau, R., Conrad, F., and Singer, E. (2006). “Evaluating the Effectiveness of Visual Analog Scales: A Web Experiment.” Social Science Computer Review, 24, 227.

Yi, Youjae (1990). “A Critical Review of Consumer Satisfaction.” In Valerie A. Zeithaml (ed.), Review of Marketing 1990. Chicago: American Marketing Association.