Editor’s note: Dave Langley, M.A., is director of strategic research & analysis, and Lori Cook, M.A., is a senior research analyst, at Blue Cross Blue Shield of Maine, Portland.

Most organizations have long-standing experience conducting satisfaction assessment and applying it to quality improvement programs. However, recent work by one health care organization - Blue Cross Blue Shield of Maine - with statistical models to determine the key drivers of satisfaction has emphasized points that any organization should consider as it works to design a relevant and actionable customer satisfaction program:

  • the ability to identify the "real" individual satisfaction drivers is dependent on the integrity of underlying study designs;
  • a single-method/single-trait program design is self-limiting; robust validation of, and triangulation on, the "real" drivers are necessary;
  • a multi-method/multi-trait approach to identifying "real" drivers should include the following components: quantitative data and models, qualitative validation, and independent review and audit.

Addressing these issues is critical given a) the need to maximize the business value of survey program expenditures and b) the significant impact that customer satisfaction outcomes have on financial performance (customer acquisition & retention) and marketing performance (positioning, accreditation). Identifying the "real" drivers is a necessary component of optimizing the satisfaction data for priority setting, policy direction, and resource allocation.

Moving to a multi-method/multi-trait approach to the identification of satisfaction drivers

Step 1: Annual Benchmark Customer Satisfaction Survey

Until recently, Blue Cross Blue Shield of Maine’s (BCBSME’s) approach to identifying satisfaction drivers depended on a single source of quantitative data on managed care member satisfaction: BCBSME’s Annual Benchmark Customer Satisfaction Survey (Benchmark Study). This study, a telephone survey of 600 managed care members (samples are stratified by program type), was developed to evaluate service quality and satisfaction, and has been administered annually since 1993.

The survey is designed to comprehensively measure health plan attributes related to plan administration, health care quality and access, service issues, plan communications, and issues related to member education and knowledge of managed care procedures. To develop a model of satisfaction drivers, point-in-time drivers analyses1 using factor and regression techniques were conducted with the 1996 and 1997 survey data sets to identify significant predictors of member satisfaction with their health plan. Based on these analyses, the following drivers were identified:

  • Plan administration (principal driver)
      - Medical bills and claim payments
      - Satisfaction with benefits
      - Perceived ease of plan use
  • Customer service
      - Evaluation of the customer service representative (e.g., whether the customer service representative was courteous, knowledgeable, etc.)
      - Number of times calling customer service
  • Health care access
      - Perceived limits in access/choice re: primary care physicians (PCPs), specialists, services and treatments
</gr-replace>
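
For illustration, a point-in-time drivers analysis of the kind described above can be sketched as follows: survey items are standardized and reduced to a small number of factors, and overall plan satisfaction is then regressed on the factor scores, with the relative size of the coefficients indicating each factor's importance. The sketch below (in Python) is a minimal, hypothetical example of that approach; the file name, column names, factor count, and factor labels are placeholders and do not reflect BCBSME's actual data or model specification.

    # Minimal sketch of a point-in-time drivers analysis: factor
    # reduction followed by regression. File, column, and factor
    # names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    # One wave of survey responses; rows = members, columns = items
    # (all columns are assumed to be numeric ratings).
    survey = pd.read_csv("benchmark_1997.csv").dropna()

    outcome = survey["overall_satisfaction"]
    items = survey.drop(columns=["overall_satisfaction"])

    # Standardize the items and extract a small number of factors;
    # in practice, labels would be assigned after inspecting loadings.
    scaled = StandardScaler().fit_transform(items)
    factor_scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(scaled)
    factors = pd.DataFrame(
        factor_scores,
        columns=["plan_administration", "customer_service", "access"],
        index=survey.index,
    )

    # Regress overall satisfaction on the factor scores; larger
    # coefficients indicate more important drivers.
    model = sm.OLS(outcome, sm.add_constant(factors)).fit()
    print(model.summary())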

These findings were used to inform priority setting, policy direction, and resource allocation for service quality improvement. Although the findings received some validation through consumer focus groups and quality committee discussions, a more robust validation approach was not viewed as necessary because the "drivers" appeared to align with existing understandings of key customer issues.

Step 2: National Standardized Health Care Survey 2

Further drivers analysis was initiated following completion of BCBSME’s administration of the HEDIS 3.0/1997 Member Satisfaction Survey, an annual mail survey of 3,720 managed care members administered via a standardized national study design. While HEDIS shares a focus with the Benchmark Study on measuring health plan attributes related to plan administration, health care access and health care quality, the underlying study design differs significantly in a number of areas. These include: method of administration (mail vs. telephone); definitions of population and sampling frame; and structure and content of individual measures (i.e., scales, item wording, and item placement).

HEDIS also differs in terms of specific content areas. For example, it includes items related to functional health status, satisfaction with health care outcomes, and plan costs that are not included in the Benchmark Study. HEDIS lacks measurement items evaluating service issues (such as evaluation of customer service contacts), plan communications, and issues related to member education and knowledge of managed care procedures that are included in the Benchmark Study.

A point-in-time drivers analysis (using a methodology similar to that applied to the Benchmark Study) was conducted with the 1997 data. The drivers identified can be categorized as follows:

  • Plan administration (principal driver)
      - Services covered by the plan
      - Cost
      - Availability of information about plan benefits and costs
  • Health care
      - Quality (e.g., thoroughness of treatments, attention given to what patients had to say, amount of time with doctors and staff, overall quality of care and services)
      - Access (e.g., ease of choosing a personal physician, number of doctors to choose from, ease of making appointments, delays or difficulties in receiving care, difficulties getting referrals)
      - Outcomes (e.g., perceptions of how much respondents were helped by the care they received)

Although the two models overlap to some extent, in that both generally identify health plan administrative features and access to health care as significant drivers of satisfaction, the model generated by the HEDIS analysis differed significantly in the following areas:

  • evaluations of health care quality and outcomes were significant predictors of satisfaction; although the Benchmark Study includes measurement items related to quality and outcomes, these items were not predictive in Benchmark Study modeling;

  • the HEDIS data also identified drivers related to cost and coverage (items not included in the Benchmark Study);

  • because the HEDIS instrument did not include items related to evaluation of contact with customer service representatives, this service aspect was not included in modeling outcomes.

Although the difference in drivers is not surprising given the methodological and study design differences between the Benchmark and HEDIS studies, the new HEDIS-based findings (identifying drivers related to health care quality, outcomes, cost, and coverage) clearly have significant implications for BCBSME’s priority setting, policy direction, and resource allocation. These findings also underscore how important it is for organizations to recognize that conclusions about the mix and importance of individual satisfaction drivers depend on the underlying study designs.
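
As a simple illustration of how such design-dependent differences can be laid side by side, the sketch below compares driver coefficients from two hypothetical models, one per instrument; the labels and values are placeholders rather than BCBSME's actual results.

    # Illustrative side-by-side comparison of driver importance from two
    # differently designed surveys; labels and coefficients are
    # hypothetical placeholders.
    import pandas as pd

    benchmark_drivers = {
        "Plan administration": 0.41,
        "Customer service": 0.28,
        "Health care access": 0.22,
    }
    hedis_drivers = {
        "Plan administration": 0.35,
        "Health care access": 0.20,
        "Health care quality/outcomes": 0.26,
        "Cost and coverage": 0.18,
    }

    comparison = pd.DataFrame({"Benchmark": benchmark_drivers, "HEDIS": hedis_drivers})

    # Drivers measured by only one instrument appear as NaN in the other
    # column, making design-dependent gaps easy to spot.
    print(comparison.sort_values("HEDIS", ascending=False))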

Step 3: Qualitative validation

To further reconcile, understand, and clarify the member satisfaction drivers identified through statistical modeling, BCBSME undertook 1) a review of the national literature on satisfaction drivers and 2) consumer focus groups.

3a. Review of national literature
BCBSME completed a comprehensive review of commercial and published studies regarding health care and health plan satisfaction drivers, primarily to gain national and regional perspective on the local findings. The review highlighted key points for our studies’ findings and their application to quality improvement work3:

  • not surprisingly, there is notable consistency between the satisfaction drivers identified for local (Maine) consumers and those reported for health plan consumers in other areas of the U.S.;

  • among these drivers, health plan attributes (plan administration, coverage, customer service, costs) and health care attributes (perceived quality of care and outcomes, access) are both important in managing overall satisfaction rates.

3b. Consumer focus groups
To further develop and clarify the "real" set of satisfaction drivers, BCBSME also conducted consumer focus groups to define and understand the outcomes of the statistical modeling. The primary objectives of the focus groups were to validate and prioritize the satisfaction drivers identified in the quantitative analyses based on the Benchmark and HEDIS Studies. Focus group participants were given a survey instrument containing the 32 items that had been identified as satisfaction drivers in those analyses and were asked to identify and prioritize the most important items.

Participants’ rankings of the most important drivers confirmed a "key drivers" set that includes the drivers identified in both studies:

  • Cost/coverage
  • Health care access
  • Plan administration (e.g., bills, ease of use, availability of information)
  • Customer service
  • Health care

In addition to prioritizing this set of drivers, participants discussed specific actions that would be relevant for implementing improvements.
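
One simple way to aggregate participant rankings of this kind, sketched below, is to score each item by how highly participants rank it (a Borda-style count); the participant lists shown are hypothetical and are included only to illustrate the mechanics.

    # Hypothetical sketch of aggregating focus group priority rankings
    # with a simple Borda-style score; the rankings are placeholders.
    from collections import Counter

    # Each list is one participant's top items, most important first.
    rankings = [
        ["Cost/coverage", "Health care access", "Plan administration"],
        ["Plan administration", "Customer service", "Cost/coverage"],
        ["Health care", "Cost/coverage", "Health care access"],
    ]

    scores = Counter()
    for ranking in rankings:
        for position, item in enumerate(ranking):
            # Items ranked higher receive more points.
            scores[item] += len(ranking) - position

    for item, score in scores.most_common():
        print(f"{score:2d}  {item}")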

Step 4: Independent methodology audit

As a final step in the process of validating findings and establishing further confidence in their reliability, an independent firm was engaged to critically review the analytical and validation methodologies. This engagement provided an opportunity for a methodological critique of study designs, analytical designs, and interpretations of findings. Although this audit endorsed in-place practices and further confirmed the validity of conclusions reached regarding key drivers, it also highlighted additional methodological enhancements needed to continue developing these understandings:

  • drivers analyses should use evaluation designs that integrate health care delivery/quality issues and health plan issues;
  • further refinements can be obtained by distinguishing between sick and healthy consumers;
  • implementing longitudinal modeling would identify drivers of change, rather than drivers of point-in-time differences (a minimal sketch of this approach follows this list);
  • improved understanding of drivers is likely to follow from further methodological enhancements (e.g., integrating behavioral data, use of choice-based research designs).
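
As a minimal sketch of what longitudinal modeling might look like, assuming two survey waves that can be matched on a member identifier, the change in overall satisfaction can be regressed on changes in the driver items. The file and column names below are hypothetical placeholders, not BCBSME's actual data.

    # Minimal sketch of a longitudinal (change-score) drivers analysis,
    # assuming two survey waves matched on a member identifier; file and
    # column names are hypothetical, and all columns are assumed numeric.
    import pandas as pd
    import statsmodels.api as sm

    wave1 = pd.read_csv("satisfaction_1997.csv").set_index("member_id")
    wave2 = pd.read_csv("satisfaction_1998.csv").set_index("member_id")

    # Keep members who responded in both waves and compute change scores.
    both = wave1.index.intersection(wave2.index)
    delta = (wave2.loc[both] - wave1.loc[both]).dropna()

    outcome_change = delta["overall_satisfaction"]
    driver_changes = delta.drop(columns=["overall_satisfaction"])

    # Regressing the change in overall satisfaction on changes in the
    # driver items identifies drivers of change over time, rather than
    # drivers of point-in-time differences between members.
    model = sm.OLS(outcome_change, sm.add_constant(driver_changes)).fit()
    print(model.summary())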

Next steps and other considerations

BCBSME is presently implementing the following steps to optimize the business value of these analyses, in addition to expanding the satisfaction measurement program’s ability to provide a multi-method/multi-trait and self-validating approach to decision support:

  • employee and management team communications are being implemented regarding key drivers of satisfaction; this is being linked to related efforts to educate employees on health care quality measurement (HEDIS), impacts on quality improvement, and role of customer satisfaction in BCBSME’s business strategy;
  • member-defined perceptions of health care access are being further investigated and addressed;
  • the integration of "health plan" and "health care" factors is being further defined and addressed;
  • educational interventions are being established regarding the effects of coverage type changes for members (e.g., from fee-for-service to HMO plans) on their perceptions of the amount and quality of time given to them by their physicians;
  • BCBSME is exploring ways to differentiate between the needs and perceptions of "sick" vs. "healthy" members; starting with the integration of service quality and preventive health programs through selected satisfaction-related studies (e.g., diabetics, asthmatics, heart patients);
  • satisfaction-related issues among non-managed care members are being further defined and developed;
  • annual satisfaction studies are being redesigned to support determinations of drivers of satisfaction change (longitudinal study designs) in lieu of or in addition to current approaches which determine drivers of satisfaction differences (point-in-time study designs);
  • HEDIS 3.0/1998 and the Benchmark Study are being used to begin the implementation of research design improvements (e.g., added analysis from administrative and external sources, technical improvements such as longitudinal modeling and the use of pilots/experiments, and incorporation of developing industry practices).

References

1 "Point-in-time drivers analyses" refers to predictive modeling applications which use a single time-point dataset (rather than, for example, longitudinal modeling to identify drivers of change between time-points). Although point-in-time analyses lack some robustness for identifying key predictors of satisfaction change, these analyses have been proven to reliably identify explanations of variance in the data.

2 HEDIS 3.0/1997 Member Satisfaction Survey. HEDIS is a registered trademark of the National Committee for Quality Assurance (NCQA), an independent, non-profit organization that measures quality in managed care plans. The Health Plan Employer Data and Information Set (HEDIS) is a standardized set of definitions and specific methodologies designed to enable health plans, employers, and consumers to evaluate and trend health plan performance.

3 Selected examples of studies included in this review:
1997 Novartis Report on Member Satisfaction within Managed Care. Novartis Pharmaceuticals Corporation, East Hanover, N.J.
H.M. Allen, Jr. and W.H. Rogers, "The Consumer Health Plan Value Survey: Round Two," Health Affairs, July/August 1997, pp. 156-166.
The Road to Increased Market Share: Meeting Changing Consumer Expectations About Health Care. Sachs Group, Evanston, Ill.