Survey system manages the gateway to university hospitals

Editor’s note: David Drachman is director, marketing research at University HealthSystem Consortium, Oak Brook, Ill.

The culture of customer-focused service came late to the health care industry, and later still to university hospitals. The specialist and subspecialist physicians who predominated at these institutions were accustomed to waiting for referrals from primary care doctors, and the administrators of these institutions felt that the position of academic health centers as premier teaching and research facilities would guarantee that such referrals would continue unabated.

However, rising health care costs, the explosive growth of managed care plans, the increased purchasing power of local business coalitions, and declines in Medicare and Medicaid reimbursement have changed all that. Academic health centers (AHCs) found themselves competing head-to-head for referrals with nearby community hospitals while still trying to finance their teaching and research missions. In some markets, AHCs began to lose major insurance contracts to community hospitals that are unencumbered by teaching and research overhead. As a leader of one business coalition remarked at a meeting with staff at an AHC, "We know you have to train new doctors; we just don’t want to have to pay for it."

Administrators at AHCs suddenly realized that referrals from primary care physicians were a vital gateway to their institutions that could no longer be taken for granted. To protect and expand this gateway, these AHCs began to aggressively recruit more primary care physicians in order to generate their own referrals to staff specialists. Even an extensive network of primary care physician offices does not guarantee success, however. To keep the waiting rooms filled, it is essential to systematically monitor the office outpatients’ perceptions of their experiences.

This article describes one such monitoring system implemented by the University HealthSystem Consortium (UHC), an alliance of 78 AHCs. This monitoring system used a standardized mail questionnaire to measure satisfaction with key aspects of the outpatient visit, identify benchmarks for excellent performance, and pinpoint patient hot buttons that are crucial to the overall office visit experience. Primary care physician offices at 15 UHC member institutions participated in the project.

Data collection

A random sample of patients who had visited primary care clinics during January 1997 received a questionnaire in the mail after their visits. The questionnaire, designed and pilot tested by the Picker Institute in Boston, asked patients to report on 16 key aspects of their office visit experience, and to rate their overall satisfaction with the visit. These aspects are listed below:

  • ease in obtaining an appointment;
  • length of waiting time in waiting room;
  • length of waiting time in examination room;
  • provider’s attentiveness to patient;
  • clarity of provider’s answers to patient’s questions;
  • ease with which provider inspired confidence and trust;
  • respectful treatment of patient;
  • provider’s involvement of patient in care decisions;
  • provider’s explanation of what to do if symptoms continued;
  • explanation of medication’s side effects;
  • sufficiency of information given to patient;
  • length of time spent with provider;
  • explanation of how patient would learn about test results;
  • explanation of test results to patient;
  • provider’s adequacy in addressing patient’s main reason for visit; and
  • organization of the office.

To achieve an adequate response rate, a three-wave mailing was conducted. Ten days after the initial mailing, all potential respondents received a reminder postcard. This postcard was followed by a second mailing of the survey questionnaire to all patients who had not returned their surveys. A total of 2,455 patients responded, representing 49 percent of eligible and reachable respondents. Response rates at the individual institutions ranged from 37 percent to 53 percent.

Problems with care

A problem score was computed for each patient who responded to the survey. Each aspect of care listed above was classified as a problem if the behavior in question was not performed or was only partially performed. Each patient thus received a problem score ranging from zero to 16.
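To make the scoring concrete, here is a minimal sketch in Python of how such a problem score might be computed from coded responses. The item names and the three-level coding ("yes"/"partial"/"no") are illustrative assumptions, not the actual Picker Institute coding.

```python
# Sketch: per-patient problem score, assuming each of the 16 items is
# coded "yes" (fully performed), "partial", or "no" (not performed).
# Item names are hypothetical labels for the aspects listed above.

ITEMS = [
    "appointment_ease", "wait_waiting_room", "wait_exam_room",
    "provider_attentiveness", "answer_clarity", "confidence_trust",
    "respectful_treatment", "involvement_in_decisions", "symptom_followup",
    "med_side_effect_explanation", "information_sufficiency",
    "time_with_provider", "test_result_notification",
    "test_result_explanation", "main_reason_addressed",
    "office_organization",
]

def problem_score(responses: dict) -> int:
    """Count items reported as not performed or only partially performed."""
    return sum(1 for item in ITEMS if responses.get(item) in ("partial", "no"))

# Example: a patient reporting two problems scores 2 out of a possible 16.
patient = {item: "yes" for item in ITEMS}
patient["wait_exam_room"] = "no"
patient["information_sufficiency"] = "partial"
print(problem_score(patient))  # 2
```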

Across all participating institutions, patients reported an average of three problems each, almost 20 percent (3 of 16) of the potential problem areas assessed. The most frequent sources of problem reports were as follows (a short tally sketch appears after the list):

  • length of waiting time in examination room (problems reported by 28 percent of patients);
  • sufficiency of information given to patient (28 percent); and
  • organization of the office (27 percent).
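These per-item frequencies are simple tallies across respondents. A minimal sketch, continuing the hypothetical coding from the previous example (the responses shown are invented):

```python
# Sketch: share of respondents reporting a problem on each item.
# `patients` holds response dicts coded "yes"/"partial"/"no" as before.

patients = [
    {"wait_exam_room": "no", "office_organization": "yes"},
    {"wait_exam_room": "partial", "office_organization": "no"},
    {"wait_exam_room": "yes", "office_organization": "yes"},
]

for item in ("wait_exam_room", "office_organization"):
    n_problem = sum(p.get(item) in ("partial", "no") for p in patients)
    print(f"{item}: {n_problem / len(patients):.0%} reported a problem")
```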

Identifying benchmarks for patient satisfaction

The participating institutions were coded with the numbers 1 through 15. Patient satisfaction varied substantially across the participating institutions, both for individual survey items and for overall satisfaction. For example, the last item on the survey asked patients whether they would recommend the office to others, with three possible response categories: "yes, definitely," "yes, probably," and "no." Across institutions, the percentage of patients responding "yes, definitely" ranged from 62 percent to 80 percent, while the percentage responding "no" ranged from 2 percent to 13 percent.

For each item on the survey, statistical process control (SPC) analyses identified several institutions that significantly outperformed the averages (Spoeri, 1991). These benchmark institutions were targeted for follow-up site visits and conference calls to uncover the factors that appeared to explain their superior performance in satisfying patients.
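The specific SPC technique is not detailed here; one common choice for proportion data of this kind is a p-chart-style comparison that flags institutions whose rates fall outside three-sigma control limits around the pooled average. The sketch below assumes that approach, with invented institution counts.

```python
import math

# Hypothetical counts of "yes, definitely" responses per institution:
# (institution code, definite recommendations, respondents). Invented data.
data = [(1, 110, 170), (2, 170, 200), (9, 144, 180), (12, 98, 160)]

total_yes = sum(yes for _, yes, _ in data)
total_n = sum(n for _, _, n in data)
p_bar = total_yes / total_n  # pooled proportion across all institutions

for inst, yes, n in data:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # binomial SE at this sample size
    p = yes / n
    if p > p_bar + 3 * sigma:
        flag = "above upper limit (candidate benchmark)"
    elif p < p_bar - 3 * sigma:
        flag = "below lower limit"
    else:
        flag = "within limits"
    print(f"Institution {inst}: {p:.1%} vs. pooled {p_bar:.1%} -> {flag}")
```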

What really matters to office outpatients?

One goal of the project was to identify the customer hot buttons that were crucial in forming patients' impressions of the services received during the office visit. A common strategy for pinpointing customer hot buttons is to relate satisfaction scores on individual survey items to a measure of overall satisfaction with the service encounter. However, this relationship is not necessarily linear, as recent research in patient satisfaction (for example, Mittal and Baldasare, 1996) has shown.

On some attributes of service ("dissatisfiers"), an unsatisfactory experience is crucial to forming a negative impression, whereas a satisfactory experience on the same attribute may have little impact on the formation of a positive impression. For example, in the physician office setting, a failure to clearly explain test results may help foster a negative impression of the visit, but a clear explanation of test results may not add much to a positive impression.

Conversely, on other attributes ("satisfiers"), an above-average experience leads to a positive impression, but a substandard experience may not add much to a negative impression. Finally, in some cases the same attribute may be both a satisfier and a dissatisfier, in which case the relationship is approximately linear.

To identify satisfiers, a logistic regression procedure was used to model the probability that a patient would respond "yes, definitely" when asked whether they would recommend the office to family and friends, using the patient's responses to the 16 survey items as predictors. To identify dissatisfiers, a separate model was constructed for the probability that a patient would respond "no."
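A minimal sketch of this two-model approach using scikit-learn is shown below. The synthetic data, predictor set, and coefficients are all invented for illustration; the actual analysis used patients' responses to all 16 items.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in predictors: 1 = problem reported on item, 0 = no problem.
# Three items shown for brevity; the real models used all 16.
X = rng.integers(0, 2, size=(n, 3))
items = ["confidence_trust", "office_organization", "appointment_ease"]

# Invented outcomes: a definite recommendation and a refusal to recommend.
p_def = 1 / (1 + np.exp(-(1.5 - 1.2 * X[:, 0] - 1.0 * X[:, 1] - 0.1 * X[:, 2])))
y_definitely = rng.random(n) < p_def
p_no = 1 / (1 + np.exp(-(-3.0 + 1.3 * X[:, 0] + 1.1 * X[:, 1] + 0.9 * X[:, 2])))
y_no = rng.random(n) < p_no

# Two separate models, mirroring the satisfier/dissatisfier analysis.
m_satisfier = LogisticRegression().fit(X, y_definitely)
m_dissatisfier = LogisticRegression().fit(X, y_no)

# A large coefficient in only the first model marks a satisfier, in only
# the second a dissatisfier; large coefficients in both suggest an
# approximately linear attribute.
for item, c_sat, c_dis in zip(items, m_satisfier.coef_[0], m_dissatisfier.coef_[0]):
    print(f"{item}: definite-recommend coef {c_sat:+.2f}, no-recommend coef {c_dis:+.2f}")
```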

The results showed that the degree to which the provider inspired confidence and trust and the organization of the office played a strong role in both satisfaction and dissatisfaction. On the other hand, ease in obtaining an appointment and the degree of respectful treatment of the patient were primarily dissatisfiers: a problem in these areas increased the probability of not recommending the office to others, but a lack of problems in these areas did little to increase the probability of a definite recommendation. Finally, length of waiting time in the examination room and clarity of the provider's answers to the patient's questions were primarily satisfiers, playing a greater role in satisfaction than in dissatisfaction.

From data to action

By providing comparative data from 15 peer institutions, the survey results gave the participating institutions an external frame of reference for assessing their success in satisfying their patients. Without such an external yardstick, it would have been very difficult for office administrators and staff to know what levels of satisfaction represent good or poor performance. The survey results identified institutions 2 and 9 as above-average performers in a number of areas. These institutions were visited to identify the factors critical to their success in achieving high patient satisfaction scores. The factors are listed below.

Critical Success Factors

  • Support staff centrally located and cross-trained;
  • "Urgent care provider" designated for patients wishing to be seen within one or two days;
  • Expanded hours of operation;
  • Central access center for patient appointments and information requests;
  • Prescription refill protocol to reduce the number of calls needing physician attention;
  • Use of mystery shoppers to improve telephone response times;
  • Patient reminder calls the day prior to the visit to reduce the no-show rate.

The results of the site visits were compiled in a report that was shared with all survey participants. Later this year participants will attend a meeting to discuss the results further and select partners for benchmarking activities.

For each participating institution, a separate analysis of the key drivers of patient satisfaction was undertaken. This allows office administrators and staff to concentrate their improvement efforts on the areas most likely to affect overall patient satisfaction.

By identifying better performers and patient satisfaction hot buttons in a systematic and standardized way, the measurement and follow-up system described here has boosted our member institutions' efforts to improve quality in the primary care physician's office, an increasingly vital customer gateway to the academic hospital.

References

Mittal, V., and Baldasare, P.M., "Eliminate the negative," Journal of Health Care Marketing, 1996, 16(3), 24-33.

Spoeri, R.K., "The inspection of data," in Longo, D.R., and Bohr, D. (eds.), Quantitative Methods in Quality Management: A Guide for Practitioners, Chicago: American Hospital Publishing, 1991.

Participating Institutions

  • University Hospital of Arkansas
  • Bowman Gray/Baptist Hospital Medical Center
  • Medical Center at UCSF
  • The University of Connecticut Health System, John Dempsey Hospital
  • University Medical Center of Eastern Carolina University
  • Froedtert Memorial Lutheran Hospital
  • Hermann Hospital at the University of Texas Health Science Center
  • University of Kansas Hospital
  • University of Kentucky Hospital
  • University of Massachusetts Medical Center
  • University of North Carolina Hospitals
  • University of Tennessee-Bowld Hospital
  • University of Virginia Health Sciences Center
  • University of Washington Medical Center
  • University of Wisconsin Hospital and Clinics