Jury still out on mail- and phone-based data collection methodologies

Editor's note: Sherri Cross is manager of public relations with National Research Corp., a Lincoln, Neb., firm specializing in providing market information to health care organizations.

In recent years, as constituencies inside and outside the health care industry have sought to provide a more tangible definition of quality care, satisfaction measurement has earned a pivotal role. The Joint Commission on the Accreditation of Healthcare Organizations has long required hospitals seeking accreditation to assess patient satisfaction levels. Similarly, initiatives introduced by the National Committee for Quality Assurance (NCQA), the preeminent accrediting body of HMOs, influenced health plans to begin measuring the satisfaction levels of their membership. These industry mandates are aimed at making customer satisfaction measurement a key component of organizations' quality improvement efforts. Findings, however, also made their way to health care organizations' marketing departments, with growing numbers publicizing their satisfaction levels. Yet these satisfaction scores, seen by some as a marketing ploy, lacked comparability and meaning because measurement approaches lacked standardization.

The NCQA led the crusade for a standardized health plan member satisfaction initiative. In 1995, the NCQA drafted a pilot instrument and recommended that health plans use a third-party firm to conduct the mail-based survey process on their behalf. Associated with Health Plan Employer Data and Information Set (HEDIS) version 2.5 reporting, this pilot phase guided the NCQA's refinement of the survey's methodological specifications last fall.

Among the methodology issues explored was response rate: the rates achieved through mail data collection in the pilot study fell short of the NCQA's stipulated rate of 50 percent. The second of two reliability and validity studies conducted by National Research Corporation and the Health Institute found response rates ranging from 35 to 73 percent, with an average of 44 percent (1996). This led to the recommendation that the response rate specification be lowered to 40 percent, a level shown to provide valid findings. The NCQA's Committee for Performance Measurement, charged with finalizing instrument standards for the HEDIS 3.0 release, weighed the implications of the suggested change, given the need for industry buy-in to further substantiate the initiative as the industry standard.

Understandable debate arose about the most appropriate survey methodology - phone or mail - to pursue. Regardless of the outcome, a decision for either methodology relied on a defensible position shared by peers within the health care industry and the greater public - a notable concern, given that health care executives are seven times more likely to endorse phone surveys as being superior to mail for satisfaction studies (Response Center 1995).

Yet, while the NCQA's Committee for Performance Measurement seemed to be wavering, the Health Care Financing Administration (HCFA), together with the Agency for Health Care Policy and Research (AHCPR), announced specifications for a mail-administered Medicare Managed Care Beneficiary Satisfaction Survey (AHCPR's Medicare version of its Consumer Assessments of Health Plans Survey). In fact, HCFA set a response rate target of 70 percent of surveyed Medicare Risk or Cost plan beneficiaries, specifying telephone follow-up as necessary to achieve it. An independent vendor will conduct the Medicare CAHPS study this summer, sure to add fuel to this long-standing methodological debate.

Also worth mentioning, the Joint Commission on Accreditation of Healthcare Organizations' ORYX initiative is the most recent driving force behind standardizing performance measurement, including clinical outcomes and satisfaction. The Joint Commission will inevitably enter the methodological debate on many fronts, as it approves the measurement systems health care organizations can contract with to meet new accreditation standards.

A look at how response rates are derived

Despite multiple recommendations that mail data collection become a standardized survey administration practice, this methodology still has its skeptics. Phone-based data collection, whether from experience or expectation, has generated a loyal following of proponents attesting to its strong response rate record. On its face, phone would seem to achieve higher response rates than mail. Looking for validation, the NCQA fielded a phone pilot study that would seem to support what many presumed to be true. Response rates achieved were in the range of 70 percent (NCQA phone-based studies, 9/96), surpassing the 44 percent average response rate shown to be achievable by mail (National Research Corporation and Ware, 9/96).

However, these numbers do not represent an apples-to-apples comparison of methodology effectiveness. The chart illustrates how the mail response rate reflects a percentage of total membership, while the phone percentage represents only those members who answered a phone call. In essence, the perceived advantage of a higher phone response rate rests on an oversight: it excludes those members never given the opportunity to express their views. Thus, contrary to popular belief, both data collection methodologies achieved similar, rather than vastly different, response levels.
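The denominator difference described above can be made concrete with a brief sketch. The counts below are illustrative round numbers chosen to mirror the reported percentages, not the actual pilot-study figures:

```python
# Illustrative sketch: the same completed-interview count yields very
# different "response rates" depending on the base used as denominator.
# All counts are hypothetical round numbers, not study data.

def response_rate(completes, base):
    """Response rate expressed as a percentage of a chosen base."""
    return 100.0 * completes / base

sampled = 1000          # members drawn into the survey sample
mail_completes = 440    # mail: completes are counted against everyone sampled
answered = 560          # phone: members who actually answered a call
phone_completes = 392   # completes among those who answered

mail_rate = response_rate(mail_completes, sampled)            # 44.0
phone_as_reported = response_rate(phone_completes, answered)  # 70.0
phone_comparable = response_rate(phone_completes, sampled)    # 39.2

print(mail_rate, phone_as_reported, phone_comparable)
# -> 44.0 70.0 39.2
```

Recomputed against the full sample, the phone figure lands near the mail figure, which is the article's point: the headline 70 percent reflects a narrower base, not a more effective methodology.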

Looking to existing survey research for justification/evidence

Long-standing analysis of trends within the survey research industry provides insights into the methodological issues faced by health care's performance measurement standardization movement. A more exhaustive review of phone data collection, the Respondent Cooperation and Industry Image Survey, has tracked phone response rates for the past two decades and documents a current figure of 40 percent (The Council for Marketing and Opinion Research, 6/96). This study also reported steadily climbing refusal rates, with 58 percent of those called refusing to participate.

Reflective of this well-established trend, phone products such as answering machines and Caller ID continue to affect response rates. Sixty-eight percent of households have answering machines and half use them to screen their calls. Caller ID shows steady growth with subscribers in 10 percent of households and another 11 percent planning to add this service. Another survey research study supports these findings, adding that 56 percent of current Caller ID subscribers said they always or most of the time used the device to screen calls (Tecket and O'Neil Market Research - Fall 1996). Similarly, telecommunication trends including multiple phone line households, phone number portability and changing area codes will continue to affect phone-based data collection by altering such things as rates of answered lines, unlisted numbers and working phones.

Mail-based data collection suffers from its own unique obstacles. According to the U.S. Postal Service, bad addresses on average produce a 3 to 5 percent nondeliverable rate nationwide. The currency of mailing lists and ZIP code changes contribute to higher percentages of nondeliverables. However, when comparing no/bad addresses to no/bad phone numbers, National Research Corporation's health care industry specific experience shows a 5 percent mail-based nondeliverable rate to be negligible when compared to phone's 18 percent.

Survey research indicates consumers are most comfortable with mail data collection. When asked, 46 percent of consumers said they preferred mail, while 26 percent advocated phone, 18 percent cited in-person formats and 7 percent favored group discussion (CMOR 6/96). Without supporting evidence, the assumption that health care's topical nature nullifies existing response rate trends remains unsubstantiated.

Beyond response rates - weighing other factors

Methodological alternatives utilized within the health care industry cannot be weighed fully on response rate alone, as numerous factors affect the actionability of the information collected. Health care consumers have been shown to report higher satisfaction levels when data are collected by telephone rather than by mail (Medical Outcomes, 1/95). If this bias were constant, its effect on satisfaction scores might present no substantial concern. However, telephone data collection also results in higher reported health status and under-reported chronic conditions, conveying a dangerously flawed picture to health care organizations (Medical Outcomes, 1/95). Perhaps the most direct threat to the goal of a standardized instrument, widely recognized interviewer-introduced biases are yet another variable for which to control. While an auditing system could enforce the majority of interviewing protocols, such a system has not yet been shown to mitigate interviewer bias.

Conversely, as national reports estimate that nearly one-third of Americans do not have a clear understanding of managed care, phone data collection allows interviewers to probe consumer responses and personally address confusing subjects. This issue may be particularly relevant to seniors, as Medicare privatization and coverage options easily blur payer specifics. Field testing instruments, however, whether by phone or mail, can identify those measures requiring modification to facilitate respondents' accurate interpretation and understanding.

The methodological debate inevitably must factor in cost considerations. As growing numbers of industry and government bodies mandate the reporting of performance information, health care organizations must reallocate resources to satisfy these requirements. Managed care organizations, particularly the smaller players, have argued that a tighter regulatory environment will jeopardize the availability of services and the affordability of coverage. Given the Health Care Financing Administration's estimates, Medicare risk and cost managed care plans may incur from $7,000 to $9,800 per contract area to comply with the beneficiary satisfaction survey implemented in 1997. This cost adds to the more than $500,000 to $1 million some health plans have projected they will spend on HEDIS 3.0 reporting, of which satisfaction measurement represents only one component driving expenditures. In fact, as the NCQA requires health plans to show performance improvements, ongoing satisfaction measurement requires a sizable financial investment. Thus, the 50 to 100 percent higher costs involved with phone data collection cannot be overlooked.

Methodological stance within the standardization arena

With 99 percent of HMOs and 80 percent of PPOs measuring their memberships' satisfaction, the standardization movement will continue to direct how health care organizations' performance should be measured, reported and applied. Many health care organizations have opted to pursue a mail methodology for their internal, ongoing measurement initiatives to generate a unified performance perspective, as separate phone and mail data collection within a satisfaction measurement program can deliver conflicting data sets. Thus, the stance of industry players, including the NCQA and HCFA, on mail data collection has created followers and will continue to do so. Given the standardization movement's current progress and the survey research industry's findings, key indicators suggest mail data collection represents a sound course to maintain.