Satisfaction procured

Editor’s note: Victor Crain is vice president, technology division, ICR/International Communications Research, a Media, Pa., research firm.

Unisys U.S. Federal Government Group has been serving the federal market for more than four decades. The Group is an information technology services and solutions provider to the federal government, selected U.S. public sector organizations and state Medicaid agencies.

Unisys’ successful track record is based in part on understanding what it takes to serve the federal market and on careful attention to customer requirements. As a sign of that success, in 1998 Unisys received six major customer awards, including two Hammer Awards from Vice President Al Gore and an award from the Department of State for outstanding service. Unisys takes customer satisfaction very seriously.

Finite number

Satisfied customers are vital to any successful enterprise, especially when there are a finite number of customers, and each one accounts for millions of dollars in potential business. But imagine a situation in which these customers share information about how satisfied they are with their vendors, and where vendor ratings on select satisfaction measures are a mandated part of procurement decisions.

This is the environment facing companies serving federal agencies. Using information technology as an example, despite the size of the federal government, there are a relatively small number of decision-makers controlling several billion dollars in annual spending on technology services and products. In a highly formalized decision-making process, vendor satisfaction ratings can account for up to half of the criteria for procurement decisions.

To be successful as a vendor to the federal government over time, you have to make sure your clients are highly satisfied with the products and services you provide. This is more difficult to do in the technology sector today. Contracts are fulfilled globally, and technology vendors now have less absolute control over what they provide. Technology has moved beyond the point where any one company can manufacture all products and services needed by a customer for a specific assignment. A critical element in servicing customers involves effective management of third-party subcontractors. Much of what affects customers may not be visible directly to vendor management.

One of the key tools in monitoring and managing performance is the customer satisfaction measurement program that the Unisys U.S. Federal Government Group has put in place. Obviously, when dealing with a small number of strategic customers, there is ample opportunity for vendor management to hear about problems or complaints. However, this ad hoc feedback may not provide a fair picture of the overall relationship with a customer. It certainly limits the opportunity for proactive problem identification and prevention. The systematic feedback generated by the survey provides a balanced view, as well as insights that can contribute both to the conduct of ongoing work and to the design of future proposals and project plans.

Federal procurement policies

Federal procurement is based on a competitive bidding process. The criteria used in selecting the winning bid are explicitly stated in the request for information (RFI) or request for proposal (RFP). The weighting assigned to each criterion is denoted by a number of points (e.g., out of a total of 100) assigned to the item.
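To make the arithmetic concrete, here is a minimal sketch of points-based scoring. The criteria names and point allocations are hypothetical, chosen only to illustrate the mechanics; actual RFPs define their own criteria and weights.

    # Minimal sketch of points-based bid scoring. Criteria and weights are
    # hypothetical; each criterion carries a maximum number of points and
    # the maxima sum to 100.
    CRITERIA_POINTS = {
        "technical_approach": 30,
        "past_performance": 50,  # satisfaction/past performance can carry up to half
        "price": 20,
    }

    def score_bid(ratings):
        """Combine per-criterion ratings (0.0-1.0) into a total score out of 100."""
        return sum(points * ratings.get(name, 0.0)
                   for name, points in CRITERIA_POINTS.items())

    # A vendor strong on past performance but middling on price:
    print(score_bid({"technical_approach": 0.8, "past_performance": 0.9, "price": 0.6}))
    # 24 + 45 + 12 = 81 points out of 100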

In 1994, the federal government formalized customer satisfaction as a criterion in procurement decisions with passage of the Federal Acquisitions Streamlining Act. The Office of Federal Procurement Policy issued a report on best practices for considering vendor past performance in procurement decisions in May 1995. In the federal government’s view, “The use of past performance as an evaluation factor in the contract award process...enables agencies to better predict the quality of, and customer satisfaction with, future work. It also provides the contractors with a powerful incentive to strive for excellence.”1 This logical view of vendors is not so very different from how consumers think about service providers.

The inclusion of past performance measures was a major break from a traditional, proposal-driven selection approach. “To select a high quality contractor, commercial firms rely on information about a contractor’s past performance as a major part of the evaluation process. The government, on the other hand, for large contracts attempts to select a quality contractor by analyzing elaborate proposals describing how the work will be done and the management systems that will be used to ensure good performance. The current practice allows offerors that can write outstanding proposals, but may not perform accordingly, to continue to ‘win’ contracts when other competing offerors have significantly better performance records, and therefore, offer a higher probability of meeting the contract requirements.”

The goal of procurement is to meet the contract requirements as cost effectively as possible, not merely reward excellence in writing.

Satisfaction assessment is based in part on prior experience with a vendor by the agency in question, and on the past experience of the vendor in serving other agencies with similar procurement requirements. Typical questions on which vendors are rated include:

  • conformance to specifications and to standards of good workmanship;
  • containment and forecasting of costs;
  • adherence to contract schedules, including administrative performance;
  • history of reasonable and cooperative behavior and business-like concern for the interests of the customer; and
  • service to the end user of the product or service.

These topics address some of the key issues for technology buyers. However, what buyers need to know in order to make intelligent choices is less than what vendors need to know to manage their business effectively.

Developing the measurement system

Unisys U.S. Federal Government Group actually began monitoring customer satisfaction in 1984. This was a natural outgrowth of a culture of focusing on satisfying customers and of a corporate “listening” program, which sensitized employees and customers to the benefits of listening to one another.

For almost 13 years, the Group conducted an annual assessment by mail, using questionnaires sent to customers or distributed by relationship managers. The questionnaires were short and concise, and the data were used to create a variety of reports for multiple levels of management.

In 1998, the satisfaction measurement team decided to move away from the mail survey format. The issues with mail included:

  • A declining response rate. The response rate had eroded over time, dropping to below 40 percent. At this level, management was concerned about response bias: were the ratings of those responding to the survey truly representative of the customer base as a whole?
  • Length of the survey. With a different format, would respondents accept a larger number of questions that might provide more specific guidance to the Group regarding satisfaction issues?

Ultimately, management wanted a survey format that would accomplish the following:

1. Improve the response rate to acceptable levels.

2. Be minimally intrusive and burdensome on respondents.

3. Capture diagnostic information on issues that the survey might identify.

4. Provide information on a timely basis.

5. Allow flexibility in data gathering, to accommodate the rules of federal procurement.

6. Ensure continued buy-in to data collection and results by management and field personnel.

Let’s discuss each of these issues in turn.

Response bias: mail versus phone

The Unisys U.S. Federal Government Group considered and ultimately moved to a telephone medium for data collection. This change had two immediate results:

  • First, the response rate for the survey increased.

While honoraria are occasionally used in commercial research to boost response rates, this is not viable with federal respondents, given federal rules about acceptance of gifts from vendors. The change in media allowed us to boost the response rate without this device.

  • Second, while customer ratings remain good, average ratings actually declined slightly with the expansion of the response base. Managers whose compensation was affected by this change were not terribly pleased. However, it was ultimately accepted that the larger response base provided a more realistic assessment of customer attitudes and experiences than was achieved with the mail questionnaires.

Analysis indicates that the change in average scores was tied to increased participation in the survey and unrelated to survey media. This also validated Unisys’ concern about whether the sample was representative.

Reducing burden on respondents: multiple data collection media

The next step toward improving response rates was to allow respondents a choice of how to complete the survey. Some respondents simply don’t have time for phone interviews, or are uncomfortable with the phone format. Under the multiple media approach:

  • Targeted respondents receive a letter at the beginning of each field period reminding them about the survey, and requesting their cooperation. This letter is sent on Unisys letterhead and over the signature of a senior executive.

  • Within the letter, respondents are told to expect a call from an ICR interviewer working on behalf of the Unisys U.S. Federal Government Group. If they wish, they can complete a Web version of the survey, and avoid the call.

Management has received comments from various agencies thanking them for making this Web option available. Some respondents prefer to see the questions in writing, while others like the flexibility to complete the survey on their own schedule. Historically, about 20 percent of targeted respondents use the Web option.

From a methodological standpoint, the combination of phone and Web interviewing works because:

1. There is no impact on the representativeness of the data. The Web respondents are part of the phone sample; anyone not completing the survey on the Web (including those completing only a portion of the interview) is contacted by phone. This is not a Web broadcast methodology. Respondent access to the Web survey is carefully monitored; each respondent receives a unique password, so that we can track who accesses the Web questionnaire and how much of the survey they complete. However, the relatively low rate of use of the Web option suggests that we would not be able to achieve a representative sample using only Web interviewing, even though all of our targeted respondents are Web users. Phone remains an integral element of data collection.

2. Use of multiple media is facilitated by complementary software for Web and phone interviewing. There are several interviewing systems in the market that have both phone and Web modules and that store data in identical structures, allowing data to be combined readily for analysis (a minimal merging sketch appears after this list).

3. There is no evidence of any bias in rating scores between interviews completed by phone and on the Web. Of course, we are not asking unaided awareness questions, which are affected by choice of media. What we do find, however, is that open-ended responses tend to be truncated on the Web; respondents tend to type less than they would say orally to an interviewer. This truncation sometimes requires callbacks to clarify responses.

4. On the positive side, the diversion of interviews from the phone to the Web serves to reduce the cost of data collection, for both ICR and Unisys.

5. Finally, and perhaps most importantly, respondents like having the option of how to participate, and have told us about it.
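The sketch below illustrates points 1 and 2 above: Web and phone records keyed to the same respondent list are combined, and anyone with no record or only a partial Web record is routed back to the phone sample. The field names and structure are illustrative assumptions, not a description of ICR's actual systems.

    # Illustrative sketch of combining Web and phone records for one wave.
    # Field names and structure are assumptions, not ICR's actual systems.
    REQUIRED_QUESTIONS = {"q_overall", "q_schedule", "q_cost", "q_workmanship"}

    def is_complete(record):
        """A record counts as complete only if every required rating was answered."""
        return REQUIRED_QUESTIONS.issubset(record.get("answers", {}))

    def combine_wave(sample, web, phone):
        """Merge Web and phone completes; return (completed, phone follow-ups).

        `sample` is the validated respondent list for the wave; `web` and
        `phone` map each respondent's unique ID (which doubles as the Web
        password) to that respondent's record.
        """
        completed, follow_up = [], []
        for person in sample:
            rid = person["respondent_id"]
            record = web.get(rid) or phone.get(rid)
            if record and is_complete(record):
                completed.append({**person, **record})
            else:
                follow_up.append(person)  # not started, or only partial -> call
        return completed, follow_up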

ICR receives occasional requests for mail or fax versions of the survey. We honor these requests, although we do not advertise this option to respondents. In practice, we have found that most requests for mail versions of surveys (in this and in business decision-maker surveys) do not result in completed interviews, and we have to follow up with these respondents by phone. Where we do receive completed mail forms, we compare these results to surveys previously completed by the same respondent, and to data from other respondents in the same agency, to check for possible discrepancies resulting from the mail format. To date, we have not seen any results suggesting a format-based bias.

Capturing diagnostic information

Conversion of the survey from mail to the phone/Web format allowed us to increase the number of questions in the interview, and to add selected probing open-ended questions. This change has allowed management to obtain information on a wider array of issues that could potentially impact customer satisfaction and working relationships.

Open-ended questions actually contribute to data collection. Rating scores may mask concerns with issues that are only tangentially related to the question being asked. A judiciously placed open-end allows the respondent the opportunity to explain an answer that may not quite fit the question, but reflects something important that the respondent wants to communicate.

In analysis, we of course conduct the relatively standard key driver analysis on quantitative data. However, we also list verbatim responses to open-ended questions by respondent, and scan those. The purpose of reading verbatims by respondent is to understand the “story” that the customer is trying to convey in the interview, and look for patterns of responses that might indicate additional issues of concern to multiple customers.
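For readers curious about the mechanics, the sketch below shows one common way to run a key driver analysis: regress the overall satisfaction rating on the attribute ratings and rank attributes by coefficient size. The attribute names and data are invented for illustration; the model actually used in the program may differ.

    # Key driver sketch: regress overall satisfaction on attribute ratings
    # and rank attributes by coefficient size. Attributes and data are
    # invented for illustration; the production model may differ.
    import numpy as np

    attributes = ["schedule", "cost_control", "workmanship", "responsiveness"]

    # Rows = respondents, columns = attribute ratings on the five-point scale.
    X = np.array([
        [4, 3, 5, 4],
        [5, 4, 4, 5],
        [3, 3, 4, 3],
        [2, 2, 3, 3],
        [5, 5, 5, 4],
        [4, 4, 3, 4],
    ], dtype=float)
    overall = np.array([4, 5, 3, 2, 5, 4], dtype=float)

    # Ordinary least squares with an intercept term.
    design = np.column_stack([np.ones(len(X)), X])
    coefs, *_ = np.linalg.lstsq(design, overall, rcond=None)

    for name, weight in sorted(zip(attributes, coefs[1:]), key=lambda kv: -abs(kv[1])):
        print(f"{name:15s} {weight:+.2f}")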

The emphasis placed on open-ends affects interviewer training and even the assignment of interviewers to this study by ICR. If we are to capture meaningful and detailed information, it is essential that interviewers have a baseline understanding of the Unisys U.S. Federal Government Group organization, products, services, and solutions so that they can understand what customers say to them and record this information accurately. We ensure this by providing interviewers with training that includes briefing materials on company offerings, and by having the same interviewers and supervisors work on this study each quarter.

Providing timely information

Information on customer attitudes needs to be disseminated quickly to the managers who can take appropriate actions. This means:

  • Issues requiring immediate action are identified during fieldwork, and this information is expedited to the attention of the research team and the client. This is standard “action item reporting.” However, for this to be done successfully,

    - Interviewers have to be trained to recognize what these items are.

    - A report format and procedure has to be in place to take this information from the phone room to the research team.

    - The research team needs to understand the amount of detail the client requires for action, and to ensure that the action report contains this required information.

    - The client should have a central clearinghouse for those reports, which can direct the report to the appropriate manager for action. The function should include follow-up to ensure that actions are taken on a timely basis, and that the customer is satisfied to the extent possible with the response.

Table 1

It is not unusual for customers who have had problems to be more satisfied than those who have not, if the customer is pleased by the timeliness and effort put into the response!

  • The format for providing both action reports and other survey information has to be one which line managers (non-researchers) can grasp readily and use.

For standard survey reporting, Unisys U.S. Federal Government Group has gone to a color-coded spreadsheet format, an example of which is shown above. We’ve defined target levels of satisfaction (on the five-point rating scale that we use), as well as acceptable and unacceptable scores, in terms of five colors. For any account or group of accounts, managers can easily see what needs improvement.

Note that the five-point scoring system is not ICR’s preference. It was grandfathered into the program from the firm handling the mail survey, and it’s what managers are familiar with.
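The sketch below shows the kind of banding logic involved, assuming the five-point scale. The thresholds and color names are illustrative assumptions, not the Group's actual definitions.

    # Sketch of five-color score banding on the five-point scale. Thresholds
    # and color names are assumptions, not the Group's actual definitions.
    def color_band(mean_score):
        """Map a mean rating (1-5) to one of five reporting colors."""
        if mean_score >= 4.5:
            return "dark green"   # exceeds target
        if mean_score >= 4.0:
            return "green"        # at target
        if mean_score >= 3.5:
            return "yellow"       # acceptable, but watch
        if mean_score >= 3.0:
            return "orange"       # below acceptable
        return "red"              # unacceptable; needs action

    account_means = {"Agency A": 4.6, "Agency B": 3.2, "Agency C": 4.1}
    for account, mean in account_means.items():
        print(f"{account}: {mean:.1f} -> {color_band(mean)}")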

We’ve moved the measurement program from a once-per-year inquiry to quarterly interviewing. Quarterly contact with customers:

  • allows us to detect some issues faster than we would on an annual interview schedule, and

  • makes it easier for managers to do the internal follow-up on issues, by having fewer accounts to deal with at any one time.

We limit interviews with any one respondent to once per year. Since we are in touch with multiple respondents from each agency, we try to ensure that each agency is represented in every wave of interviewing.

We use a moving average method of reporting aggregate performance results. The small number of customers means that the inclusion or exclusion of a specific agency in any one quarter can have a dramatic impact on aggregate results; use of multi-quarter moving averages circumvents this problem.
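A minimal sketch of the rolling-average calculation follows; the four-quarter window and the scores shown are illustrative.

    # Multi-quarter moving average of aggregate satisfaction scores.
    # The four-quarter window and the scores are illustrative.
    def moving_average(quarterly_means, window=4):
        """Rolling mean over the trailing `window` quarters (shorter at the start)."""
        out = []
        for i in range(len(quarterly_means)):
            span = quarterly_means[max(0, i - window + 1): i + 1]
            out.append(sum(span) / len(span))
        return out

    quarterly = [4.1, 3.8, 4.3, 4.0, 4.2, 3.9]
    print([round(x, 2) for x in moving_average(quarterly)])
    # Single-quarter swings are damped in the rolled-up series.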

Flexible data collection

During procurement decision-making, there is a formal “blackout period” in which contacts between vendors and decision-makers are very tightly controlled. A survey contact during a blackout period could be seen as an attempt to influence the procurement decision, and is not permissible.

Thus it is essential to be able to schedule data collection around these periods. However, it is not necessarily possible to predict in advance when these blackouts will occur.

Each quarter, we review with the field the agencies to be contacted in that wave of interviewing. We ask the field to identify procurement activity; respondents from these agencies then are deferred to a later wave of interviewing that will not conflict with the procurement.

Management and field buy-in into the measurement process

Utilization, not methodology, is the ultimate measure of the value of any research effort. For this program to be successful, it is essential that managers use the results, and that field personnel accept and act on the findings of the research.

We ensure acceptance and utilization of the results of this program through the following steps:

  • We meet individually with senior managers to review the content of the questionnaire and obtain their comments at the beginning of each year. While most questions remain the same, it is essential that the survey stay current with management thinking and with new technologies, so we allow some change in content each year. Managers are busy; the best way to ensure a thoughtful review of the content is to schedule a time to talk with them about it on a face-to-face basis.
  • Unisys U.S. Federal Government Group leadership is personally committed to the satisfaction measurement program, and holds managers accountable for specific plans to improve issues identified in the research. We hold an annual review of the program, in which the Unisys U.S. Federal Government Group and ICR research team present a 12-month roll-up of research results to the senior management team. Following the presentation of findings, this meeting includes an immediate discussion of actions to be taken by each organization leader.
  • Management uses data from the program as input into the compensation plan.
  • Program managers validate whom we interview in each agency before we conduct the survey each quarter. This list goes through several reviews to ensure that it is accurate, comprehensive, and unbiased. It is essential that the survey be administered to the appropriate respondents within each agency; based on experience, we cannot be sure this will be true using a blind screening process. If vendor management has input into what is asked, and the field validates who is asked, then it is difficult to argue with the results.

Is there a risk in having the field influence who responds to the survey? Certainly. However, as we are interviewing the same agencies and in many cases the same people every year, the satisfaction measurement program team will notice omissions and take appropriate action to explain or correct them. This is possible because of the limited size of the target population.

Consistent with asking the field for input about respondents, we do not allow interviewers to pursue referrals if a particular respondent is no longer appropriate at the time the interview is conducted. In these cases, the respondent information is referred back to the field personnel for correction, and the respondent is deferred to a later wave of interviewing.

Where do we go from here?

Our continuing concern is enhancing the involvement of customers with the satisfaction program. Customers need to see the program as a valuable way to improve their relationship with a vendor, and gain better service from the vendor. We see the issue of involvement as essential to maintaining an adequate response rate for the survey over the long term, and also as essential to maintaining the quality and level of detail of the information the survey produces.

One way to nurture involvement by customers is to ensure that the satisfaction program involves the two-way communication of information. It’s essential for customers to know that the information they are contributing is being used; the only way they know this is if we tell them.

Notes

1 Office of Federal Procurement Policy, “Acquisition Best Practices,” interim edition (May 1995).