Customer Satisfaction Research Reporting Should Jibe with Operations Excellence
Monday September 6, 2010 by Charles Shillingburg
While there is a great deal of discussion surrounding the use and predictive power of the NPS (Net Promoter Score) versus more traditional measures (Satisfaction, Purchase Interest) and ways of displaying them (Top Box, Top Two Box, Top Two Box minus Bottom Two Box), little attention appears to be paid to whether any of these reporting methods has any real use when it comes to helping achieve improvements (beyond their Report Card function). In other words, can any of these measures be translated into operational improvements? Are they used for this purpose?
If one looks at the traditional reporting methodologies used by most companies, one sees that attributes (independent variables like Fixed Right First Time) and outcomes (dependent variables like Advocacy) are reported as single numbers (e.g., Top Box Score, Means). Oftentimes, scores above and below the average are reported with visual cues (Green=Above Average, Red=Below Average, or arrows up or down), indicating that the scores are either good or bad relative to the average.
The reporting communicates, "Do more of the things that earn above-average scores and less of the things that produce below-average scores, and you will achieve improvements." It implies that the results differ because the behaviors of individuals or manufacturing processes differ.
What if this assumption is spurious? What if the behaviors or processes are the same, but sometimes they result in above average scores and other times they result in below average scores? In fact, if you think about it, you know this is true (and much operational research has been done to prove this).
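A quick simulation makes the point concrete. In this hypothetical sketch (the team count, sample size, and 70% satisfaction rate are all assumptions chosen for illustration), twenty teams run exactly the same process, yet an above/below-average color coding still splits them into "good" and "bad" performers:

```python
import random

random.seed(42)

# Hypothetical illustration: 20 teams share the SAME underlying process --
# each surveyed customer is "satisfied" (Top Box) with probability 0.70.
TRUE_RATE = 0.70
N_SURVEYS = 50  # surveys per team per reporting period (assumed)

scores = [
    sum(random.random() < TRUE_RATE for _ in range(N_SURVEYS)) / N_SURVEYS
    for _ in range(20)
]

average = sum(scores) / len(scores)
above = sum(s > average for s in scores)
below = sum(s < average for s in scores)

# Roughly half the identical teams get flagged "green" (above average) and
# half "red" (below average), even though none behaved differently.
print(f"average Top Box score: {average:.2f}")
print(f"teams flagged green (above average): {above}")
print(f"teams flagged red   (below average): {below}")
```

Re-running with a different seed reshuffles which teams land in the red, which is exactly what a team doing the same things the same way would experience from period to period.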
If it's the process, and not differences in individual performance, that is to blame, then showing comparisons to the average (even if it is the average NPS or average Top Box Score) is counterproductive to achieving improvements. Why? Simply because those affected know they are doing the same things, the same ways, yet are being rewarded and reprimanded for employing the same processes in the same ways. Since they are doing the same things with differing results, they are left with two choices: dismiss the validity of the scores, or manipulate the results to achieve above-average scores so they are not reprimanded.
There is a way to improve the situation and make the results useful for achieving meaningful improvements. Reports (most easily, online reports) could incorporate Operations Excellence Control Charts to emphasize the underlying process' performance. Special cause situations could then be isolated for further analysis (rather than every case where an individual scored below average).
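As a sketch of what such a report could compute, here is a simple p-chart (an attribute control chart, a standard Operations Excellence tool) applied to monthly Top Box scores. The scores, sample size, and three-sigma limits are illustrative assumptions; real p-chart practice often recalculates limits after excluding confirmed special causes:

```python
import math

# Assumed data: ten months of Top Box proportions, each from n=100 surveys.
n = 100
monthly_top_box = [0.68, 0.72, 0.70, 0.66, 0.74, 0.69, 0.71, 0.50, 0.73, 0.70]

p_bar = sum(monthly_top_box) / len(monthly_top_box)  # centre line
sigma = math.sqrt(p_bar * (1 - p_bar) / n)           # std. error of a proportion
ucl = p_bar + 3 * sigma                              # upper control limit
lcl = p_bar - 3 * sigma                              # lower control limit

# Only points outside the control limits signal "special cause" variation
# worth investigating; everything inside is common cause noise.
special_causes = [(month, p) for month, p in enumerate(monthly_top_box, start=1)
                  if p > ucl or p < lcl]

print(f"centre line: {p_bar:.3f}, limits: [{lcl:.3f}, {ucl:.3f}]")
print("months needing investigation:", special_causes)
```

With these numbers only month 8 falls below the lower limit, so only that month is escalated; months like 4 and 5, although below or above average, stay within the limits and trigger no reprimand or reward.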
If Customer Satisfaction Research incorporates proven, effective Operations Excellence methods like Control Charts into its reporting methods and analysis, it will likely find a much more receptive audience among those being measured and have much more impact within organizations. It will help reduce manipulation of results, help build a more cooperative and collaborative environment for achieving the improvements in outcomes that organizations seek (Employee and Customer Satisfaction, Loyalty, Advocacy, Efficiency and Effectiveness) and move Customer Satisfaction results beyond being just a Report Card.