Editor’s note: Chris Benham is CMO of SurveyGizmo, a Boulder, Colo.-based software company.

Most market research begins with a manager posing a question: “Do we know…?” The question often launches an all-too-familiar process. A team is gathered, a survey is created, debated, revised, debated some more and finally fielded. Once the data is collected, it’s analyzed, discussed, debated, analyzed again and debated some more. Eventually, a report is created with pie charts and graphs and a meeting is scheduled to present the findings and analysis. Sometimes the original question is even answered. Sometimes. 

The reality is that survey results often end up in pie charts and dashboards and never actually drive the change the original question was devised to address. How do we know this? Well, because one of our managers recently posed the question, “Do we know if survey feedback actually drives change?” (And, yes, the question precipitated the launch of an all-too-familiar process that involved creating a survey, debating it and so on.)

The findings of the “Do surveys drive change?” survey were interesting – including the somewhat amusing fact that more than 15% of respondents claimed to use our fictitious BAPU score to measure overall improvement. What was perhaps most interesting was the apparent disconnect between the people who create, manage and interpret surveys and the senior managers who are responsible for implementing the change they highlight. 

When we asked survey creators whether their work drove improvement, 48.9% said “some improvement,” while only 17% said “significant improvement.” Conversely, only 18% of VP/C-level managers believed their surveys drove merely “some improvement,” while 51.2% claimed they saw “significant improvement” as a result of their survey data. Who’s right? The people who create the surveys or the people for whom the surveys are created? In the end, the answer is determine...