Editor’s note: Katrina Lerman is senior researcher and corporate videographer with Boston research firm Communispace.

I have seen the future of surveys. And it’s glorious.

But man, do we have a way to go.

When a recent project for one of Communispace’s hospitality clients involved a request for a slightly younger sample to complement their community audience, our consultants immediately thought of Google Consumer Surveys. The low cost and fast turnaround meant we could provide added value without missing a beat. Given my familiarity with the tool, I was asked to advise the team on the modifications needed to run the existing survey through Google’s “survey lite” platform.

At first glance, the survey didn’t look so bad: 15 questions, mostly single-select voting. (It’s worth noting that, while 15 questions might sound like a lot to someone outside the industry, Communispace, like most research partners, routinely receives surveys twice this length from clients. Knowing that the future of customer engagement involves moving away from this model, we encourage shorter, more engaging questionnaires. But we also must operate under the realities of our clients’ needs, which, for the most part, are still tied to legacy models.)

From past experience, I had to recommend against including detailed open-text questions, as Google’s respondents are not generally willing to put the time or effort into thoughtful answers (and certainly nothing as detailed as we have come to expect from community members). But I told the team the voting questions would be no problem; we agreed on screening criteria and sample size, and I went off to program the survey.

An hour later, I was eating my words. Or, more precisely, I was deleting them. A lot of them. To my astonishment, not a single question could be programmed into Google Surveys as originally written. The flowery text soared past Google’s character limits on questions (175 max; 125 “recommended”) and answer options (44 max; 36 “recommended”). Even knowing there were fairly strict limits, I was still shocked; the survey had failed my initial “eye test” – badly.
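For teams trimming an existing questionnaire to fit, a pre-flight length check against those limits can flag every offending question before any programming begins. Here is a minimal Python sketch; the constants come from the limits cited above, but the function name and structure are illustrative, not part of Google’s tooling:

```python
# Character limits described for Google Consumer Surveys:
# questions: 175 max / 125 recommended; answers: 44 max / 36 recommended.
QUESTION_MAX, QUESTION_RECOMMENDED = 175, 125
ANSWER_MAX, ANSWER_RECOMMENDED = 44, 36

def check_question(text, answers):
    """Return a list of warnings for one question and its answer options."""
    issues = []
    if len(text) > QUESTION_MAX:
        issues.append(f"question exceeds hard limit ({len(text)}/{QUESTION_MAX})")
    elif len(text) > QUESTION_RECOMMENDED:
        issues.append(f"question exceeds recommended length "
                      f"({len(text)}/{QUESTION_RECOMMENDED})")
    for a in answers:
        if len(a) > ANSWER_MAX:
            issues.append(f"answer exceeds hard limit ({len(a)}/{ANSWER_MAX})")
        elif len(a) > ANSWER_RECOMMENDED:
            issues.append(f"answer exceeds recommended length "
                          f"({len(a)}/{ANSWER_RECOMMENDED})")
    return issues
```

Running the original 15 questions through a check like this would have surfaced, in seconds, what took me an evening of deleting to discover.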

Gone was the conversational introduction that kicks off most of our surveys. Gone were the long “e.g.” lists and flourishing adjectives. By the end, my survey was a linguistic shell of its former self. But it fit into the software’s restrictions – and without compromising the fundamental purpose of each question.

The entire experience was eye-opening. While we typically think of strong, descriptive writing as a way to make surveys more engaging, this paradigm does not necessarily hold up in a mobile world. Twitter didn’t evolve alongside the mobile Web by accident. To a writer like me, the idea that, in the future, brevity will be valued over facility is both terrifying and disheartening.

But as my evening of editing showed me, working within character limits requires its own set of skills. I found myself replacing commas with slashes, using slang indiscriminately and eschewing prepositions altogether. Though I began my task annoyed and exasperated, by the end I felt exhilarated and liberated.

In a sense, language and length have become a luxury – dare I say, a crutch? – in the survey business. Not sure if you captured everything? Add another question! Does that wording make sense? Add a longer description! What if they forgot the previous question? Show it again!

It’s become clear that surveys need to evolve for a mobile world. Soon enough, if you’re not reaching respondents on mobile devices, you won’t be reaching them (a representative sample, at least) at all. And in the world of mobile, speed and substance trump style. That means shorter surveys filled with shorter questions – the right questions. Based on my experience, and that of others, we have a lot of work to do to get there.

Probably best to start now.