For insights pros, the more things don’t change, the more they stay the same. The wording is tortured but you get the point: As reflected in this iteration of our Corporate Researcher Report work life survey, status quo pretty much sums things up. Whether it’s salary and compensation (see accompanying article) or MR budgets, the responses have been very consistent since we began fielding the annual survey in 2014. For example, nearly 45 percent said their MR budget stayed the same as in 2016 (23 percent reported a decrease and 32 percent reported an increase).

But while budgets and many other aspects of the job stay the same, the pressure to change is ever-present, whether it’s calls to add some innovation to the methods and processes used to gather and analyze data (or risk obsolescence, in the view of some), to alter the way the insights function is perceived internally or to broaden the types of projects MR input is sought for.

To find out more about change’s role in the lives of our readers, we posed a number of questions in this year’s survey that centered on changes planned for the coming year, the adoption of new methods, and assessments of and reactions to new and traditional techniques. We also delved into data-quality issues, trying to define what poor-quality data looks like to researchers and to learn more about what they do when research results are less than optimal. We tried to end with a bang with an open-end that asked them to vent about the areas of marketing research that most frustrate them.

More outsourcing, less outsourcing

To our query about the biggest change their organizations will make regarding marketing research over the next year, a number of themes received multiple mentions, including: corporate and departmental restructuring; more outsourcing; less outsourcing (i.e., bringing processes in-house); and increased automation.

No doubt reorganization is a constant across the insights function, as companies go through mergers and acquisitions, leaders cycle in and out or mandates come down from on high. Among our survey respondents, there were good aspects of reorganizing and some not-so-good ones.

The good:

“[The plan is to] bring in innovative research methods that are not currently being used or even being talked about in the organization.”  

“There’s certainly more of a focus on research and insights development. It seems like we’re finally breaking through and decisions are being made based on research.”  

“We will likely expand to provide research services to additional parts of the company as the work we do is very high-quality and we are well thought-of within the company.”  

“Merge knowledge department with the communications department in order to optimize resources regarding marketing research and data analytics.”  

“We are told that we will finally have a research agenda versus ‘do what you want’ or ‘do what people ask you to do.’” 

The not-so-good:

“Team is not being led by marketing research professional. Instead it’s being led by a person with a consulting background. I am concerned that the organization has devalued the expertise and experience our team brings because they do not understand it. I see this happening across the company.”  

“Our company is moving several studies away from traditional research firms to [a third-party customer experience management system] as execs think they can get real-time, multimode methodology with a closed-loop process from them. Huge waste of money and not likely to deliver half of what’s been promised. Decision made outside of Research. Will be built and managed outside of Research as we are not respected. VERY disappointing. Other huge change is bringing a lot of our other research in-house using Qualtrics tool. Research may implode by 2018.” 

As usual, count on the researchers for some good humor regarding their organizations’ MR changes:

“Maybe actually conduct some [research]?”

“Hiring me! ;)”

“Replacing me.” 

“We have a new CEO starting in late summer, so your guess is as good as mine.”  

“The cheapening of qualitative research by untrained stooges.” 

Restructuring is also changing researchers’ roles:

“[We are] moving away from overreliance on secondary market research and beginning to do more primary market research now that people, budget, resources are in place.”  

“We have a new CEO and a new CMO who are very focused on innovation and understanding the company. This has put an emphasis on market research. Having said that, we are still very conservative in our spending and don’t ever really push the boundaries when it comes to new techniques or new ideas.”  

“Starting to move towards a more integrated insights function. Looking at how traditional research can do more with big data/data science/analytics as well as listening research.”  

“Trying to maintain existing budget and staffing in a corporate environment that has radically shifted to value and rewards slashing expenses, yet its demands of the in-house marketing research function exponentially increase.” 

“Heavy use of outside strategic consultants this year who helped direct several market research studies that we then developed and helped deploy but they reported on. Could have done it all in-house but top mgmt not aware of internal capabilities or just wanted to hear consultant’s views.” 

“I simply do not know. We are currently being driven more by management’s opinions and less by research/data.” 

Outsourcing on the rise?

Outsourcing was a popular topic several years ago, largely as India came to the fore as a source of cheap, tech-enabled labor and brainpower, but it seemed to fade from the larger conversation across the industry. It may be on the rise again. Many respondents mentioned outsourcing as one of their changes of note, either in the form of work being moved overseas or of projects that were previously conducted in-house being outsourced to vendors:

“Offshoring. Hiring a team in India to be an in-house market research firm. Our company has laid off colleagues and is moving more work offshore. We are working with fewer research suppliers and have to cut our marketing budget. One way we are cutting the $ is to have our India colleagues do the research from beginning to end using software. Because this team works for our company we have to take calls at 6 a.m. to 8 a.m. to interact with the India team. More headcount cuts are expected in August. It appears that skilled researchers are no longer valued.”  

“Eliminating the internal market research position and hiring an outside agency to conduct all research.” 

“Losing headcount, moving more work to vendors.” 

“I don’t know if it’ll be within the next year but eventually they will do away with our internal research group and just outsource directly to other research vendors.” 

At the same time, while all the outsourcing might be a hopeful sign for vendors, an almost equal number of readers mentioned taking MR processes in-house:

“Eliminate additional third-party market research vendors and bring essentially all research in-house.”  

“Take more work in-house (a LOT more), much to my chagrin.” 

Automation was also mentioned often as a change, largely as a way to preserve already-stretched budgets and work schedules by using computers to handle some of the mundane, time-intensive tasks.

“Automating and streamlining processes and production to free analyst time for analyses.” 

“Working toward automating the sampling process to make in-house sample pulls easier for tracking studies.”  

“Automate as much as possible to avoid resources strain.” 


Twenty-eight percent of respondents said their organizations currently use an automation platform. When respondents were asked to choose from a list of automation platform capabilities that they feel have the most impact on gathering and producing insights, time-related factors such as rapid survey deployment and real-time reporting were seen as contributing the most.

Traditional vs. newer techniques

We asked about the perceived effectiveness of a group of traditional techniques in one question and a group of newer approaches in another. Quallies, you’ll be happy to hear that the in-person techniques earned high marks.

The good old focus group, that much-maligned and oft-declared-dead stalwart, was said by a combined 83 percent to be effective or very effective, tying online surveys (at 65 percent each) for the highest percentage of “effective” nods.

Despite grumbles about sampling elsewhere in their responses, readers gave online surveys the highest combined percentages of effective/very effective, at 92 percent. In-person interviewing (a combined 88 percent effective/very effective) and in-person ethnography (72 percent combined) also did well.

The strong showing of qual feels like confirmation of a viewpoint expressed many times at industry events and elsewhere: that qualitative approaches serve a crucial role as context-providers in this era of data proliferation, giving depth and nuance to the stories told by data from disparate sources such as sales, social media or tracking studies.

Even in the face of opinions that behavioral data is more accurate or dependable for making marketing decisions because it’s based on what people actually did rather than what they say they might do, the act of talking with, listening to and observing customers or potential customers via qual methods still has value. It lets you see the faces, hear the language and watch the packaging being fumbled with.

Many of the less-established techniques, such as mobile ethnography, gamification, crowdsourcing, prediction markets, neuromarketing and other nonconscious methods, earned “not sure” percentages of over 50 percent when respondents were asked about their respective levels of effectiveness.

Online qualitative/focus groups earned a combined effective/very effective rating of 56 percent, with 11 percent saying online qual is very effective. Mobile approaches such as mobile qual (combined 44 percent effective/very effective), mobile-specific surveys (combined 63 percent) and mobile ethnography (combined 41 percent) all acquitted themselves well, as did the non-mobile-based methods of text analytics (combined 54 percent) and social media research (combined 44 percent).

These numbers, coupled with responses to the survey’s various open-ends, show that most of these approaches are widely seen (and used) as complementary tools to the more-established data-gathering methods rather than as the replacements some of their proponents have long touted.

Level of adoption

On the topic of newer methods, we asked readers to assess their level of adoption of new techniques and also tell us more about how they choose new methods to pilot-test.

Putting a timeline around the pace of adoption of new tools is rather difficult but we settled on a scale that went from “innovator” on the early side to “slow to adopt (laggard)” on the other end, with “early adopter,” “among the early majority” and “among the late majority” sandwiched in between. Not surprisingly, the bleeding edge is not familiar territory for researchers, with only a combined 12 percent putting themselves in the innovator and early-adopter camps. Instead, “among the late majority” is a more comfortable realm: 43 percent chose that descriptor for their speed of adoption, flanked by “among the early majority” at 26 percent on one side and “slow to adopt (laggard)” on the other.

Happily, data quality was the runaway winner as the factor that is most important when choosing a new methodology, with 70 percent citing it as extremely important, followed by audience specificity, cost and question flexibility.

We used an open-end to probe how those and other factors interrelate during the choice process. Oft-mentioned factors were budget (of course), seeing presentations at conferences, reading Quirk’s (thank YOU!), case studies from other organizations and vendor recommendations. Two of the more interesting influences were word-of-mouth and the ever-popular gut feel.

Though there were many, many wonderful responses, this one perhaps sums things up best:

“ANY methodology needs to be appropriate for the objectives and needs of a project. We don’t run out to try stuff just because it’s new (or because someone SAYS it will change the world in a blog post). We do projects because we have a business need for certain information. We look for the RIGHT methodology to get us the data we need. If it’s ‘new’ that’s great; if not, that’s great too. I don’t have the luxury of being able to test a methodology before applying it. If I pay for a project it has to work; I have people waiting for these answers. My job is to do what’s right for my company, not to spend money on and try out every ‘new’ company/methodology that comes around.” 

When quality is lacking

With quality as one of the stated focuses of this year’s report, we wanted to find out what researchers do when the results of their research are not of the quality they had hoped for.

Some answers took a lighthearted (if rueful) tone:

“We call it qualitative research!!!” 

“Scramble!” 

“I would pack my boxes and get another job.” 

“Cry.” 

“I panic, cry, cower in a corner and pray for God’s mercy. Really? I communicate the insights so we can make the best business decisions possible and then I move on.” 

The more practical responses spanned the spectrum from scrapping the data and determining who or what to blame to making the best of a bad situation, as the commenter above expressed. A commonly cited approach was to label the questionable data as “directional” and not use it as the final basis for decision-making, while still trying to extract some value from what’s there:

“Figure out a way to salvage what is there and move on. Bank the knowledge so that things will go better the next time.”  

“This particular situation arose recently and we chose to go back into the field to expand the number of respondents.” 

“You adapt. You realize that the lack of quality means that your inferences are on a weaker foundation but I believe that there is always something worthwhile one can get out of research results.” 

“Make the best of what you DID receive. There are always aha moments in all data.” 

“[The questionable data is] often from trying something new. For example, we might be looking for a way to predict customer behavior in a specific category. We try something new but see that the survey questions were interpreted incorrectly or the results don’t make sense with what we already know. We tried but in those cases, we have to cut our losses. Doesn’t mean it wasn’t worth a try, though.” 

“Spend more time with the supplier on the report. Data quality is rarely the issue; poor-quality results to me means poor-quality analysis and reporting. If results are contrary to what someone was hoping to hear, that does not make them poor.” 

And sometimes it’s only the researchers who care about quality:

“Unfortunately, oftentimes executives in the company just want the number and don’t care about our explanation of why the quality isn’t there. Thus the numbers get used and reused while the research team winces every time we see it.” 

Some, though, said that the dangers of bad data were too great to trifle with:

“Depends on the situation. Sometimes field additional research but if the quality isn’t good, I won’t report it. Bad data is worse than no data. And I will not compromise my credibility.”

A chance to vent

We ended the survey by giving readers a chance to vent on the areas of marketing research that they find most frustrating. Readers delivered a rich lode of commentary that we don’t have space to fully explore here but that will certainly be mined for articles in the coming months.

Vendors, have your ears been burning? Quirk’s readers had a lot to say about the suppliers they work with. As in years past, there were many mentions of the problem of vendors wooing clients with the promise of executive expertise and involvement, only to have projects completed by less-experienced workers.

“I’m frustrated that every research company says ‘We will have senior-level people working on this throughout the project’ but it always ends up [being the] low man on the chain who writes your questionnaires and reports – that analyst who hasn’t been included in any of the project development, strategy or clarification calls/meetings we have been having along the way but is expected to understand all the needs and nuances of the project through second- or third-hand feedback. How does this make any sense when everyone in the industry is pushing the idea we need to be better at providing ‘consultative’ engagements?”

“Sample providers. These companies used to hire knowledgeable researchers who understood the ins and outs of conducting research. Currently, they are hiring salespeople who don’t understand data quality, sampling or weighting. The push is to sell you more, with worse quality.” 

The quality of vendor analysis and reporting was also a source of frustration for readers:

“Very rarely does a research company actually connect the dots. I had a vendor last year that created a 100+ slide deck of data charts. But I had to keep asking them to put it all together – what does it say about this segment of customers buying Product X via Channel Y? Nobody looks at that analysis unless I specifically ask, yet everybody claims to be selling insight and not data.”  

“We’re pretty frustrated with research vendors nowadays. It seems like most struggle with the basics, which forces us to spend time fulfilling their role (checking data, reworking presentations, etc.). In turn, it makes it difficult for us to see the forest when we’re dealing with the trees all the time.”  

“I find working with research vendors to be very frustrating. Most promise they can do just about everything; work is often poor-quality; they don’t understand our industry. And most of the time I’ll spend more time managing them and their work than it would have taken to do the work myself. Research vendors are great for corporate clients with big budgets who don’t know what bad research looks like.” 

Away from vendors, familiar thorns such as DIY research, the demand to innovate for innovation’s sake, budget problems, procurement and a lack of respect for the process and value of marketing research were commonly sounded themes:

“More interest in what’s new and different (the shiny new toy) – and less in the tried-and-true. Continuing divide between qualitative and quantitative researchers – I’m good at both and appreciate both but many see their role as either one or the other. (Not) being respected and valued within the organization – lots of talk about wanting ‘insights’ but not valuing the research function with budget, staffing, resources. Non-researchers/ generalists as managers of the research function – ridiculous! Not enough research-based decision-making after all these years. Vendors scrambling to stay in business and make money – business development taking precedence over client relationships and quality research.”  

“I’m most frustrated with the lack of support (especially funding) of research. In a lot of areas, it’s still viewed as a luxury compared to something you need to do. It’s better than when I first started in the industry but it has a ways to go.”  

“Frustration is primarily within my own organization. Impossible hoops to go through to onboard vendors. A change of address by a supplier requires a mountain of forms. Almost impossible to get vendors approved to do patient research because of HIPAA concerns.”  

“The procurement process tends to slow down our ability to conduct research quickly and with a myriad of vendors and capabilities.”  

“Learning new methods. I hear all this talk about predictive analytics but can’t find education on how to do it. All I see is software that can do it for us. But I want to learn the ‘how’ first, especially since we don’t have a budget to buy the new software.”  

“Well, this survey deserves honorable mention. Our industry’s reliance on surveys to solve all issues [is frustrating].” 


And pretty much every technique out there earned at least one expression of disdain or tried patience:

“Qual practitioners peddling their methodology as cure-all instead of educating clients when quantitative methodologies are the best approach.”  

“Analysis of social media ... a lot of hype, a lot of promises. Five years ago people were predicting survey research would die because of analysis of social [media]. But to be honest, I see very little coming out of social media analysis other than ‘neat to know,’ very little that is actionable.”  

“The quality and health of panels/sample sources. We know there are issues, we just don’t know how bad it is.”  

“Some of the newer methodologies are simply too complex or out of reach to be done internally (e.g., biometrics, neurocognitive, etc.). They need to be made more affordable for client researchers.” 

And there were multiple mentions of B2B research-related problems, specifically with sampling:

“Being able to reach our target audiences via e-mail for online surveys. We are B2B and we have extremely strict regulations on who we will e-mail. Additionally, our target respondent has an extremely low incidence. All this makes it very hard to conduct online research.” 

“B2B online sample quality! If I sold a product that dodgy, I’d be out of business.”  

“New methodologies tend to be targeted toward consumer (B2C) research. We need real innovation designed specifically for the B2B space taking into account unique industry vertical characteristics.”  

“Online survey – it’s so hard to tell if your data is quality or not. And it’s extremely difficult to reach B2B audiences (which is all we do). If we can find B2B it’s often crazy expensive for blind studies or we use all of our existing customers, which biases the results.” 


Open to growing

Despite researchers typing out a collective 12,000 or so words in response to our question on frustration, the mood of those who completed the 2017 survey seems similar to that of previous years. (One person said, “I’m not frustrated about any areas of marketing research.” At least someone out there is happy!) There are worries and hurdles aplenty – from sample quality and declining response rates to the ever-present encroachment by non-researchers – but on balance these Quirk’s readers are confident in their abilities, still committed to fighting for quality and open to growing as the demands placed upon them change.

METHODOLOGY  

The Corporate Researcher Report work life survey was conducted online from June 5 to June 23, 2017, among pre-qualified corporate marketing research (client-side) subscribers of Quirk’s. In total we received 861 usable qualified responses. A margin of error of plus or minus 3.3 percentage points at the 95 percent confidence level was achieved. (Not all respondents answered all questions.)
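For readers who want to check that interval themselves, here is a minimal sketch (our own illustration, not part of the survey write-up) of the standard worst-case margin-of-error calculation for a simple random sample, assuming the conventional p = 0.5 and z = 1.96 for 95 percent confidence:

import math

# Worst-case margin of error for a proportion from a simple random sample:
# z * sqrt(p * (1 - p) / n); p = 0.5 maximizes the variance term.
def margin_of_error(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

# 861 usable responses at 95 percent confidence (z = 1.96)
print(round(margin_of_error(861) * 100, 1))  # 3.3 percentage points

With n = 861, the calculation returns 3.3, matching the stated interval.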