Dipping a toe in social media analysis

Editor’s note: Tim Macer is managing director, and Sheila Wilson is an associate, at meaning ltd., the U.K.-based research software consultancy that carried out, on behalf of Globalpark, the study on which this article is based.

Results from the seventh annual Globalpark Market Research Software Survey, designed and conducted by meaning ltd., show a research industry with considerable diversity of practice in the technology used and in the extent to which technology leads innovation. Often, it is the larger firms that are more daring in the way they apply technology or embrace new technologies. We reveal that social media research is still in its infancy, with more firms considering it or experimenting with it than actually raising revenue from it. Yet, for an approach that has its roots in Web technology, the methods researchers apply in conducting and analyzing social media research are still pretty low-tech.

We also seem to be detecting a peak, or something close to it, in the growth of online research, as, for the first time, growth predictions focus heavily on mobile research. And we reveal some surprises about the extent to which research companies are turning to non-traditional data sources, how rigorous research companies are about testing online and highly interactive surveys and the directions firms are taking with respect to the distribution of research data and findings to their clients.

The survey was carried out among senior executives or business owners of market research companies around the world, in the final quarter of 2010. It consisted of an online questionnaire which took, on average, 15 minutes to complete. (The full study report can be downloaded at www.meaning.uk.com.)

The survey drew its sample from a wide variety of sources to ensure good coverage, and also included some participants from previous years. Individual named invitations were sent by e-mail to executives in senior positions within each company.

To avoid over-representation, only one response from each company was accepted. No quota controls were applied, but sampling was carefully controlled to ensure that different countries were represented roughly in proportion to the relative size of each market, as reported in ESOMAR’s 2010 Global Report.

A total of 213 responses was achieved overall, with 79 from North America, 105 from Europe and 29 from Asia-Pacific. The survey is truly international, with contributions from 30 different countries. The response rate was slightly down from previous years, at 10 percent of the invited sample. Researchers are a notoriously difficult target group to research!

We are therefore especially grateful to those who made this research possible: all those who willingly participated in our survey; and, of course, Globalpark, who generously sponsored the 2010 survey and provided us with technical support for the fieldwork.

Predicted changes in interviewing mode

We asked respondents what changes they foresaw in the amount of work they would handle over the next three years in their quantitative research activities. We used a four-point scale, with 2 representing major growth, 1 modest growth, 0 for no change and -1 for any decline.

This is the first time since we began asking this question in 2006 that online research has been ousted from the top spot. That position is now held by mobile research (“self-completion with mobile devices”).

We must wait to see whether the industry’s expectations for mobile research are reflected in increased volumes. Mobile self-completion is still very much a minority mode - as was shown in another question, where we learned that just 7 percent of respondents said that their companies offered it. However, there may be a clue to the future of this technology: We found that a more substantial 17 percent of large companies are already delivering mobile research.

Predicted growth for other modes is much weaker - mixed-mode CAPI and Web, mobile CAPI and “other” mixed-mode all occupy a position midway between modest growth and no growth. Online research is still showing a modest growth score, but expectations have weakened compared to previous years as firms perhaps see conventional online research reaching its plateau. In other questions, we have found that Web research has accounted for just below 50 percent of research companies’ revenue since 2008 (47 percent in 2010), and that the number of companies offering it reached just over 90 percent in 2007 and has remained at that level.

Paper, however, seems to be coming down off the other side of the plateau. Companies are predicting a decline, and this is borne out elsewhere in the survey, as participants reported that paper now accounts for 13 percent of quantitative revenues - a drop of roughly one-third from the 21 percent recorded in the 2006 study.

CATI is proving to be more resilient than many industry commentators have predicted. Our participants put CATI’s fate slightly into the decline category, with average scores netting out at -0.16 (where 0 represents no change and -1 represents decline). This is again mirrored in the actual reported volumes, which have remained stable at around 25 percent of revenues since the 2006 study.
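
To make the scoring concrete, here is a minimal sketch of how such a net score can be computed, assuming the straightforward averaging implied by the scale described earlier; the response counts below are invented for illustration and are not the survey’s actual distribution.

```python
# Minimal sketch: each answer is scored 2 (major growth), 1 (modest growth),
# 0 (no change) or -1 (decline); a mode's overall score is the mean across respondents.
# The counts below are hypothetical, chosen only to land near CATI's reported -0.16.

SCALE = {"major growth": 2, "modest growth": 1, "no change": 0, "decline": -1}

def mean_growth_score(counts):
    """Average growth score for one interviewing mode."""
    total = sum(counts.values())
    weighted = sum(SCALE[answer] * n for answer, n in counts.items())
    return weighted / total

cati_counts = {"major growth": 10, "modest growth": 29, "no change": 90, "decline": 84}
print(round(mean_growth_score(cati_counts), 2))  # -0.16
```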

Social media use

As market research has been buzzing with ideas and opinions on social media research throughout 2010, we decided to include a few questions in our survey to explore the topic.

Given the recent arrival of social media research, it is not surprising that only 17 percent of companies (37 companies in total) say they are practicing it (Figure 2), indicating the method is still in its early-adopter phase. Volumes are low too: Social media research accounted for 5 percent or less of earnings for 24 of those 37 firms (roughly two-thirds of them) but some specialists appear to be doing much more, with seven of the 37 firms earning between 15 percent and 50 percent of their revenues from it.

Across the whole spectrum of research firms, there is clear interest in this method, with a further third (31 percent) currently experimenting with it and another third (32 percent) considering it for the future.

It is the large firms that are further along the adoption curve and, indeed, we have found throughout this study that large companies appear to be more technologically innovative. Thirty-three percent of large research companies already provide social media research, against 11 percent of small firms.

A surprisingly large proportion of companies (19 percent) say that they are unlikely to offer research using social media.

In another question, we asked whether social media research was an alternative to existing research methods, a new method in its own right or none of these. Sixty-six percent view it as a new research method in its own right; 14 percent see it as an alternative to qual; 5 percent see it as overlapping with quant; and 2 percent view it as a desk-research replacement. A sizeable 13 percent did not see it as belonging to any category of research. However, among the large research firms, almost twice as many (27 percent) see it sitting alongside qualitative research.

Social media analysis

Social media research creates its own unique challenges when it comes to processing, analyzing and interpreting the data: The task is almost like reviewing hundreds of groups or depths you never attended. We asked how the information gathered from social media research gets analyzed and offered a list of eight methods we’d heard mentioned in presentations or articles discussing the phenomenon.

As shown in Figure 3, a substantial majority of those conducting social media research (57 percent) use manual methods for their analysis, which exceeds all others in popularity. Close behind comes text mining, a technology-assisted method that still requires a high degree of manual input, used by 54 percent.

Other more technological methods are fairly widely used, but there seems to be scope for researchers to reduce manual effort and improve their sifting through large volumes of unstructured textual data by adopting more technology. It appears to be an area where best practices are still being established. Given the affinity already observed between social media research and qualitative research, there may also be an element of resistance to, or even outright skepticism about, applying technology that goes beyond simple word searches - as has been the case for many years with more conventional quant.
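
As a small illustration of the “simple word searches” baseline mentioned above, the sketch below counts how often a handful of topic keywords appear in a batch of posts; the posts and keywords are hypothetical, and real text-mining tools go much further (sentiment, entities, themes).

```python
# Hypothetical illustration of a basic keyword search over social media posts.
import re
from collections import Counter

def keyword_counts(posts, keywords):
    """Count occurrences of each keyword across all posts (case-insensitive)."""
    counts = Counter()
    for post in posts:
        words = re.findall(r"[a-z']+", post.lower())
        counts.update(w for w in words if w in keywords)
    return counts

posts = [
    "Love the new phone, battery life is great",
    "Battery drains too fast, but the screen is gorgeous",
]
print(keyword_counts(posts, {"battery", "screen", "price"}))
# Counter({'battery': 2, 'screen': 1})
```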

Sample router use with online surveys

For the first time this year, we asked companies about their use of routers with their online surveys (Figure 4). We explained within the question that routers allocate participants to surveys when they respond to an invitation.
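
As a rough sketch of that allocation idea, the code below routes an incoming respondent to the first open survey whose screening criteria they meet; the survey names, criteria and quota handling are all hypothetical simplifications, not any particular vendor’s router.

```python
# Hypothetical, simplified router: allocate a respondent to the first open,
# matching survey. Real routers also handle priorities, quota cells, de-duplication, etc.
from dataclasses import dataclass, field

@dataclass
class Survey:
    name: str
    target_completes: int
    completes: int = 0
    criteria: dict = field(default_factory=dict)  # e.g. {"country": "US", "age_min": 18}

    def is_open(self):
        return self.completes < self.target_completes

    def qualifies(self, respondent):
        if "country" in self.criteria and respondent.get("country") != self.criteria["country"]:
            return False
        if "age_min" in self.criteria and respondent.get("age", 0) < self.criteria["age_min"]:
            return False
        return True

def route(respondent, surveys):
    """Return the name of the first open survey the respondent qualifies for, or None."""
    for survey in surveys:
        if survey.is_open() and survey.qualifies(respondent):
            survey.completes += 1  # simplification: count the allocation as a complete
            return survey.name
    return None

surveys = [
    Survey("Auto brand tracker", target_completes=500, criteria={"country": "US", "age_min": 18}),
    Survey("General omnibus", target_completes=1000),
]
print(route({"country": "US", "age": 34}, surveys))  # Auto brand tracker
```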

It seems to us that the use of routers is a technology area that many companies have yet to fully explore. Over 90 percent of market research companies offer online surveys, yet only 22 percent use routers, even though they have great potential to increase response rates and attract more people more quickly within hard-to-reach groups.

Many of our participants were unaware of the extent to which their companies deploy routers. Part of the reason for this lack of knowledge may be that technology companies are dragging their feet on router development: half (50 percent) of the respondents who use routers rely on an own-developed solution, making it the most commonly used router technology. Only 24 percent use a router that is a feature within survey software and 9 percent use standalone router software, which suggests a lack of either availability or quality.

Again, as seems to be always the case with new technologies, large companies are stealing a march - 43 percent of large companies use routers compared with 16 percent of small companies.

Data sources

Some leaders of the largest global research companies are saying they see a future in which market research is less dependent on conventional respondent surveys and routinely makes use of existing data or passively-gathered data. We asked market research companies what proportion of their data is derived from surveys and from other sources. In this question (Figure 5), we asked firms to report the proportion of data derived from each of five named data sources in their revenue-based research activities. Globally, it appears that 76 percent of data used by market research companies is from surveys, meaning that almost a quarter is not.

There are large regional variations. Europe is the largest user of existing data sources, with 23 percent from this source alone, and a surprisingly low reported use of survey data, at 68 percent. The use of observational data is much higher in Asia-Pacific: 12 percent against the global total of 5 percent. North America, unusually, is lagging behind, with 86 percent of its data reliant on respondent surveys.

Passive data gathering, both where the respondent is aware and where they are not, remains very much a minority source; together the two account for just 5 percent globally. These figures are somewhat higher among large firms, which report double the amount for both sources. However, this is one area that many industry commentators expect to become increasingly important, as it offers a way out of the respondent-refusal challenge. Either way, the survey reveals an overwhelming dependence on conventional, newly collected survey data, though with some perhaps surprising inroads from other sources.

Usability testing

One consequence of researchers adopting more interactive components within their online surveys, such as questions where participants have to drag and drop or click on particular hot spots, is that the exercise becomes more complex and less obvious to the survey taker. Poor question design can lead to confusion among respondents and result in incomplete or even erroneous data being collected. Simple design considerations such as the starting position of an interactive slider can have a dramatic effect on the eventual average scores collected. The only reliable means to control for this high-risk source of error is to perform specific usability testing, similar to the tests software designers apply to new interfaces.
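
To illustrate the slider point with some simple arithmetic (the figures here are invented, not survey data): if a share of respondents never move the slider from its starting position, the recorded mean is pulled toward that default.

```python
# Illustrative arithmetic only: the mean actually recorded when some respondents
# leave a 0-100 slider at its default position instead of answering.

def observed_mean(true_mean, default_pos, share_not_moving):
    """Blend of the 'true' mean and the default position, weighted by non-movers."""
    return (1 - share_not_moving) * true_mean + share_not_moving * default_pos

print(observed_mean(true_mean=70, default_pos=50, share_not_moving=0.2))  # 66.0
print(observed_mean(true_mean=70, default_pos=0, share_not_moving=0.2))   # 56.0
```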

We had a suspicion that research firms, under pressure to get the next survey out, were being less than rigorous in the usability testing they perform when working with these kinds of interactive components. So we threw in a few questions about the quality-control processes that firms apply to their new surveys.

Overall, firms do test their surveys (Figure 6). In a question not charted here, all firms named one or more testing processes that they routinely subject their surveys to: 85 percent of researchers always test their own surveys and 70 percent claim to routinely pilot survey instruments prior to launch.

Our suspicions about the interactive components were confirmed when we asked about testing procedures for these. The question charted in Figure 6 was asked only of those firms that field surveys with interactive components: 19 percent of our sample do not do these kinds of surveys. Specifically, we asked: “If your survey includes interactive components such as drag-and-drop or questions deployed in Flash, do you perform usability testing on these components?” and provided the four answer options shown in Figure 6.

Just 59 percent of firms apply some form of usability testing “always or most of the time.” Overall, 83 percent of companies claim to conduct usability testing at least sometimes, but we are astounded by the candor of the 10 percent who rarely or never conduct it and alarmed by a further 7 percent who had no awareness of whether usability tests were performed.

Those in Asia-Pacific and especially North America seem to be somewhat more assiduous at usability testing than in Europe, where routine testing is recorded by just 51 percent of firms - a worryingly low level of quality assurance, given the known risks to the data.

Online sample sources

This question looks at the proportion of sample that comes from each source and is one we have featured most years since 2006.

It is striking that large companies make far greater use of their own panels than others. This is perhaps to be expected, given the cost and the scale of utilization required to make an in-house panel economically viable. As a result, large companies are also much less likely to call on access panels. The observation that large companies rely on samples provided by their clients much less than smaller firms do is not as easy to interpret.

It is also clear from the chart in Figure 7 that firms in North America are much greater users of access panels and have not developed in-house panels as much as firms in other parts of the world. Access panels have been around a little longer in the U.S. than elsewhere. There may also be a greater willingness to outsource in North America, where it is customary for quite large research companies to buy in most of their fieldwork and even outsource much of their DP - a model favored much less in Europe. But you may have a better idea.

Software in use

Every year, we ask respondents whether they use packaged, own-developed software or both (Figure 8). We are constantly surprised by how many market research companies use own-developed software. In many cases, they only use the software they’ve written themselves.

There are literally hundreds of companies developing software specifically designed for market research, yet many market researchers still see the need to develop their own. Why are the off-the-shelf packages not fulfilling their needs? Obviously no one company is going to find a package that precisely meets all of its needs, but with all that choice, we find it hard to believe that the off-the-shelf packages cannot sufficiently meet the needs of most companies, especially when balanced against the cost and risk of developing a solution in-house and then keeping it up to date.

For some reason, for CAPI, many companies seem to double up, with both an own-developed solution and a bought-in product. This is not the case for mobile CAPI and we are not sure why there is a difference in these two applications, since many of the commercial software packages on offer support both laptop and mobile CAPI. Mobile CAPI is a newer method and may be attracting newer entrants. Elsewhere in the study, it is clear that mobile CAPI has gained traction across the world, including North America, whereas large-format CAPI is much less prevalent in North America.

With analysis software, you might expect to see more own-developed software than in data collection, because it is in the analysis and interpretation that market research companies will try to differentiate themselves. This is not borne out by any greater observed use of custom software in the analytical process. What is different is the very low number (4 percent) who only use custom-developed software. But then, who doesn’t have Microsoft Excel on their desktop, and which market research company does not have someone, somewhere, using IBM SPSS Statistics?

Distribution methods

Each year we look here at the percentage of projects that use each type of client deliverable.

It is instantly clear from the chart in Figure 9 that PowerPoint continues to dwarf any other distribution method. However, a more interesting observation is that many distribution methods that are relatively rare in other parts of the world are commonplace in North America. Intriguingly, this applies to both technologically-advanced methods and more traditional low-tech techniques. For example, digital dashboards are used for 10 percent of projects in North America compared with 4 percent and 5 percent in Europe and Asia-Pacific; but also good ol’ printed tables are provided as a deliverable for 13 percent of North American projects compared with 5 percent of those in Europe and 4 percent in Asia-Pacific.

Indeed, many of the bars in the North American chart are higher than for Europe and Asia-Pacific, suggesting that North American market researchers are more versatile with the deliverables they are providing to their clients than elsewhere in the world.

Disappointingly for technology champions, we have seen little recorded change in the adoption of the more technologically-advanced distribution methods over the years. Interactive analysis has remained at close to 10 percent since 2006. We may even have a situation where PowerPoint is increasing its vice-like grip: 48 percent of projects were delivered in PowerPoint in 2006; by 2010 this had crept up to 53 percent (51 percent in Europe and North America).

The good news from 2010, certainly for the environment, is that there is a definite worldwide downturn in the use of printed tables as a deliverable. This must also project a more forward-looking image for market research. In 2006, 23 percent of projects used printed tables as a delivery method whereas this has now dropped to 8 percent. There are probably few clients who will mourn their passing.