Editor's note: Kimberly Struyk is vice president, client strategy, at CRM Metrix, a Secaucus, N.J., research company. Struyk can be reached at 201-617-8181 or at kstruyk@crmmetrix.com. This article appeared in the August 23, 2011, edition of Quirk's e-newsletter.

 

During my first week immersing myself in the digital research arena, I was challenged to discuss the difference between the methodologies used to measure digital versus traditional advertising. Four years later I am still asked the same question. A lack of understanding of the differences and benefits of each methodology can be a barrier to entry for digital research, especially if researchers are not armed with the knowledge needed to help clients feel comfortable with the digital process.

 

The main difference 

 

First and foremost, let's define what is meant by traditional and digital research. The main difference between the two types of research comes down to the sampling techniques used. For the purposes of this article, traditional research is defined as research that uses a panel company or third-party sample provider to gather respondents/survey participants and collect data to accomplish the research objective. Conversely, digital research often measures a live, in-market experience within a real-time format that recruits the end user as a survey participant.

 

It should be noted that in most cases today, traditional sampling techniques are carried out in an online format (especially when evaluating media such as TV, print or even online usability testing), but this should not be confused with the meaning of digital research applied here. Again, the differing approach to survey respondent recruitment is the core distinction between traditional and digital research.

 

Secondary point of divergence

 

Additionally, within this definition, there is a secondary point of divergence between traditional and digital research that comes into play when evaluating the performance of an advertising platform. Aside from sampling methodologies, the type of advertising measured should also be taken into consideration. Traditional research typically evaluates media within an offline category such as TV, print or radio, whereas digital research measures an online advertising medium such as a brand Web site, online banner ad, social media page or online video. Here, the offline-versus-online distinction is critical because the people using each form of media often have inherently different characteristics.

 

Three valid concerns

 

Having established the main way in which traditional and digital research differ - sampling - let's discuss what this really means for the research itself. From a traditionalist perspective, there are three valid concerns about digital research.

 

Concern No. 1: To control or not to control the sample?

 

This concern stems from the principle that research-based business decisions should rest on a sound sample drawn from the product's primary market and target audience. This practice is the backbone of traditional research design. Within digital research, however, the sample is typically not controlled (although it can easily be, if required).

 

Things are different within the digital space and here is why: When making digital placements, it is often unknown who will respond to that placement. This means that an array of individuals can arrive at your brand Web site or Fan page who may not yet be ready for convincing (e.g., consumers landing on a page by mistake via keyword searches gone awry). Controlling who is spoken with (as in traditional methods) would potentially eliminate a new target audience that deserves attention. Or worse, the true value of the placement may be overlooked because those outside the target would have been eliminated from the sample at the outset. Traditional research does not typically investigate the quality of consumers coming through, simply because of the nature of the research design.

 

That said, digital research is about measuring the quality of the traffic coming to interact with the initiative. A recent case study makes the point: had the sample been controlled to eliminate all traffic outside the target market, business plans would have been based on incorrect findings. The results in Figure 1 were an eye-opener for our client, who thought they were doing the right thing by partnering with a philanthropic organization (a move I usually recommend, based on past research learnings).

 

 

However, this case study reveals that the profile of visitors involved with the philanthropic organization does not align with the target audience for the brand. At the end of the day, conversions are almost nonexistent because those coming to the online experience do not have the means to buy the product, which carries a high price point.

 

With this information, we were able to move the client away from this partnership, improving KPIs and the potential for conversion. Again, the moral of the story is to let the sample fall naturally when gauging the success of a digital initiative.

 

Concern No. 2: But digital respondents have a predisposition towards my brand!

 

Yes, exactly! As we digitalists say, "What is wrong with that?" The consumers who choose to interact with a particular brand are the ones who can offer the most valuable thoughts, opinions and feedback about it. This is especially important if the brand team plans to invest in optimization. It is, however, the exact opposite of how traditional research operates: the numbers will look artificially inflated, running higher than scores reported among those who are not predisposed to take action on the brand. In this case, it is important for the researcher to tease out exactly what the research is trying to prove.

 

Two things to remember here: First, if you are assessing the impact of the experience, then measure among the population that is engaging with the experience by setting up a control pre/post methodology. Second, if the research needs to prove the value of having the experience versus not investing in the experience at all, then a digital/traditional hybrid methodology should be employed. This reveals KPIs by comparing brand performance within the general online population against scores achieved through the actual experience itself.
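The pre/post logic above can be sketched in a few lines. This is a minimal illustration, not actual CRM Metrix output: the KPI scores and cell names are hypothetical, and a real study would also test the difference for statistical significance.

```python
# Sketch of a control pre/post comparison: visitors surveyed before
# exposure (control cell) vs. after engaging with the experience (test
# cell). All ratings below are hypothetical 1-10 purchase-intent scores.

def mean(scores):
    """Average of a list of KPI ratings."""
    return sum(scores) / len(scores)

# Control cell: surveyed before exposure to the experience.
pre_exposure = [6.1, 5.8, 6.4, 5.9, 6.2]
# Test cell: surveyed after engaging with the experience.
post_exposure = [7.0, 6.8, 7.4, 6.9, 7.1]

lift = mean(post_exposure) - mean(pre_exposure)
print(f"KPI lift attributable to the experience: {lift:.2f} points")
```

The same comparison structure applies to the hybrid approach, with the control cell drawn from a general-population panel instead of pre-exposure visitors.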

 

One more nuance differentiates this from how traditional research is set up. When using a hybrid approach, set the digital experience as the priority over traditional data collection, allowing the panel cell to profile the natural audience coming into the online experience. This is the only way to gather a true evaluation of the experience itself.

 

Concern No. 3: Those who choose to answer a survey during an online experience must be happy with the experience, making the data skewed.

 

To this, the digitalists respond, "Then how are problem areas coming through in the results?" When we look at the data closely, we see a representative distribution of satisfied and dissatisfied survey respondents. Moreover, we provide real-time solutions that assess what is happening among the two distinct audiences thought to be the most dissatisfied with the online experience. For this objective, data is gathered specifically from bouncers (those who leave the experience in under one minute) and abandoners (those who leave the site without completing a transaction) to focus on what should be improved about the experience to keep them from defecting. That alone should show how digital research methodologies gather opinions from both sides of the story - pleased or not.
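The bouncer/abandoner distinction above amounts to a simple classification rule. The sketch below is a hypothetical illustration; the field names and the one-minute threshold are assumptions drawn from the definitions in the text, not a real survey-routing implementation.

```python
# Sketch of tagging a visit so the appropriate exit survey can be served.

def classify_session(seconds_on_site, completed_transaction):
    """Classify a visit as bouncer, abandoner or converter."""
    if seconds_on_site < 60:
        return "bouncer"      # left the experience in under one minute
    if not completed_transaction:
        return "abandoner"    # stayed, but left without transacting
    return "converter"

print(classify_session(25, False))   # bouncer
print(classify_session(300, False))  # abandoner
print(classify_session(300, True))   # converter
```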

 

Segmenting the sample

 

At the end of the day, if these three concerns do creep into the research and cause a bias, we can turn to our old tool of segmenting the sample. This allows us to filter out anybody who should be excluded from the research while also comparing across individuals with a similar mind-set. Hopefully, that alone will raise the comfort level of those still skeptical.
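Segmenting after the fact can be sketched as a simple filter-then-compare step. The respondent records and the "in_target" flag below are hypothetical; in practice the segmentation criteria would come from the study's screening questions.

```python
# Sketch of post-hoc sample segmentation: filter out respondents who
# should be excluded, then report KPIs for the remaining segment.

respondents = [
    {"id": 1, "in_target": True,  "satisfaction": 8},
    {"id": 2, "in_target": False, "satisfaction": 4},
    {"id": 3, "in_target": True,  "satisfaction": 7},
    {"id": 4, "in_target": False, "satisfaction": 5},
]

# Keep only respondents matching the segment of interest.
target_segment = [r for r in respondents if r["in_target"]]
avg = sum(r["satisfaction"] for r in target_segment) / len(target_segment)
print(f"n={len(target_segment)}, mean satisfaction={avg:.1f}")
```

The same filter can be inverted to profile the excluded group, which is how the natural, uncontrolled sample is preserved while still enabling like-for-like comparisons.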

 

Something to think about 

 

However, the purpose of this article is not only to educate but to leave you with something to think about. The most interesting finding I recall is how the two types of research play out when measuring the same experience. When we tested an online experience in a live environment (a real-time Web site experience) against the same experience in a non-live environment (a forced exposure using a panel sample), the forced-exposure cell, lacking true involvement or predisposition, did not yield results as fruitful for the analysis. The open-ended diagnostics reveal that consumer sentiment is much stronger in a true real-time environment than in a forced-exposure environment.

 

From the case study results in Figures 2 and 3, you will notice that satisfaction scores are lower among those actually interacting with the Web site experience on their own. Why? Because they rate the site against their true expectations and purpose for the visit; being highly attentive to the experience, they are more rigorous in their ratings.

Tangible benefits 

 

Aside from the points above, there are tangible benefits of the digital research model as compared with traditional research. The digital arena boasts real-time data collection, delivering instant results without the fees of a pay-per-complete model. The result is greater data integrity and reliability without breaking the bank or settling for smaller base sizes.