Editor’s note: Marija Smudja is advertising insights director at research firm EyeSee. Dobrinka Vicentijevic is shopper insights director, EyeSee. Sanja Cickaric is digital insights director, EyeSee. Mila Milosavljevic is senior insights manager, digital, EyeSee. They are all based in Belgrade, Serbia. 

Being a market researcher sometimes entails explaining complex research outputs to first-time clients who then have to retell the insights to internal stakeholders. For behavioral researchers, however, clarifying complex results to puzzled clients who are new to implicit research is part of the day-to-day work. We have compiled useful advice and do’s and don’ts to help corporate researchers make sense of the thrilling world of implicit insights.

As we previously wrote, getting contradictory behavioral and conventional KPIs is not a bad thing – sometimes complexity helps us better understand the bigger picture and dig deeper into the tested material. But what are some of the ways insights professionals simplify communication with clients and help them navigate behavioral results with curiosity, ease and confidence?

Key message and framing set the tone for the conversation

One of the most important things to keep in mind is that for end clients, such as CMOs and CEOs, how a particular KPI was measured or obtained (or whether different KPIs showed contradictory results) is often not a top priority. What matters more is what the results mean for their business.

The presentations and reports that make their way to end users and decision makers should run no longer than 15 slides or a 10-minute presentation. Whether the ad wasn’t visible enough (an implicit insight) or wasn’t clear enough (a survey insight suggesting additional investment in reworking the creative), decision makers care about which decision to make (e.g., increase the number of touchpoints or design a new creative solution), not about how a KPI was measured. For us, that means being able to translate data into concrete steps they can take in their business.

The way you frame the results when presenting them to stakeholders will set the tone for the conversation that follows. Focus on one key message you want to relay and frame everything else accordingly. Also, keep in mind that stakeholders are often in difficult situations, faced with big decisions that can have a huge impact on the company – for better or worse.

Making peace between implicit and explicit KPIs

It often happens that our implicit and explicit measurements don’t line up. In such cases, we tend to place a slightly bigger emphasis on behavioral results. There was a time in market research when declarative data was at the center of researchers’ focus. Today, we are much more aware of the advantages of behavioral data and, what’s more, this type of data is more accessible to us thanks to the different methodologies we use (such as eye tracking and facial coding). Behavioral data is largely immune to the inevitable shortcomings of human memory and biased opinions. It allows us to explore what consumers do, and how and when they do it, very often in real time. This doesn’t mean that information coming directly from respondents should be disregarded.

On the contrary, the approach is unique precisely because it combines these two types of indicators. The next level of quality in any analysis comes from integrating them in an effort to understand the whole consumer. Results that appear contradictory can, in fact, be complementary.

We strive to make our final interpretation a cohesive whole – one that lends itself to storytelling. This is one of our do’s: always present clients with a complete, actionable story.

One of our don’ts is communicating fragmented KPIs (both implicit and explicit) and leaving the client to reconcile and unify separate data points.

An additional step is supporting the client while they prepare to relay the story to their stakeholders. Clarify precisely why these contradictions are OK and provide useful, easy explanations for such scenarios. Create additional reports that are streamlined and adjusted specifically for the broader audience.

Tackling complexity with key stakeholders

Speaking of storytelling, there are several ways you can help it along. People are usually averse to what they are not familiar with. In business, that is even more so, because decisions based on new or different data carry risky consequences. That’s why, when we need to explain the value of methodologies that are not widely used, we try to define everything through familiar concepts.

In the case of explicit vs. implicit measurements, we wouldn’t separate ideas and concepts into behavioral vs. traditional, because that split doesn’t really matter. In essence, they should both measure the same thing – the performance of a given advertisement, package design or e-commerce page – and are simply evaluating it from different angles.

Describe implicit/behavioral measurements as the base: if we were to compare them to Maslow’s hierarchy of needs, these would be the basic needs such as safety, food and water – a set of standard criteria that all material (whether an ad, pack design or digital content) should be able to satisfy. That is, any tested content should get noticed in its surroundings and trigger a reaction, i.e., perform well on implicit measurements. Only when we have this foundation can we build upon it and see whether the ad is clear, relevant or fun. Excellent performance on implicit tests is a standard, minimum requirement that must be met before the material can be developed or evaluated further. Tell stakeholders that relying only on surveys and explicit data is like trying to work on your self-actualization while you are hungry and have no shelter. Well-known concepts like Maslow’s hierarchy of needs are a handy metaphor for the relationship between methods that don’t exclude each other.

What might create resistance when introducing new, let alone contradictory, KPIs is the fact that most clients already have standard ways of measuring performance. That’s why we would never suggest a 180-degree switch from current measurements (e.g., surveys) to entirely new modes (e.g., behavioral), but instead recommend running behavioral measurement in parallel with the traditional, as a complementary method. This way, clients keep the KPIs that are already part of their business and still open the door to innovation in research.

What convinces people to add new methods to their insights arsenal?

When it comes to persuading stakeholders to add behavioral testing to their research arsenal, two great reasons usually turn them into believers. First, even though virtual shopping is not identical to shopping in front of a real shelf, and a task-based e-commerce study is not the same as real online shopping, implicit research is, simply put, cleaner than self-reported data: it sidesteps the plethora of biases at play in self-reporting. Second, using a monadic design and A/B testing in studies (i.e., multiple cells that we can compare) is much more reliable than comparing results to offline, standard research techniques – the market research equivalent of comparing apples to oranges.

Another thing our clients need help with is understanding visibility data: What number is a good number when it comes to visibility? Here again, the answer comes from an A/B design or from introducing a benchmark value to compare the results against.
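To make this concrete, here is a minimal sketch in Python – the visibility flags and the benchmark below are entirely hypothetical – showing how a single visibility number becomes interpretable only once it sits next to a benchmark and a parallel cell:

```python
# A minimal sketch with entirely hypothetical data: a visibility score
# only becomes meaningful next to a benchmark or a parallel cell.
from statistics import mean

# Hypothetical per-respondent flags: 1 = the ad was looked at, 0 = missed
cell_a = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]  # new creative
cell_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]  # current creative
CATEGORY_BENCHMARK = 0.55                # assumed norm from past studies

visibility_a, visibility_b = mean(cell_a), mean(cell_b)
print(f"Cell A visibility: {visibility_a:.0%} vs. benchmark {CATEGORY_BENCHMARK:.0%}")
print(f"Cell A vs. cell B lift: {visibility_a - visibility_b:+.0%}")
```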

Are insights people less prone to a rigid mind-set?

Further, it’s not only stakeholders who have trouble with mixed results – sometimes researchers do, too. A complete analysis of contradictory implicit and explicit measurements requires a holistic approach and always looking for the reasons why. For example, we can have an ad with high emotional engagement and excellent holding power in its environment – great results on implicit measurements – but, at the same time, very low likability and brand fit. On the surface, these are opposing KPIs. With more in-depth analysis, however, we might learn that they are not opposing at all: the ad is triggering a high level of emotion, but mostly difficult or negative emotions (disgust, fear, sadness), which would explain the low likability. So it is not enough to take a quick look at the KPIs and check whether the implicit and explicit match; we have to look for the reasons. Only when we understand what each specific KPI is telling us can we paint the whole picture.
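As an illustration only – the emotion shares below are invented, not output from any real study – a few lines of Python show how decomposing overall engagement by valence can reconcile a strong implicit score with weak explicit likability:

```python
# Illustration only – invented emotion shares, not real study output.
# High overall engagement made up mostly of negative emotions can sit
# perfectly logically next to a low explicit likability score.
emotions = {
    "joy": 0.08, "surprise": 0.10,                    # positive reactions
    "disgust": 0.25, "fear": 0.18, "sadness": 0.21,   # negative reactions
}
positive = {"joy", "surprise"}

engagement = sum(emotions.values())
negative_share = sum(v for k, v in emotions.items() if k not in positive) / engagement
print(f"Overall engagement: {engagement:.0%}")
print(f"Share of that engagement that is negative: {negative_share:.0%}")
```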

Direct preference choice and its perils 

Make the study design clear upfront, because it will simplify your life and conversations down the road. Also, use a monadic design. Even so, at the end of a monadic survey or test, we sometimes get results that are confusing for clients.

For example, if we test three new packaging designs and want to compare them with the existing packaging, we will have four cells, so that each respondent sees only one of the new designs throughout the test. However, clients sometimes decide to put respondents in an unnatural situation: having them compare the packaging they were exposed to for the entire duration of the test with the existing packaging and say which one they prefer – a so-called forced choice.

This is done in case the resulting data doesn’t tip clearly in favor of any of the proposed new designs, in order to gain at least some insight into how the new designs perform. So, what is the issue? Say the winner on all of the most important KPIs is Design 1, but in a direct preference choice, Design 1 loses to the current design, which fared worse on all key KPIs and criteria. This confuses clients immensely. In such situations, we explain that this particular bit of information is not as valuable as the rest of the data, because it does not put the respondent in a realistic situation. People will never see two package designs of the same product on the shelf and have to choose between them. Additionally, when people face a choice, they are more inclined to pick the less risky option (i.e., the old design, which is safer and more familiar to them).
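For readers who want to see the design itself, here is a minimal sketch in Python, with invented respondent counts and KPI values, of a four-cell monadic setup: each respondent evaluates exactly one design, and cells are compared on their KPIs afterward rather than through a forced choice:

```python
# A minimal sketch of a four-cell monadic design (all numbers invented):
# each respondent is assigned to exactly one cell and never compares
# designs directly; cells are compared on their KPIs afterward.
import random
from collections import defaultdict

designs = ["current", "design_1", "design_2", "design_3"]  # four cells
scores = defaultdict(list)

random.seed(7)  # reproducible illustration
for respondent_id in range(400):
    cell = designs[respondent_id % len(designs)]  # balanced assignment
    # Hypothetical KPI, e.g., purchase intent on a 1-5 scale
    scores[cell].append(random.randint(1, 5))

for cell in designs:
    mean_kpi = sum(scores[cell]) / len(scores[cell])
    print(f"{cell}: mean KPI {mean_kpi:.2f} (n={len(scores[cell])})")
```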

Making the complex manageable

Trying out implicit research might be a big step for researchers used to conventional research, particularly if they have substantial historical databases and years of data to lean on. It is the researcher’s responsibility to clarify any issues along the way, allowing the predictive power of the research to increase with the added value of a new, complementary data set.