The cure for infophobia

Editor's note: Marco Vriens is managing director of strategic analytics and senior vice president of methodology at The Modellers, a Salt Lake City, Utah, research firm. Vriens is based in La Crosse, Wis.

Many organizations fail to take advantage of the information that’s available to them. Anecdotes and a review of business articles, books and papers suggest that decisions are often based on something other than data and insights. For example, an article in Information Week reported that most retail managers responsible for price-setting prefer gut feel; only 5 to 6 percent reported using decision support systems to help them leverage insights to set prices better. Similarly, a study found that 40 percent of major decisions are based not on facts or insights but on the manager’s gut. If that is true, then the percentage of “minor” or “less-major” decisions based on gut feeling is likely to be a lot higher.

Even when decision makers do take advice (read: accept insights), they may not accept it fully. Research has found that decision makers adjust their initial opinion by only about 25 percent toward the advice and insights offered by others.

Based on my experience at various Fortune 500 companies, I have seen the types of action that can drive the acceptance and correct usage of insights and help improve the many decisions made throughout an organization. Four things are beneficial to making sure insights are known, accepted and used: direct persuasion; indirect persuasion; processes for quality decisions; and interactive decision support tools.

Direct persuasion

Market research insights are often communicated to a direct stakeholder – someone who may have asked for the insights or commissioned the research. Whether the recipients of the work will accept the insights or the advice depends on characteristics of the insights, i.e., perceived credibility, actionability and acceptability. It also depends on the characteristics of the messenger: seniority, likeability, similarity.

Decision makers take a risk if they act on insights, as the insights may not be valid. This is called information usage risk. The more credible the insights are, the lower the perceived risk and the higher the likelihood the decision maker will act on the insight. Credibility is a function of 1) how well the insights are understood, 2) the rigor of the methodology used, 3) the degree to which insights are validated more broadly and 4) the degree to which the insights can be given meaning in a certain business situation.

Research and analytics professionals should articulate the methodological rigor (or lack thereof) and emphasize it during the communication of the insights. I recommend summarizing the key methodological features, both strengths and weaknesses. What can and what can’t be inferred from the data and analyses should be called out. I also recommend assessing the degree of validation and the evidence to justify influencing a decision. The key is to communicate what steps have been taken to generate high-quality data and insights and to point out the limitations of the insights. Studies have shown that this helps get insights adopted. Lastly, making insights meaningful for a given business context will increase credibility. This can be done in a variety of ways. A simple quote from a customer can make a more general insight more credible. Alternatively, weaving several connected insights into a “story” will help.

In general, I have seen three factors that impact perceived actionability: alignment to short-term objectives, timeliness and clear ownership.

Understanding the business that one is trying to influence or impact is key to delivering insights that are linked to short-term objectives and to showing whether there is a link to the bottom line. Decision makers are usually much more motivated to act on advice that will improve near-term results or help them deliver on their annual commitments than they are to pursue bigger benefits that may be several years away.

Timeliness is a factor whether insights are meant to support a specific decision or to offer broader understanding. Marketing/product managers and directors work on tight business timelines, where decisions need to be made according to a certain work-back schedule. If insights are to inform such decisions, then obviously their availability needs to be in sync with the timeline of the internal decision maker; otherwise a window of impact will be missed.

If the actions implied by the insights do not have a natural landing place within the firm, nobody will feel a great urgency to act. Also, in some cases action doesn’t happen because the idea wasn’t invented by the person or team that should act on it – the “not invented here” syndrome. In other words, the actions implied by the insights do have a natural landing place in the firm, but the idea did not originate with that team and, as a result, the team does not feel the passion to act.

Even if insights are credible and actionable, a decision maker may still not accept them. Any insight suggesting action that runs counter to a decision maker’s agenda or goes against a recently-made decision or challenges current organizational structure faces significant resistance. In such cases, acceptance could imply regret or embarrassment and hence will be resisted.

In addition to the credibility of the insights, messenger credibility plays a role. When an individual or team is trusted, this trust lowers the information usage risk and hence increases the likelihood of the insights being used. This is extremely important, as insights can only become winning insights if they are appropriately used. The implication is that there are situations where the person who produces the insights may not be the best person – or should not be the only person – to present the insights to an internal stakeholder. This issue is particularly important when insights are presented to senior executives. Related to trust are characteristics such as likeability and similarity: the more the person who communicates the insights is liked by, or similar to, the recipients, the higher the likelihood of the advice being accepted.

Indirect persuasion

In many cases insights generated could be useful to a broader group of decision makers – many of whom the insights professional or team may not have direct contact with. For these broader audiences an insight can only inform a decision if it is known by them when decisions are made or when certain business issues are being reviewed. A recent Accenture report showed that high-performance firms have something in common: the speed with which they get the right information in the hands of the right people. Therefore, it is important to make it easy for decision makers to find the insights. Just as a consumer needs to get exposed to advertising multiple times before the advertising message sticks, decision makers and influencers may need to get exposed to insights multiple times before they stick and are remembered when the time comes to use them.

To prevent the belief among decision makers and influencers that relevant insights are too hard to find, an insights team needs to make them easy to find. If the relevant insights can’t be found, a decision maker may proceed without them, ask someone to track down the relevant insights or commission new research or analyses. Whichever path the decision maker chooses, the result is a loss: suboptimal or bad decisions, or wasted time and resources.

Joost Drieman, former director of marketing intelligence at Cisco, has said, “We must make sure our stakeholders see and read what we would like to share with them.” He uses a variety of channels to make sure insights are known. Using different channels to infuse insights in various formats – PowerPoint decks, e-mail summaries, in-person presentation, interactive tools, etc. – increases the frequency with which some stakeholders get to see and consume the insights.

Paradoxically, information overload can be an obstacle as well; business decision makers are getting overwhelmed by market research and customer data. It becomes too hard to glean the relevant insights, which results in either ignoring the data and insights altogether or picking them selectively. Decision makers tend to focus on those insights that are most readily available instead of using all of the available – but harder to obtain or track down – insights that pertain to the business problem at hand. This is often worse than not using any information at all. This is why it is so important to talk about and leverage validated insights, because again it involves reducing perceived information usage risk.

Examples of insight-delivery channels that I and others have used successfully include: e-mails; providing overviews of what there is and what is coming (a research roadmap); online portals where the decision maker can quickly find relevant insights and become familiar with them; marketing decision support tools to enable reviewing possible future scenarios; and, lastly, insights day, a full day devoted to discussing insights with a diverse group of stakeholders.

Processes for quality decisions

More work may still be necessary to ensure that decision makers actually infuse their decision-making with the insights in the right way and that intended actions are actually taken. Thus, for a firm to profit from insights, it should have systems and tools in place to ensure the insights are acted upon correctly. Specifically, I recommend using a planning process, a review process and a follow-through process:

Planning: This process would require that executives articulate the consumer facts, trends and insights that support the action choices that they are about to make. For example, a firm can require that any chief marketing officer have a marketing plan that contains the relevant insights that support their specific programs, preferably even the insights that might not support the intended marketing actions. An example of such a process has recently been kicked off at GE Healthcare, which recognized that it simply needed more rigorous plans for its solutions. Ryan Heath, global marketing manager and the person responsible for driving this initiative, has said “... the Marketing Plan Blueprint online tool asks marketers the imperative questions that help guide them in their market, customer and competitive analyses. Then, it aligns the appropriate tools to help marketers derive and communicate insights from these analyses.”

This tool has been rolled out to all marketers in GE Healthcare (including the chief marketing officers). Anyone who produces a marketing plan needs to use this marketing blueprint tool.

Review: Each plan – marketing or otherwise – is reviewed by the business executive and someone from an insights team. The insights team member acts as the rebuttal leader; in other words, he or she needs to review whether all relevant insights have been used, whether the insights quoted are correct, etc. I have seen many business/program plans with “customer data” that had no reference to where the data, analyses or insights came from. That is a recipe for errors.

Follow-through: I have been in situations where the insights were presented, the executives agreed on the validity of the insights and on the action that needed to be taken, and yet nothing happened or progress got stuck. I had to follow up and check in with these stakeholders to find out why the intended actions did not happen. Things come up all the time in business: the manager of the team that needed to pursue the action decides something else is more important, or the team runs into something that in hindsight was not as clear as it seemed, and progress halts. To mitigate situations like these, and to ensure the pursuit of positive impact doesn’t get stuck somewhere in the firm, it’s useful to have a process for staying engaged with the person or team who needs to act, up to the point where the actions actually happen.

Another part of the follow-through process is the in-market tracking of results. When, in collaboration with your stakeholders, a set of actions is defined, one also needs to articulate what impact these actions are expected to have and when, and a plan needs to be agreed upon that details how evidence regarding impact (or lack thereof) will be gathered. This is important for two reasons. One, the sooner you have evidence of whether or not something works, the sooner you know you are either on the right track or need to rethink your actions. Two, actively and consistently engaging in evidence-gathering ensures the firm will engage in ongoing learning about what works and what doesn’t.

An additional benefit of using a process for both the planning and the review phase is consistency – doing things well not just occasionally but every time. Henkel, a German adhesives company, made two branding decisions with very different outcomes; the example is based on a Harvard Business Review article by Kashani (1989). The first decision pertained to Henkel’s Pattex brand. The strategy Henkel wanted to implement was to make Pattex an international umbrella brand by putting fast-growing products under it. Initially, this idea received strong opposition from Henkel’s country managers. To validate whether the brand-extension plan was a good one, Henkel tested it with consumers, and the positive consumer feedback persuaded the country managers. The strategy was implemented and was an immediate success.

The second decision involved Henkel’s Pritt brand. A similar strategy to the one used for Pattex was considered. In this case, after the success of Pattex, the country managers endorsed the idea immediately. However, consumer survey results indicated the harmonized product line with Pritt as the umbrella brand might not be enough to turn around the broader Pritt brand. Yet Henkel still went ahead with the Pritt umbrella brand strategy and it failed. If a planning and review process had been in place, Henkel would likely not have made this mistake. At the end of the day it is not only about making a better decision, it’s about making consistently better decisions than your competition.

Interactive decision-support tools

Acting on a single insight, or on selective insights, without reviewing a broader set of insights should be discouraged. Having tools such as an insights search engine is a good idea but may not be sufficient when insights are based on advanced analytics that enable users to look at many different scenarios. In those cases interactive decision support tools (IDSTs) may be needed. An IDST can be defined as a tool that transforms business data and insights into: simple reports and graphical representations of selected data; summary insights; and predictions of the impact different action scenarios will have on revenue, profits, etc.

These tools are usually made for a specific user or internal stakeholder group to help them make decisions more easily and effectively. If well-designed, such tools can highlight the connection between certain marketing actions and the most likely result in terms of business success and can therefore be a great option to help make better decisions.

These types of marketing decision support tools have been proposed and used for resource allocation, sales force optimization and pricing and branding decisions. Especially in situations where a lot of data needs to be reviewed in a somewhat standard fashion, an IDST can provide great benefits. We distinguish two types of IDSTs:

  • A tool that helps a decision maker quickly sift through a large amount of data and/or insights, zooming in on a relevant subset in a way that helps the decision maker ask the right questions or narrows the set of possible directions in which a decision needs to be pursued.
  • A tool that calculates many different (hypothetical) scenarios in a way that helps the decision maker quickly home in on the scenarios most likely to lead to the best decisions. In this second case, we have an underlying (e.g., statistical) model that predicts the outcomes of various hypothetical scenarios (these scenarios are usually defined on a number of key drivers). For example, if the analytics team produced a demand function that models demand as a function of prices, seasons, promotions, etc., an interactive tool could be built that would allow managers to review several pricing scenarios and see what the impact on sales and profits would be, as sketched below.
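
To make this second type concrete, here is a minimal sketch of such a scenario calculator in Python. The log-linear demand form, the coefficient values (price elasticity, promotion lift), the unit cost and the example scenarios are all illustrative assumptions rather than estimates from any actual study; in practice the analytics team would plug in the demand function estimated from the firm’s own data, and the tool would simply expose it to managers.

import math
from dataclasses import dataclass

@dataclass
class Scenario:
    price: float          # unit price under this scenario
    promo: bool           # whether a promotion runs
    season_index: float   # seasonal demand multiplier (1.0 = average season)

def predicted_units(s: Scenario) -> float:
    """Illustrative log-linear demand model (coefficients are assumed, not estimated)."""
    a, b, c = 9.0, -1.8, 0.25   # assumed intercept, price elasticity, promotion lift
    return math.exp(a + b * math.log(s.price) + c * (1 if s.promo else 0)) * s.season_index

def evaluate(s: Scenario, unit_cost: float = 4.0) -> dict:
    """Translate predicted demand into the business outcomes a manager cares about."""
    units = predicted_units(s)
    revenue = units * s.price
    profit = units * (s.price - unit_cost)
    return {"units": round(units), "revenue": round(revenue), "profit": round(profit)}

if __name__ == "__main__":
    # A manager reviews a handful of pricing scenarios side by side.
    scenarios = [
        Scenario(price=9.99, promo=False, season_index=1.0),
        Scenario(price=8.99, promo=True, season_index=1.0),
        Scenario(price=10.99, promo=False, season_index=1.2),
    ]
    for s in scenarios:
        print(s, evaluate(s))

In a real IDST the same logic would sit behind a simple interface – a spreadsheet, dashboard or web form – so a manager can change the inputs and immediately see the predicted business impact.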

Results of studies have shown that the use of an IDST can improve both the efficiency and effectiveness of decisions. Just as with the insights themselves, adoption of a marketing decision support system is something that needs to be managed. This challenge involves two components: decision makers need to be convinced to use a computer-based tool at all, and they need to be convinced that the recommendations made by the tool will result in better-quality decisions than gut feel or whatever information and insights happen to be readily available. Four factors have been found to affect the adoption of an IDST: ease of use, training, feedback on upside potential and feedback on how to improve one’s own mental model.

Better business performance

Decision makers should draw on all of the insights available to them because there is strong evidence that effective use of information leads to better business performance. Even if you have access to the same insights as your competitors you can still gain an edge by making better use of them. With the concepts outlined above as a guide, you can develop a strategy for improving your organization’s handling of data and insights and ensure that future decisions are fully and profitably informed.