Ongoing maintenance required
Editor’s note: Jamie Baker-Prewitt is senior vice president, director of decision sciences at Cincinnati research firm Burke Inc.
Business managers use data to reduce the risk of making bad or flawed decisions. As providers of data, information and recommendations for action, survey researchers must ensure the validity and reliability of the decision support they provide to managers, particularly when managers use that information to decide how precious resources are expended. A common example of such decision support comes from what researchers call “trackers”: programs of research aimed at periodic, and sometimes continuous, measurement and reporting of consumers’ attitudes and behaviors.
Approximately 32 percent of the revenue from online research conducted by U.S. marketing research firms in 2006 came from tracking studies.1 While tracking studies vary in design and usage, all are intended to inform important managerial decisions spanning various elements of marketing and organizational action. Despite the obvious impact of trackers on whether companies achieve their business objectives, little published material exists on critical success factors for tracking research.
Differing objectives and marketplace characteristics should dictate how tracking programs are designed, but some best practices apply universally. As a provider of survey research and industry education with extensive experience in tracking research, Burke offers the following set of critical success factors for designing and implementing actionable tracking programs.
- Make decisions about measurement frequency based on the nature of the information being measured and the speed with which information users can act on findings. To connect advertising investments to brand and advertising awareness, for example, ad expenditures by week and media outlet are juxtaposed with weekly awareness levels in the marketplace; in this and similar cases, frequent measurement and reporting best informs business decisions. Other situations call for less frequent measurement. Monitoring relational customer loyalty, for instance, often requires less frequent measurement because managerial actions take longer to implement, and the subsequent changes in performance, along with opportunities for customers to experience those changes, require more time; semi-annual or annual measurement might be most appropriate. In sum, selecting the right interval between measurements helps optimize how research dollars are allocated.
- Choose sample sizes that ensure appropriate precision levels. Adequate statistical precision means that the estimates provided to business managers can reliably detect changes over time. In situations where a few percentage points make a big difference to high-stakes decisions, tracking studies must deliver high precision (e.g., +/-3 percentage points); an example would be the use of a customer loyalty measure to determine whether managers receive bonuses. Other situations require less precision because the stakes are not as high and managers only need to understand general patterns; some general brand awareness and usage trackers might fall into this category. Because precision levels are partially driven by sample size, and data collection tends to be the costliest phase of survey research, researchers should select precision levels appropriate to each particular business situation. (A brief sample-size sketch follows this list.)
- Keep the survey focused on the information business managers need in order to make good decisions. Invariably, researchers are pressured to include in tracking surveys elaborate and often redundant attribute batteries, along with exhaustive sets of questions only loosely related to the core objectives of the measurement program. Even when surveys are launched with few or no extraneous questions, “nice to know” items tend to accumulate, forcing respondents to endure lengthy, often meandering surveys. Such surveys can lower response rates and produce data generated by bored, frustrated respondents who disengage midway through. By keeping the survey focused on the main business objectives and the related decisions to be made, researchers enhance the quality of the information obtained and reported from tracking programs.
- Use statistical significance testing appropriately. Because technology enabled it long ago, researchers sometimes cannot resist the urge to test every measure against its value from the previous measurement period. This misstep often produces reams of significant differences, some reflecting true change and some reflecting an annoying byproduct of significance testing: spuriously significant differences that correspond to no true change in the population. At a 90 percent confidence level, 10 percent of tests will flag a significant difference even when no true change has occurred. Superior significance testing practices do exist and should be followed for trackers. First, perform omnibus tests (e.g., ANOVA, MANOVA) to determine whether overall change is present - that is, change across a set of measures such as brand image ratings. Second, consider adjusting confidence levels to account for family-wise error rates; the appropriate adjustments depend on whether the significance tests were planned and on the degree of overlap within the set of tests performed, but even simply using a 95 percent rather than a 90 percent confidence level reduces the incidence of spurious significant differences (the simulation following this list illustrates the problem and one adjustment). Finally, make significance testing hypothesis-driven; do not simply “compare everything to everything” to see if some comparison will “pop.” By definition, one will, but it might not be a result that is meaningful in the marketplace.
- Interpret findings in conjunction with relevant organizational initiatives. Most organizations have at least six to eight internal initiatives under way at any given time. For example, a company might invest in a six-month training program designed to enhance the use of a new CRM system. Another initiative might focus on supplementing large-scale advertising campaigns with tie-ins to local community events. In addition, major clients might be invited to company headquarters for the unveiling of a new product line. While these activities might seem unconnected to the results reported from tracking programs, each of them could affect how the tracking information is interpreted. Therefore, researchers and end users alike must interpret tracking data within the broader organizational context.
- Interpret findings within the broader competitive and environmental context. No organization operates in a vacuum. Actions by existing competitors and the emergence of new ones change how companies should allocate resources. In some industries, government regulations create boundaries within which organizations reach and serve customers. Furthermore, economic and sociopolitical trends change how consumers are influenced and, ultimately, the choices they make. Without knowledge of the environmental context, researchers and business managers will likely misattribute shifts in measures such as brand awareness, customer loyalty and transaction satisfaction. While many reports from tracking programs include little or none of this broad environmental information, such context is necessary for proper interpretation of tracker data.
- Weigh the value of changes to the sample frame, survey item wording, scaling, etc., against the loss of comparability to previous measurement waves. For nearly every survey-based tracking program, there comes a time when information users desire a change in the measurement system; for some organizations, the content and structure of the survey instrument might be in a state of nearly continual change. Legitimate causes for requested changes often exist, such as shifts in business strategy, changes in the competitive landscape, and efforts to improve alignment between survey data and other operational or financial data. However, modifications are sometimes requested for more trivial reasons, such as a new information user having a preference for a seven-point rather than a five-point scale to measure the believability of an ad. Simply put, most methodological changes reduce the comparability of data across measurement periods, and reduced comparability diminishes one’s ability to determine whether movement over time reflects true change or is merely an artifact of the change in methods. While political considerations drive some research design changes, and while some methodological changes can be partially “adjusted out” analytically through parallel testing (see the parallel-testing sketch following this list), the value of each change must be weighed against the loss of comparability in interpreting data patterns over time.
- Fully understand how findings from the tracker will be used within the organization. Tracking programs that truly shape a company’s strategy and tactics start with the end in mind. That is, how information will be distributed and used must be established early, and those decisions should drive research design choices, specifically those related to sampling, survey construction, and data analysis and reporting. Early qualitative research lets researchers hear what business managers need from tracking programs in order to improve the quality of their marketing and operational decisions. In addition, shadowing one or more information users can help researchers understand how tracking results can improve how companies operate. Thus, while the methods differ across situations, researchers must understand how decision makers and decision influencers use tracking information to achieve their business objectives.
- Hold periodic program reviews. To the extent that daily management of tracking programs involves many discrete activities and many individual changes to the design, it sometimes becomes difficult to envision the broader purpose of the research. In addition, different client contacts and information users have different ways of operating, and shifts in personnel often require rearranging priorities and changing communication approaches. Accordingly, stakeholders should hold periodic program reviews wherein researchers, decision influencers and decision makers jointly evaluate various elements of the tracking program. Topics for review can include core design components, as well as the “softer” aspects of implementing large-scale survey research. These softer aspects can include how results are positioned, how and when project team members communicate and the overall health of the professional relationships among project team members. Holding a program review annually can be a worthwhile investment to ensure the success of survey-based tracking programs.
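To make the relationship between precision and sample size concrete, here is a minimal sketch in Python (the function name, confidence levels and margins are illustrative assumptions, not part of any particular tracking program) that computes the sample size required for a target margin of error on a proportion, using the standard normal approximation.

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_margin(margin, confidence=0.95, p=0.5):
    """Sample size needed so a proportion estimate has the given margin
    of error (half-width of its confidence interval).

    Uses the normal approximation n = z^2 * p * (1 - p) / margin^2,
    with p = 0.5 as the most conservative (largest-n) assumption.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z value
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Illustrative targets: a high-stakes tracker vs. a general awareness tracker
print(sample_size_for_margin(0.03))                    # +/-3 points, 95%: 1068
print(sample_size_for_margin(0.05, confidence=0.90))   # +/-5 points, 90%: 271
```

The roughly four-fold difference in required sample size between these two scenarios is exactly the cost trade-off described above: tighter precision is purchased with more interviews per wave.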
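To see why unadjusted wave-over-wave testing produces spurious “significant” differences, the hypothetical simulation below draws two waves from identical populations for 20 measures, tests each at a 90 percent confidence level, and then repeats the tests with a Bonferroni family-wise adjustment. All parameters (20 measures, 500 respondents per wave) are illustrative, and Bonferroni is only one of several possible adjustments.

```python
import random
from math import sqrt
from statistics import NormalDist

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for H0: p1 == p2 (pooled two-proportion z-test)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(7)
n, measures, alpha = 500, 20, 0.10            # illustrative tracker settings
true_p = [random.uniform(0.2, 0.8) for _ in range(measures)]  # no real change

flagged = flagged_bonferroni = 0
for p in true_p:
    wave1 = sum(random.random() < p for _ in range(n))  # wave 1 "yes" count
    wave2 = sum(random.random() < p for _ in range(n))  # wave 2, same population
    pval = two_prop_pvalue(wave1, n, wave2, n)
    flagged += pval < alpha
    flagged_bonferroni += pval < alpha / measures       # family-wise control

print(f"Spurious significant changes at alpha = 0.10: {flagged} of {measures}")
print(f"After Bonferroni adjustment: {flagged_bonferroni} of {measures}")
```

Even though nothing changed in the population, the unadjusted tests can be expected to flag about two of the 20 measures as “different”; the adjusted tests rarely flag any.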
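To illustrate how parallel testing can partially “adjust out” a methodological change, the sketch below assumes a hypothetical transition wave in which parallel samples rate ad believability on both the old five-point and the new seven-point scale. The response data and the simple mean-offset bridge are illustrative assumptions, not a prescribed method; in practice the adjustment would be estimated on far larger samples and validated across waves.

```python
from statistics import mean

def rescale(score, points):
    """Map a score from a k-point scale onto 0-100 for cross-wave comparison."""
    return (score - 1) / (points - 1) * 100

# Hypothetical parallel-test wave: parallel samples answer the believability
# item on the old 5-point scale and on the new 7-point scale.
old_scale_wave = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]   # illustrative responses
new_scale_wave = [6, 7, 4, 5, 6, 7, 3, 5, 4, 7]

offset = (mean(rescale(s, 5) for s in old_scale_wave)
          - mean(rescale(s, 7) for s in new_scale_wave))

def bridged_mean(new_wave_scores):
    """Mean of a future 7-point wave, shifted onto the historical trend line."""
    return mean(rescale(s, 7) for s in new_wave_scores) + offset

print(f"Estimated method offset: {offset:+.1f} points on the 0-100 metric")
print(f"Bridged score for a later wave: {bridged_mean([5, 6, 7, 4, 6, 5]):.1f}")
```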
Not that simple
At first glance, a tracking program might seem to be one of the easier forms of survey research: design an approach and keep repeating it. However, anyone who has implemented tracking programs knows it is not that simple. An abundance of operational details, paired with the many forces that mandate changes in the measurement approach from period to period, continually challenges the team of researchers running a tracker. The best practices described here can help those researchers maintain the validity and usability of these all-important tracking programs.
References
1 This percentage represents the sum of all customer and employee satisfaction measurement, awareness and usage tracking, and advertising and brand tracking reported for 2006 in Inside Research, February 2007, Volume 19, Number 2, Issue 224.