Editor's note: Mark Travers is an account executive at Burke, Inc., a Cincinnati-based research firm. 

Raise your hand if this situation sounds familiar:

You contract with a quality market research supplier for a new segmentation project. With the help of your supplier, you write a great questionnaire that captures all facets of the consumer decision-making journey. You field your survey to include the most relevant consumers. You then work with your supplier to identify the segmentation solution that most closely aligns with your marketing objectives. You create segment personas to share with other business units in your organization. Finally, you receive a typing tool that allows the segmentation to live on, aiding in qualitative recruitment and in future quantitative studies. 

So far, everything looks great. Your stakeholders are happy. You give yourself a big pat on the back for your expert stewardship of a high-profile and important research project.

Then, just when you think the project is in the books, your qualitative lead e-mails you to say they are having some issues with the typing tool. Specifically, they are worried about its validity in assigning qualitative recruits to the appropriate consumer segment.

The first thing you do is go back to the typing tool and peruse it for any potential problems. You take the assessment. Based on all that you have learned about the segments, you’re almost certain that the typing tool will identify you as Segment X. Instead, it types you as Segment Y.

Now you are starting to panic. Is the typing tool flawed? Or, worse, is the whole segmentation somehow miscalculated? You contact your supplier to ask them to double-check the numbers in the typing tool. They assure you that it is working fine and that it is normal for there to be some error in the predictions.

You relay this information to your qualitative lead – but they push back, saying they are still having a difficult time constructing segment-specific focus groups based on the output of the typing tool.

What should you do? Can you salvage the typing tool or not? Is this indicative of a deeper problem with the segmentation?

More common than you think 

Believe it or not, this situation is more common than you think. The good news is that there is a fix for it – and it’s something that can be retrofitted into any existing typing tool to improve its classification accuracy.

To explain why this problem is a common one, and how to address it, let’s add some context to the example provided above. Imagine you are an insights manager at a large clothing manufacturer and your segmentation was performed on blue-jeans wearers. With your segmentation research, you have identified four distinct blue-jeans consumer segments: Rough and Relaxed, Hip and Trendy, Casual Professional and Value-Seeking.

As stated earlier, your segment personas were well-received by your stakeholders. The problem is that the typing tool is lacking in face validity. For instance, knowing what you know about the consumer segments, you would expect yourself to fall into the Casual Professional segment. The typing tool, however, identified you as a Rough and Relaxed segment member. Your qualitative lead is experiencing similar issues, which is making focus group recruitment difficult.

Why might this be happening? First, let’s assume that this was an attitudinal segmentation, meaning that people’s attitudes toward blue jeans served as the basis for clustering. For example, the Rough and Relaxed consumer segment might hold the belief that blue jeans are primarily meant to be durable, comfortable and versatile. The Casual Professional segment, on the other hand, may be more likely to agree with statements such as, “I view blue jeans as an alternative to dressy work attire.”

Attitudinal segmentations can be contrasted with behavioral and demographic segmentations. As the name implies, a behavioral or demographic segmentation uses behavioral or demographic markers as the basis for clustering. For instance, a behavioral segmentation might split blue-jeans wearers into low-, medium- and high-frequency purchasers. Or, a demographic segmentation might group consumers into Millennials, Gen Xers and Boomers based on age. In these cases, there is no need for a typing tool – you can assign people to their segment with 100 percent accuracy simply by knowing how often they purchase blue jeans or how old they are.
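
To make the contrast concrete, a behavioral or demographic assignment is just a deterministic rule. Here is a minimal sketch in Python; the purchase-frequency cutoffs are hypothetical, chosen only for illustration.

```python
# Deterministic behavioral assignment: segment membership follows directly from an
# observed behavior, so no statistical typing tool is needed. Cutoffs are hypothetical.
def assign_behavioral_segment(jeans_purchases_per_year: int) -> str:
    if jeans_purchases_per_year <= 1:
        return "Low-frequency purchaser"
    if jeans_purchases_per_year <= 3:
        return "Medium-frequency purchaser"
    return "High-frequency purchaser"

print(assign_behavioral_segment(4))  # -> High-frequency purchaser
```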

Attitudinal segmentation, however, is a common approach in segmentation research; it is often preferable to the alternatives because it can add a deeper layer of consumer understanding. But it also makes creating a typing tool a delicate task. No longer are we dealing with 100 percent classification accuracy; rather, we are inferring people’s segment membership from their responses to attitudinal questions. And we know from a litany of basic psychological research just how fluid people’s attitudes can be.

To muddy the waters even further, not only are we inferring people’s segment membership from their responses to attitudinal questions, we are doing so using fewer questions than were used to create the original segmentation. Fewer questions mean less information, and less information reduces prediction accuracy even further.

The standard approach to maximizing prediction accuracy while keeping the typing tool as short as possible is to identify the subset of questions that are most predictive of segment membership. We then plug those questions into our typing tool and use the tool as a short-form instrument to make segment assignments in the future. But here’s the rub: Because our segmentation was an attitudinal segmentation, it is the attitudinal questions that are most likely to be predictive of segment membership and thus included in the typing tool. A typing tool built on attitudinal questions alone, though it maximizes accuracy, may fall short of telling the whole story by ignoring important demographic and behavioral information.
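
For the technically inclined, here is a minimal sketch of what a short-form typing tool of this kind might look like, written in Python with scikit-learn. The file name, column names, number of retained items and model choice are all hypothetical placeholders, not a description of any particular supplier’s method.

```python
# Illustrative short-form typing tool: keep the attitudinal items most
# predictive of segment membership, then fit a classifier on that subset.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# One row per respondent from the original segmentation study (hypothetical file/columns):
# "att_01" ... "att_40" are attitudinal ratings, "segment" is the final cluster assignment.
df = pd.read_csv("segmentation_respondents.csv")
attitudes = df.filter(like="att_")
segments = df["segment"]

# Keep the 10 attitudinal items that best discriminate between segments.
selector = SelectKBest(f_classif, k=10).fit(attitudes, segments)
shortform_items = attitudes.columns[selector.get_support()]

# The short-form typing tool: predict segment membership from those items alone.
typing_tool = LogisticRegression(max_iter=1000).fit(df[shortform_items], segments)

# Later, classify a new respondent who answered only the short-form questions:
# predicted_segment = typing_tool.predict(new_df[shortform_items])
```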

Luckily, there is a solution: append to the existing typing tool a second prediction algorithm based solely on behavioral and/or demographic information. Working together, these two prediction algorithms will provide exactly the information you need to increase the typing tool’s face validity.

To see why this is the case, let’s return to the blue-jeans example. Naturally, in any attitudinal segmentation, some behavioral and demographic differences tend to emerge between the segments. For instance, it is likely that the Rough and Relaxed segment skews a bit more male, blue-collar and older while the Casual Professional segment skews a bit more female, white-collar and younger. These aren’t hard and fast rules; they’re more like broad-stroke trends. Importantly, they’re probably trends you’ve used to bring your segment personas to life. So, when you’re a woman who works in marketing research and the typing tool flags you as a Rough and Relaxed segment member instead of a Casual Professional, that raises a red flag.

The truth is that there is probably a fair amount of attitudinal overlap between the Rough and Relaxed and Casual Professional segments, given that both view blue jeans as an integral part of their work attire. So it’s not all that surprising that the tool might misidentify some people in these segments. However, it is the demographic and behavioral data that can be used to break the tie, so to speak. Had your typing tool utilized both attitudinal and demographic data when making its prediction, odds are it would have correctly identified you as a Casual Professional instead of a Rough and Relaxed segment member.
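
Continuing the sketch from above (and assuming hypothetical, numerically coded demographic and behavioral columns), one simple way to implement this is to fit a second classifier on the demographic/behavioral variables and blend its segment probabilities with those of the attitudinal typing tool. The model choice and the blending weight are illustrative; the point is only that both sources of information contribute to the final assignment.

```python
# Illustrative second algorithm: a classifier fit on demographic/behavioral
# variables, whose segment probabilities are blended with the attitudinal tool's.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical, numerically coded demographic/behavioral columns.
demo_cols = ["age", "gender_code", "occupation_code", "jeans_purchases_per_year"]

demo_model = RandomForestClassifier(n_estimators=200, random_state=0)
demo_model.fit(df[demo_cols], segments)

def blended_segments(new_df, att_weight=0.6):
    """Blend attitudinal and demographic/behavioral segment probabilities.

    Both models were trained on the same 'segments' labels, so their classes_
    arrays (and probability columns) line up. The 0.6 weight on the attitudinal
    model is an illustrative choice, not a recommendation.
    """
    p_att = typing_tool.predict_proba(new_df[shortform_items])
    p_demo = demo_model.predict_proba(new_df[demo_cols])
    p_blend = att_weight * p_att + (1 - att_weight) * p_demo
    return typing_tool.classes_[p_blend.argmax(axis=1)], p_blend
```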

Using behavioral and/or demographic data in a second prediction algorithm is especially important for your qualitative recruitment sessions. For instance, when you are recruiting for, say, the Rough and Relaxed consumer segment, you don’t want to recruit people who are on the fringe of the segment because they can dilute the opinions of the segment. Rather, you want to put together a group of people who exist at the center of that segment – people who can be thought of as segment archetypes and can speak to the core complexion of the segment.
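
Carrying the same hypothetical sketch one step further, the blended probabilities give you a natural way to screen recruits: keep only the candidates the combined model assigns to the target segment with high confidence. The 0.70 cutoff below is purely illustrative; in practice you would pick a threshold that balances recruit quality against recruiting feasibility.

```python
# Illustrative recruitment screen: among candidate recruits, keep only those the
# blended model assigns to the target segment with high confidence ("archetypes").
# recruit_df is a hypothetical DataFrame of short-form answers plus demographics.
labels, p_blend = blended_segments(recruit_df)
confidence = p_blend.max(axis=1)

target_segment = "Rough and Relaxed"
archetypes = recruit_df[(labels == target_segment) & (confidence >= 0.70)]
```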

Save you some headaches

By deploying this second prediction algorithm to work in tandem with the existing algorithm, you can be sure that you are identifying consumers who match attitudinally and demographically/behaviorally to their assigned segment. In the end, you have a typing tool that is more face-valid – which might save you some headaches as you share results with others in your organization.

Segmentation is as complex a research problem as it is important. And there are many right (and wrong) ways to tackle it. The scope of this article focuses specifically on cases where typing tools lack face validity after an attitudinal segmentation is complete. Adopting best practices from start to finish can help avoid this situation altogether and can ensure a successful segmentation research project that lives throughout your organization.