Editor's note: James Rohde is founder of James A. Rohde Consulting, a Pittsburgh research firm.

There are all kinds of methods to help tackle questions about overall brand health. These critical tactics give us a holistic view of what is happening in our markets and shed some light to help navigate our paths. At this level, though, what we're looking at could be thought of as Google driving directions with the map zoomed out all the way. We may be able to see where we want to go and the general direction we need to take to get there, but not many of us could actually get in the car and start making our way without a more zoomed-in viewpoint.

Just like in our Google directions example, we usually have all the information we need to get more actionable directions; we just have to do some fine-tuning. Almost every reputable brand equity methodology collects respondent-level data: ratings or rankings of the various attributes used to make our measurements. The caveat, of course, is that your ability to take specific action will depend on the attributes that have been used.

So for the sake of the process, we’ll assume that our attributes are workable and that we have seen our zoomed-out strategy and we are at the moment of “Now what?”

Fall victim

It is very easy to overlook everything you already have at hand to answer that question. This is where we can fall victim to a twist on a classic idiom. Where “can’t see the forest for the trees” is a more typical dilemma, we now “can’t see the trees for the forest.” Remember that we already have all the ratings and opinions of specific attributes, at a respondent level, for all the brands addressed in our study.

But alas, we run into all kinds of problems with attribute comparisons between brands when we’re actually trying to make specific improvements. For one thing, we have the halo effect, which makes things very difficult to interpret. Then we have attribute effects, which typically compound the problem.

For the sake of clarification, I'm using the term halo effect to mean the propensity for respondents to rate attributes of a brand more positively or more negatively based on their overall brand preference rather than strictly on the performance of the brand. For example, HTC may receive higher ratings than Motorola across all attributes even when prices between the two are about the same.

When I refer to the attribute effect, I'm talking about the inclination for respondents to rate something more positively or more negatively based on their feelings about the attribute itself rather than how the brand performs. For example, quality scores may be higher than price scores simply because respondents like "price" less as an attribute, regardless of the brand being rated.

Rid ourselves of the noise

Fortunately these problems can be overcome. By “centering” the data, as shown in Table 1, we can make our comparisons and rid ourselves of the noise that clouds the analysis.

Table 1 is meant to represent a crosstab as-is from a brand study. We see that Brand A rates highly as a whole, followed by C then B. If we did some stat testing we'd get some confirmation that the larger numbers are indeed larger and we could call it a day. Unfortunately, we also know this is not really going to do much for us. What if we are Brand A? According to this table, we are pretty much leaving the competition in the dust and should have no worries.

We can see that we're rated more positively than either Brand B or Brand C on everything. We also see that Brand C is generally rated more positively than B across the board. Does that mean we can honestly say that Brand B is doing nothing right? How are they in business?

Just to play devil's advocate, in Table 2, let's take a look at these figures accounting for the brand halo effect. In order to account for the halo effect, we have taken each of the ratings and subtracted the brand's mean score from each of the attributes. Already we're seeing some changes in what we may walk away with but before we get too excited, look at what is going on with the mean attribute scores.
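As a rough sketch of that halo adjustment in pandas, with made-up numbers standing in for Table 1's actual figures (which aren't reproduced here), the calculation is simply a column-wise centering:

```python
import pandas as pd

# Hypothetical ratings table (attributes as rows, brands as columns).
# These figures are invented purely to illustrate the calculation;
# they are not the numbers from Table 1.
ratings = pd.DataFrame(
    {"Brand A": [8.1, 7.0, 7.9, 6.5],
     "Brand B": [5.5, 5.0, 5.2, 6.8],
     "Brand C": [7.2, 6.1, 7.5, 5.9]},
    index=["Quality", "Price", "Friendly staff", "Fast checkout"],
)

# Remove the halo effect: subtract each brand's mean score
# (its column mean) from every attribute rating for that brand.
halo_adjusted = ratings - ratings.mean(axis=0)
print(halo_adjusted.round(2))
```

After this step each brand's adjusted scores average to zero, so what remains is how each attribute performs relative to that brand's overall halo.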

If we take this at face value, variety of product is pretty messy for everybody and friendly staff is just about level. Let's also take a look at the price vs. quality figures. I'm sure it's not much of a surprise that price is rated more negatively than quality. After all, respondents are rating something they receive vs. something they have to give. Again, taking this at face value, there is not much to say except to once again note the polarized relationship between these two variables.

Right now, we are seeing that generally, there are just some attributes that get higher ratings and others that get lower ratings. With this being the case, how do we address actual performance on the attributes that we're trying to measure? If we are to give some honest direction as to where our opportunities fall, we need to account for this by doing to our rows what we did to our columns (Table 3). Take note that the mean attribute scores that we are subtracting are the ones that were recalculated after we accounted for the halo effect.
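The full two-step adjustment can be sketched the same way, again with hypothetical figures standing in for the real tables. Note that the attribute (row) means being subtracted are the ones recalculated after the halo adjustment, matching the order of operations described above:

```python
import pandas as pd

# Hypothetical ratings (attributes as rows, brands as columns);
# invented figures standing in for Table 1.
ratings = pd.DataFrame(
    {"Brand A": [8.1, 7.0, 7.9, 6.5],
     "Brand B": [5.5, 5.0, 5.2, 6.8],
     "Brand C": [7.2, 6.1, 7.5, 5.9]},
    index=["Quality", "Price", "Friendly staff", "Fast checkout"],
)

# Step 1: remove the halo effect (subtract each brand's column mean).
halo_adjusted = ratings - ratings.mean(axis=0)

# Step 2: remove the attribute effect, subtracting each attribute's
# row mean as recalculated AFTER the halo adjustment.
double_centered = halo_adjusted.sub(halo_adjusted.mean(axis=1), axis=0)
print(double_centered.round(2))
```

The result is a double-centered table: every row and every column averages to zero, so each cell now reads as a brand's relative strength or weakness on that attribute, free of both effects.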

Get specific

Now we’re starting to understand who is doing what well. We have taken the general ratings associated with our brands and attributes and focused on the variance more than the ratings themselves. This allows us to get specific about things that could be improved.

It’s quite a different picture now that we’ve eliminated some of the noise surrounding our ratings. It feels like just moments ago we at Brand A were declaring ourselves invincible and now? (Sigh.) Okay, so this is just as inappropriately dramatic as our declaration of victory before. What we are seeing are the things that are fueling our brand halo and where improvements could launch us to the next level.

For example, a friendly staff appears to be doing wonders for Brand C. Also, we notice that our product variety appears to work in our favor when influencing people to recommend us. On the other hand our slow checkout is leaving people unsatisfied with their experience. How was Brand B able to stay in business? Well, there appears to be a market of people who just want to get in and out of the store.

How about the eternal struggle between price and quality? Our table shows us that we at Brand A are actually in pretty good shape and have an acceptable value equation. Brand B also seems to be doing OK in this territory but Brand C, our closest competitor in this example, appears to be perceived as more costly than its quality deserves. Just using our original table, I'm not so sure we would have zeroed in on this. Imagine the campaigns that this type of information may inspire!

Provide the context

While we have gone to some lengths to rid ourselves of the halo and attribute effects, my intention is not to say that they are unimportant and should be completely dismissed. These effects – particularly the halo effect – are findings that provide context and help set the goals that offer the most opportunity. However, in order to actually reach those goals, we have to get out of the theoretical and into reality so that we set objectives that can actually be reached.