Tuesday, January 06, 2015
Predictive Metrics Only Work If They Actually Predict Something!
If that sounds self-evident, before you go, “well, duh,” think about this: Many companies talk the talk about “prediction” and “predictive metrics” and how everything from neuroscience measures to keeping tabs on tweets, scoring social network shares, and listing ‘likes’ are predictive and can accurately forecast consumer behavior. But, alas, ultimately they don’t do the required walk. It turns out that making predictions tends to be a far more popular pastime than actually checking on their accuracy. Few researchers or marketers put their predictions to the test, so we decided to do something about it.
The sixth annual Brand Keys What Happened? audio recording series, which addresses predictive brand, marketing, and advertising metrics, was posted this week. This year’s review focuses on over a dozen categories including technology, pizza, ride-sharing apps, fast food, fast-casual food, retail, phablets, and coffee. The predictive review also examines consumer market effects related to emotional engagement values like patriotism, luxury, and innovation.
What Happened? examines predictions made about brands like Apple, Google, Louis Vuitton, Uber, McDonald’s, Harley-Davidson, Sears, Starbucks, Disney, Domino’s, Jeep, Coca-Cola, Pizza Hut, Samsung, Hermès, and Chipotle, and reviews the accuracy of those predictions by examining what actually happened in the marketplace. The free recordings may be found at brandkeys.com/what-happened.
The predictive metrics were extracted from Brand Keys’ 2014 Customer Loyalty Engagement Index, an assessment based on 32,000 consumers covering 64 B2B and B2C categories and 600 brands, and the review covers predictions made as early as the first week of January 2014. The engagement and loyalty process allows marketers to measure what is going to happen, because the approach measures the real emotions and expectations attached to brands and associated marketing and advertising efforts on a scalable basis.
These metrics identify the really important emotions, expectations, and category shifts 12 to 18 months ahead of traditional research, and well ahead of MRI-driven neuroresearch, which, for some reason, marketers seem to believe delivers super-charged research insights into emotions connected to categories and brands. Maybe it’s all the lights and colors. Storytelling and entertainment are all well and good, but they aren’t adequate substitutes for engaged consumers, sales, or brand profitability. Prediction done this way is remarkably less risky: it is 100% consumer-driven, requires no wires, and produces validated metrics that point to what people will actually do, instead of what they say they are going to do. And it provides marketers with the answer to the ultimate question: “What’s going to happen to my brand?”
The thing is, a cursory review of claims about traditional survey-based brand research, and even copy testing, might lead one to believe that these approaches work well at predicting how different strategies and campaigns will affect sales. We disagree. If their predictions are so good, why are marketers unable to accurately predict brand trajectories, especially when they suffer losses of sales and customers? Why don’t research results match actual market results? Why does that “predictive” research allow brands to fail?
The new generation of market researchers may have forgotten, but back in the late ‘60s marketers tried the same thing with galvanic skin response measures. They didn’t light up, but they did have wires, and it turned out they were interesting, but not predictive. Lab tests are all well and good, and we applaud those who push the boundaries of marketing science, but it’s important not to allow enthusiasm for chasing the newest shiny thing to get in the way of accurate prediction.
Winston Churchill once noted, “No matter how beautiful the strategy, you should occasionally look at the results.” Today he’d need to add, “no matter how beautifully told the story,” “how cinematographically filmed,” “how many tweets and views were received,” or “what part of the brain lit up.” If nothing else, we hope our own review will inspire marketers to demand more from their research, and maybe do a little digging into their own “predictive” metrics, comparing what was promised with what actually transpired. Because there’s a big difference between truly predictive metrics and interestingly collected data, and it’s worth marketers’ time to note the distinction.
You’re betting your brands’ futures on it.