A Lesson from the 2016 Presidential Election

The era of Big Data is upon us. From Google Analytics to social media monitoring to data warehousing and mining, we as marketers are now told we had better pay attention and let our actions be guided, if not dictated, by “Big Data”.

All well and good, right? After all, who can argue with data? It sits there in black and white, or in the pretty colors of a pie chart, bar chart, heat map, or radar chart, for all to visually drink in. When you make a point in a presentation, paper, or case, having data that backs up your argument, claim, or theory lends it immediate legitimacy, not to mention a prop of confidence for you. In short, having data, and the bigger the better, makes us feel “safe” and secure in what we tell others to be true.

But… have we come to the point where this Big Data has become a crutch… something we have come to rely on all too quickly at the expense of reason? Has the “bigness” of today’s data troves overwhelmed us to the point where we immediately trust the “findings” of what the researcher or presenter is telling us? Could there be something we are still possibly missing?

Based on the misses of the pollsters in the recent presidential election, one has to seriously wonder. Were they unknowingly asking the wrong questions of the respondents? Did they get lazy and accept “undecided” instead of prompting further to get to what the respondent was unwilling to share? Did they fail to “peel away the layers of the onion” to see what was really inside?

One could conclude a lot of things based on the pollsters’ inaccurate depictions of how the election would play out state by state. But the underlying lesson here is this: when you put your trust, and sometimes your future, on the line, how confident do you need to be that the data you are relying upon won’t collapse beneath you like a house of cards?

Are there specific questions you need to ask yourself before assuming that your data is of substance, and meaningful, instead of simply comforting? Because accepting data merely because we “want it to be true” doesn’t make it so. This is called the Confirmation Trap, and it is certainly one of the biggest problems we as marketers see in our line of work. So, with that said, here are some heuristics (rules of thumb) that can help you avoid misinterpreting your data, or worse, completely missing the boat in a very costly fashion:

1. First things first… define what you want to learn and make sure your research actually answers that question. Let’s rewind to the New Coke debacle: Had the Coca-Cola researchers asked “Why do you prefer Coke over Pepsi?” instead of “Which of these blind samples has more flavor?”, they could have avoided a lot of heartache. They would have uncovered that people drank Coke mainly for the nostalgia it represented… going to town with Grandma when I was four and getting treated to a fountain Coke at her favorite diner. Or always picking up the shiny Coke holiday packaging at the store with the Sundblom Santas on it just to see Mom’s face light up when I brought it home. And to think they thought it was all about how their sugar water tasted!

2. Am I asking the right people? Sampling is crucial. In elections, you cannot just poll those who live in cities and skip those in rural areas, or you flirt with the same type of misses we just witnessed on a national scale in the 2016 presidential election. If you are uncomfortable, or just plain too lazy, to venture out of your comfort zone and seek out respondents who may hold a different view, or who simply don’t live along your metro line, your results will most certainly be skewed. Talk to the wrong people and you will get the wrong answers.
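The sampling point above is easy to demonstrate with a quick simulation. The sketch below uses a made-up population (all numbers are hypothetical, purely for illustration) where urban and rural voters prefer different candidates; polling only the easy-to-reach urban group badly misstates overall support, while a sample drawn from the whole population lands close to the truth.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 voters: 60% urban, 40% rural,
# with different candidate preferences in each group (illustrative only).
population = (
    [("urban", "A")] * 45_000 +   # urban supporters of candidate A
    [("urban", "B")] * 15_000 +   # urban supporters of candidate B
    [("rural", "A")] * 12_000 +   # rural supporters of candidate A
    [("rural", "B")] * 28_000     # rural supporters of candidate B
)
# True overall support for A: (45,000 + 12,000) / 100,000 = 57%

def support_for_a(sample):
    """Fraction of a polled sample that backs candidate A."""
    return sum(1 for _, vote in sample if vote == "A") / len(sample)

# Biased poll: only urban respondents are reached (75% of them back A).
urban_only = [p for p in population if p[0] == "urban"]
biased_poll = random.sample(urban_only, 1000)

# Representative poll: 1,000 respondents drawn from the whole population.
representative_poll = random.sample(population, 1000)

print("True support for A:       57.0%")
print(f"Urban-only poll estimate: {support_for_a(biased_poll):.1%}")
print(f"Representative estimate:  {support_for_a(representative_poll):.1%}")
```

Run it and the urban-only estimate sits near 75% while the representative one hovers near the true 57%: the same arithmetic, wildly different conclusions, purely because of who got asked.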

3. Design your research to include not just quantitative data, but qualitative data to fill in the gaps as well. In the case of the election results, a good number of media experts mentioned how the winning side had spent time at local eateries and coffee houses, simply talking with people to get a sense of how they felt about things. When we fall in love with our clean-cut phone surveys and rely on search data to tell us what people are thinking, we may very well be missing the boat entirely. Qualitative research is messy and hard, but it provides a solid reality check to keep things real.

4. Always, always, always guard against bias. The first thing you should ask yourself, or ask of those conducting the research for you, is this: “What’s in it for them?” or “What result would I like to see emerge from this research?” Consider: what do you think pharmaceutical companies that have invested millions in the development of a new drug want to see come out of their studies? Stellar results in their efficacy and safety trials, of course. When you fall victim to the Confirmation Trap, you pay attention only to results that align with your wishes or beliefs. What do you think a marketing company that happens to own a direct mail operation tells its clients about their market? Why, of course, that their best customers fall in the 50–75 age demographic. Why? Because that is the demographic that still replies to direct mail! You must guard against the Confirmation Trap at all costs.

5. Remember, above all, that Big Data is nothing but numbers. Until you apply the human element… the analysis of what you are looking at, you have nothing but, well, numbers.

Keep these things in mind and use data to your advantage, but do not become enslaved to it. Don’t just jump on the Big Data bandwagon; instead, pump the brakes, evaluate, and try to poke holes in your own position to see if it still floats. Stay real, and you may avoid what could become, in the end… a Big Disappointment.