
The causes of ineffective marketing

The evidence problem


In a business world that leans into ideas like ‘data-driven’ and ‘data-led’, marketing needs to make empirically informed decisions about approaches, methods, and tools while employing proven, tested, and validated best practices.


In short, good marketing is evidence-based. It draws on credible data and insight generated using the scientific method: formulate a hypothesis; design a neutral method to test it; analyse the data; and draw a conclusion. It critically assesses new evidence against established benchmarks, and if that evidence is up to standard, the better decisions and practices it points to can be integrated into the marketing programme where appropriate, making it more robust and fit for purpose.

As Ronald H. Coase said: “if you torture the data for long enough, it will confess to anything”. In the current marketing world, you can find evidence to support anything you want. For every article of empirical proof that X is true, there is a counter-article of equally empirical proof that it is false; for every article of empirical proof that Y is best, there is another article of equally empirical proof that Z is in fact best; for every bit of data saying that consumers now do A, there is another proving they still do B. Are attention spans declining, or are people better at filtering out the noise of poor advertising on oversaturated channels? Are television and radio dead channels with no hope of resuscitation, or are they a covert special force that is as strong as it ever was? For every data-driven proclamation of death or decline, there is an equally data-driven rebuttal.


That's because much of it is "evidence": it has the appearance of legitimate research, but it is engineered to deliver a predetermined conclusion. Rather than provide facts or objective insight, it supports an agenda or bias, using questionable methodology, curated data, and manipulated presentation. "Evidence" is generated through the reverse scientific method: decide the desired conclusion, cherry-pick the data that supports it, create a methodology that delivers it, and then pretend there was a hypothesis all along so it can be framed as a legitimate finding.


The methodology can be designed with built-in biases. Sampling bias can be used to increase the chance of a particular response: a creative agency asking creatives to rank the importance of creativity is all but guaranteed to find it placed above everything else. The sample may be unrepresentative, even though the conclusion is asserted to be widely applicable. Quantitative instruments can be built to deliberately steer responses: leading questions, ordering effects, ambiguity.


A common trick is the use of extrapolations, which come in two forms. The first is qualitative extrapolation, where small, industry-specific interests are presented as large, seismic shifts that will change the attitudes and behaviours of the general population. The second is quantitative extrapolation, where temporary, short-term changes in the numbers are presented as a permanent trajectory that signifies a change in the attitudes and behaviours of the general population. Worse still, quantitative extrapolations can be used as evidence for qualitative ones, making both even more unreliable.


The choice of statistic is important. Averages can be misleading, as they are easily distorted by a few outliers; the conclusion may be very different if it were based on the mode. Percentages can strip out absolute scale and skew perception: saying Amazon's ad spend is only 3% compared to the typical 8% obscures the fact that Amazon's spend is far higher in real terms, because Amazon is a much bigger company; 3% of £1 billion is more than 8% of £2 million. Drawing the conclusion that ad spend can be lower across the board from this comparison is dangerous and irresponsible.
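To make the arithmetic concrete, here is a minimal sketch in Python using invented figures (the survey responses are made up; only the 3% of £1 billion versus 8% of £2 million comparison comes from the paragraph above). It shows how a few outliers drag the mean away from the mode, and how a smaller percentage of a bigger base is still the bigger number in absolute terms.

```python
from statistics import mean, mode

# Invented survey responses: most people answer 2, a couple of outliers answer 50 and 60
responses = [2, 2, 2, 3, 2, 2, 50, 60]
print(mean(responses))  # 15.375 -- the outliers drag the average far above the typical answer
print(mode(responses))  # 2      -- the most common answer tells a very different story

# Percentages hide the base they are calculated on
bigger_company_spend = 0.03 * 1_000_000_000  # 3% of £1 billion = £30,000,000
smaller_company_spend = 0.08 * 2_000_000     # 8% of £2 million = £160,000
print(bigger_company_spend > smaller_company_spend)  # True -- the "lower" percentage is the larger spend
```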


Visualisations can be manipulated. Data that would make a chart visibly misalign with the conclusion can be removed: if 65% of respondents answered "I don't use social media", those responses can be dropped from the data set so the chart shows everybody using at least one social media channel. Graphs can have the scales left off their axes, making it hard to tell whether any fluctuations in a line are significant, or the axes can be truncated to cut off the parts that don't fit the desired conclusion.


Conclusions can be framed in a way that hides a more important insight. If X is the current vogue and the conclusion is that X is done by 26%, marketers won't stop to think that Y is done by 74% and may be the better option. There can be selective reporting that changes the implication, such as correlation being presented as causation. There can be a merge-to-hide, combining things so that the shortcomings of one are cancelled out by the excess of the other, making the overall picture look positive.

We've seen the mistakes that "evidence" can lead to.


Netflix are an example of the dangers of extrapolation. They saw an increase in subscriptions during the Covid lockdowns, extrapolated that trajectory into the future, and made several years of financial projections based on it. In the years following Covid they fell short of those projections, not because the product was bad but because the projections assumed the trajectory was a permanent trend rather than a temporary one. They had mistaken a short-term response brought on by environmental and political factors for a fundamental long-term change in consumer behaviour. During the lockdowns, people were subscribing because they were stuck indoors and had to occupy themselves in new ways; when lockdowns ended, so did the surge in sign-ups, and the line reverted to a mean that sat below the projections.


Volkswagen are a good example of data being used selectively to support a narrative. After the emissions scandal they had one of their worst financial years, finishing with extensive losses. This was held up as proof that the purpose=profit equation was true, and that an absence or loss of purpose would be the death of a brand: Volkswagen had not been purpose-led, and now their reputation was so damaged they were in terminal commercial decline. That holds only if you look at the data for a few years in isolation, which contextualises it in a way that fits the purpose-led narrative. The losses had nothing to do with purpose; they were largely the result of legal fees and compensation in the immediate fallout, which hadn't been budgeted for. The pro-purpose marketers stop showing the data at that point, because what follows exposes the fallacy of purpose=profit: within three years Volkswagen posted higher sales than they had in the decade before the scandal, showing that the violation of purpose didn't cause a long-term commercial disaster at all.

Why is "evidence" so readily accepted?


Occasionally clients ask me what the most valuable skill in marketing is, and my answer is one that has disappeared at an alarming rate over the last decade: critical thinking. Faux empiricism has seen it replaced by passive acceptance: sources are not assessed, data is not interrogated, conclusions are not examined. They are just grabbed and regurgitated. Sources like self-published whitepapers and business books do not go through the rigorous independent or peer review process that genuine studies do, so there is no expert validation of the methodology, data, or conclusions before they reach a marketer's hands.


At the individual level, “evidence” tells marketers what they want to hear and shows them what they want to see. In the true sense of the terms, being ‘driven’ and ‘led’ means that decisions are made in response to the data, and the data is used to determine what should be done. In "marketing", the reverse happens: marketers are ‘driven’ or ‘led’ to the data that support decisions that are already made. It can be used to justify biased decisions, short-termism, unnecessary complexity. It can legitimise divisions, rejections, and narcissism. It can be used to support binary thinking, revolutionary zeal, and heroic fantasies.


At the discipline level, it can be used to mitigate the risk of reputational damage. It provides a convenient get-out-of-jail-free card: if results are poor, it is because the data was faulty; marketing was simply 'led' or 'driven' down the wrong path. Marketing did what it was supposed to do: the source is to blame. It offers a way to shift accountability, avoid responsibility, and deflect criticism.


This marriage of convenience leads to a deliberate and wilful lack of due diligence on the data and insights being used. Signs of bias in how the data was acquired and analysed are ignored; manipulation tricks in the presentation are missed; the agendas and vested interests that compromise the integrity of the conclusions are swept under the carpet.


Even high-profile studies can be affected by these factors. Peter Field, a highly respected and trusted voice in marketing, produced a study concluding that purpose-led marketing was highly effective. This surprised a few people, especially when they noticed something interesting: the data seemed to have been selectively chosen. When pushed on this, Peter revealed that the original analysis had shown purpose-led campaigns to be less effective than other approaches, but the financial sponsor of the study, a major multinational company that was a very public supporter, adopter, and evangelist of purpose-led marketing, didn't like that conclusion. They wanted the study to prove that purpose was effective, so Peter was told to curate the data and interpret it in a way that delivered the pro-purpose conclusion the sponsor wanted.


Effectiveness is damaged when marketers base decisions on other people's faulty numbers, analyses, insights, and conclusions, rather than doing their own testing and research and interacting with their own market. When marketers use "evidence", they don't find out what is working for them; they hope or presume that other people's numbers and insights will transfer. They design marketing that appeals to their own biases and preferences, because the confirmation they sought through "evidence" leads them to believe prospects and customers think, behave, and act the same way they do. This misalignment with the market reduces the relevance and resonance of the work, and the lack of appeal is reflected in the lack of action it stimulates.

This is why I try to get any marketers I work with to interrogate the data and insights, to approach them critically and sceptically, and to ask: Who funded it? Does a particular conclusion benefit the source? I try to pass on a great lesson I was taught early: often the information that has been omitted tells you just as much as, if not more than, the information that is there.


Marketing has no shortage of "evidence" to draw on, which raises a further question: where does it come from? To answer that, we need to look at the rise of Marketing™.
