When Are Ad Results Actually Meaningful?

When you "let results be your guide," make sure those results actually matter. Jon explains why advertisers often make bad decisions based on meaningless data and outlines four critical factors that determine when your metrics are truly worth acting on.
On the last episode, I talked about letting results be your guide when it comes to making changes to your advertising. Now, I get questions like: How often should I update creative? Should I restrict by age or gender? Should I use lookalike audiences or go broad? And on and on and on. My answer: don't rely on advice that attempts to frame an approach based on universal truths.
Don't assume you should do one thing or another. Let results be your guide — your results that are unique to you. If you're getting great results, don't mess with it. If you're no longer getting good results, it's probably time to make a change.
But it's important we apply a caveat here. When you let results be your guide and make changes based on them, those results need to actually be meaningful in the first place.
Here's what I mean:
1. The volume of results needs to be meaningful. I see this far too often. An advertiser is running ads for a few days or a week at a modest budget, optimizing for a purchase. They have separate ad sets for Advantage+ audience and lookalike audiences. They get 10 purchases using Advantage+ audience and 15 from lookalike audiences.
They then declare that lookalike audiences work better than Advantage+ and turn off the Advantage+ ad set.
Now, ignore for a moment the fact that the algorithm likely reached many of the same types of people in either case. The volume of results is not meaningful enough to learn anything definitive.
And if you're thinking that an A/B split test would fix that, I have an example for you. I once ran a split test of three identical ad sets that highlights this problem. Meta declared that one ad set was a clear winner, generating 25% more conversions than one of the other ad sets.
But since everything about the ad sets and the ads within them was identical, we know these results were random. We couldn't say it was because of the targeting or optimization or copy or creative. But we make decisions like this based on 25% differences all the time.
We assume performance differences are caused by the differences in targeting or creative, but that may not be the case. It could be correlation, not causation.
At some point, the number becomes meaningful. But where is that point? Instead of giving you a definitive number, I'd look at the stability and predictability of your results from day to day and week to week. Could a single day of results change your interpretation?
In this example with 10 and 15 conversions, a single day could completely transform your interpretation. So make sure the volume is meaningful.
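To see how easily small numbers mislead, here's a rough simulation (all figures are invented for illustration, not from any real campaign) of two ad sets with an identical true conversion rate. It counts how often random noise alone manufactures a "winner" with 25% or more conversions over a one-week run:

```python
import math
import random

random.seed(42)

# Assumed scenario: two identical ad sets, each averaging 2 conversions
# per day, run for a week. These numbers are hypothetical.
TRUE_DAILY_MEAN = 2.0
DAYS = 7
TRIALS = 10_000

def simulate_period(mean, days):
    """Sum of daily Poisson draws (Knuth's method, stdlib-only)."""
    threshold = math.exp(-mean)
    total = 0
    for _ in range(days):
        k, p = 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                break
            k += 1
        total += k
    return total

false_winners = 0
for _ in range(TRIALS):
    a = simulate_period(TRUE_DAILY_MEAN, DAYS)
    b = simulate_period(TRUE_DAILY_MEAN, DAYS)
    hi, lo = max(a, b), min(a, b)
    # Count trials where pure chance produced a "25% better" ad set.
    if lo > 0 and hi >= 1.25 * lo:
        false_winners += 1

print(f"{false_winners / TRIALS:.0%} of identical pairs showed a 25%+ 'winner'")
```

At this volume, noise alone produces a 25%+ gap a substantial share of the time, which is exactly why 10 vs. 15 conversions tells you almost nothing.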
2. What you focus on needs to be meaningful. Some advertisers are overly focused on metrics like click-through rate, cost per click, CPM, and others. These metrics should not be your foundation for judging performance. You shouldn't stop an ad set or choose an ad based on these secondary metrics.
While they help tell a story, they aren't your bottom line. Your focus should be on cost per action, which is hopefully a conversion of some kind. It might also be return on ad spend.
Click-through rate, for example, isn't predictive of either of those metrics. You can get a high CTR that doesn't lead to conversions. You might also get a lower CTR that sends qualified traffic that is more likely to convert. The same is true of focusing too much on CPM.
Focus on results based on meaningful metrics when making decisions.
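The arithmetic behind this is simple. Here's a quick sketch, with hypothetical numbers, of an ad that "wins" on CTR while losing badly on cost per purchase:

```python
# Hypothetical ads illustrating why click-through rate isn't predictive
# of cost per action. All numbers below are made up for illustration.
ads = {
    "Ad A": {"spend": 500.0, "impressions": 50_000, "clicks": 1_500, "purchases": 10},
    "Ad B": {"spend": 500.0, "impressions": 50_000, "clicks": 600, "purchases": 25},
}

for name, m in ads.items():
    ctr = m["clicks"] / m["impressions"]          # secondary metric
    cpa = m["spend"] / m["purchases"]             # bottom-line metric
    print(f"{name}: CTR {ctr:.1%}, cost per purchase ${cpa:.2f}")

# Ad A wins on CTR (3.0% vs 1.2%), but Ad B wins where it counts:
# $20.00 per purchase vs $50.00.
```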
3. The quality of results needs to be meaningful. Let's assume you're comparing two ads when optimizing for leads. You've got meaningful volume to make a judgment. Ad A generates 1,000 leads and Ad B generates 800 while spending the same amount. Since Ad A generated more leads, do we declare it the winner and turn off Ad B?
Not necessarily. Quality matters too.
Maybe Ad A was more persuasive, which led to more leads. But Ad B did a better job of attracting the right people, which resulted in more qualified leads. You could run into similar issues with purchase optimization. One ad might generate more sales at a lower cost per purchase, but the other might generate a higher return on ad spend.
So make sure that the quality, not just the quantity, of your results is meaningful.
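A quick sketch shows how the math flips once quality enters the picture. The lead counts come from the example above; the qualification rates are assumed for illustration:

```python
# Hypothetical comparison: Ad A gets more leads, but Ad B attracts
# better-qualified ones. Qualification rates are assumed numbers.
spend = 2_000.0  # same spend on each ad (assumed)
ad_a = {"leads": 1_000, "qualified_rate": 0.10}
ad_b = {"leads": 800, "qualified_rate": 0.20}

for name, ad in (("Ad A", ad_a), ("Ad B", ad_b)):
    qualified = ad["leads"] * ad["qualified_rate"]
    print(f"{name}: ${spend / ad['leads']:.2f} per lead, "
          f"${spend / qualified:.2f} per qualified lead")

# Ad A: $2.00 per lead, but $20.00 per qualified lead.
# Ad B: $2.50 per lead, but $12.50 per qualified lead.
```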
4. The incrementality of results needs to be meaningful. Sometimes results can be misleading, especially when running ads for a sale. You might have one ad set using remarketing with a crazy 20x ROAS and another going broad with a more modest 4x. Is the remarketing ad set more effective?
Not necessarily. Incrementality is important.
In other words, how many of your conversions wouldn't have happened if people hadn't seen your ad?
You can argue that your ads influence conversions in remarketing, but there are times when that isn't the case. Some view-through conversions are completely worthless. Even with click-through conversions, your ads may not have been required to get the sale.
This isn't to say that remarketing makes no impact at all, but be very skeptical of your results. Don't make decisions based on surface-level reporting. If you have incremental attribution (which is rolling out now), focus there, or on 1-day click when comparing performance.
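One rough way to reason about this: discount each ad set's reported ROAS by an assumed incrementality factor, the share of conversions that wouldn't have happened without the ad. The factors below are invented for illustration; in practice they would come from a lift test or incremental attribution:

```python
# Hypothetical sketch: adjust reported ROAS by an assumed
# incrementality factor. Both factors are made-up numbers.
ad_sets = {
    "Remarketing": {"reported_roas": 20.0, "incrementality": 0.15},
    "Broad": {"reported_roas": 4.0, "incrementality": 0.80},
}

for name, m in ad_sets.items():
    incremental_roas = m["reported_roas"] * m["incrementality"]
    print(f"{name}: reported {m['reported_roas']:.0f}x -> "
          f"incremental {incremental_roas:.1f}x")

# Under these assumptions, the "crazy" 20x remarketing number shrinks
# below the modest 4x broad number once non-incremental conversions
# are stripped out.
```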
Bottom of the glass: It's very easy to fall into the trap of making bad decisions based on results that simply aren't meaningful. Make sure the volume is meaningful. The metrics you focus on are meaningful. The quality of results is meaningful. The incrementality of results is meaningful.
Truthfully, some or all of these factors are rarely meaningful when advertisers make judgments about performance. The typical advertiser doesn't spend enough money, or creates too much complexity, and then obsesses over things like copy and creative variations when the volume just isn't there.
Focus on meaningful results. Create less complexity to help generate meaningful results. And don't obsess over the small stuff that simply doesn't mean much.