Make Creative Testing Part of Your Process
Meta's creative testing tool launched in 2025, but most advertisers either aren't using it or completely misunderstand how it works. The old approach of creating separate campaigns and ad sets for testing has serious flaws that this tool solves. Jon explains why the creative testing tool should be part of your process, how to use it in stages instead of testing 20 ads at once, and why it gives you data to make informed decisions instead of guessing based on gut feel.
The creative testing tool needs to be part of your process.
Meta rolled out the creative testing tool in 2025, and it’s one of my favorite new things. But not enough advertisers are using it, and I think many of those who are misunderstand how it works.
How the creative testing tool works
If you’re not familiar with the creative testing tool, here are the basics of how it works.
When creating or editing an ad, there’s a button to start a new test. You can then test up to 5 ads (or possibly 10, depending on the version you have). You can’t test existing ads, for whatever reason, so you’ll create these ads from scratch.
You’ll indicate the amount of your existing budget that will be spent on these ads during the test. So if you have a $100 daily budget for your campaign or ad set, you might dedicate $20 of it per day to testing.
Then you choose how long the test will run and the metric that will be used to determine success. While the test runs, budget is spent evenly across the ads. It’s your typical A/B test, so there’s no audience overlap between them.
Why this tool is better than traditional testing
I’m a huge fan of this tool, and I think it’s far superior to the approach we’ve taken historically to testing.
Most advertisers will create a separate campaign for creative testing. They’ll even create separate ad sets for each ad to control the spend. And once they’ve uncovered a winner, they’ll move or duplicate that ad to the main ad set.
The problem with that approach is that there’s no guarantee an ad that does well in testing will do well once it’s moved. I also try to limit extra campaigns and ad sets, and this approach obviously requires more moving parts.
One of the advantages of the creative testing tool is that you can keep your ads in the existing ad set.
When the test is complete, Meta will stop forcing budget to the ads that were part of the test. Maybe distribution will then happen as you’d expect based on the test results, and maybe not. But the key is that you have data from that test to make an informed decision.
Otherwise, there’s a lot of guessing from advertisers who stop and start ads based on gut feel about what’s working and what isn’t.
How I use the creative testing tool
So let me explain how exactly I use this tool.
When I create a new ad set, it starts with a test. I don’t immediately publish 20 or 30 ads. I begin with a small handful, somewhere between two and five. Since there aren’t any other ads in the ad set yet, the test takes up the entire budget.
I watch those results closely, knowing that performance is never optimal during a test. I also make sure that the test runs long enough and uses enough budget to generate meaningful results.
If I’m testing five ads at $20 per day to sell something, I’m unlikely to get the volume necessary to learn anything meaningful. Because of the forced even budget split and the elimination of audience overlap, testing is inefficient, but we do it for a reason.
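If it helps to see why, here’s a rough back-of-envelope sketch in Python. The seven-day test length and the $30 cost per purchase are hypothetical assumptions for illustration, not numbers from Meta.

```python
# Back-of-envelope check on whether a test budget is big enough.
# The test length and cost per purchase are hypothetical assumptions.
daily_test_budget = 20          # dollars per day dedicated to the test
num_ads = 5                     # ads in the test, budget split evenly
test_days = 7                   # assumed test length
assumed_cost_per_purchase = 30  # hypothetical cost per purchase

spend_per_ad = daily_test_budget / num_ads * test_days
purchases_per_ad = spend_per_ad / assumed_cost_per_purchase

print(f"Spend per ad over the test: ${spend_per_ad:.2f}")      # $28.00
print(f"Expected purchases per ad: {purchases_per_ad:.2f}")    # about 0.93
```

Under those assumptions, each ad gets roughly one purchase over the entire test. That’s not enough signal to call any of them a winner with confidence.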
When the test is complete, Meta then optimizes delivery to get you the most results. That may mean prioritizing one or two ads and ignoring the others. If you feel like the distribution is completely wrong based on test results, you can make adjustments.
From there, I let the ads run without a test for at least a week or so. If I’m getting great results in aggregate, I don’t touch anything. I don’t turn anything off or create new ads.
But if the results are bad or just okay, it’s time to move to the next phase.
Within that same ad set, I start another test. This time, I test ads that are completely different from those that are already running. New visual style, new messaging angles, new everything.
This is a key element to creative diversification. While you could do this by creating a bunch of ads in the beginning, I prefer to do it in stages. It allows me to learn from the results of the first batch before I create the next one.
So I run that next test of the new ads in the same ad set. Once again, I watch it closely, learn from the results, and know that performance won’t be optimal. But I have an expectation of whether any of these ads will be seen as winners outside of the test.
When the test is over, I watch how Meta distributes budget. Once again, I have data from the test that can inform my decisions.
If an ad killed it during the test but Meta barely shows it afterward, I can consider pausing the ads Meta favors to force delivery to that winner. But that’s only if the results aren’t great in aggregate. Otherwise, I remain hands off.
The bottom of the glass
So here’s the bottom of the glass.
The creative testing tool is a huge part of my advertising process. This is how I introduce any new ads. It gives me data so that I can make informed decisions rather than guessing.
But I only start a new test when my results could be better. And I only turn ads off to force Meta to show another ad if performance isn’t good in aggregate, and I have the data from a test to inform that decision.
Have purpose behind your testing. Plan ahead, knowing that you’re generating data you would otherwise wish you had later.
Because an intentional approach to testing like this one is far smarter than randomly punching buttons based on gut feel.