The Best Ways to Test Facebook Creative

Stuart McMullin
4 min read · Dec 5, 2020

As an advertiser, what is Facebook trying to accomplish for you? The answer is that it's designed to predict who your next customer will be based on whatever pre-determined objective you set. With that in mind, creative is one of the most important variables the machine learning (ML) relies on to work effectively. Creative tells the story of the value your brand offers users and drives many of the signals the ML uses.

Product marketers, creatives, and media buyers are constantly working together to find incremental and exponential value adds for their products and to build concepts that can effectively showcase those values within the powerful walled garden that is Facebook.

The question is, how can advertisers effectively test creative? I've spent years pondering the answer. With a suite of complex and powerful tools, you can deploy creative at scale and, with time (and money), develop a relatively good understanding of what works and what doesn't.

That said, over the years I've heard differing versions of how media buyers execute creative tests. Feel free to share your thoughts in the comments.

Dynamic Creative Optimization

Better known as DCO, this tool lets you give Facebook a number of variants across copy (including headline and description) and media. Applied at the campaign level, this allows far more combinations than a single ad unit could. Simply put, it's a powerful tool that lets you test a vast array of creative variants, making it the perfect route if you want to test at scale.

Toggle on at the campaign level

How does it work?

Once you supply all the media, primary text, headlines, descriptions, and calls to action, Facebook will ingest those variables and form different combinations, each treated as a completely different ad unit. While in the interface it looks as if you have one ad, within the auction you have significantly more.
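To make the combinatorics concrete, here is a minimal sketch in plain Python (with hypothetical asset names, not anything pulled from Ads Manager) of how a handful of assets fans out into dozens of distinct ad permutations:

```python
from itertools import product

# Hypothetical asset lists an advertiser might hand to Dynamic Creative.
images = ["lifestyle.jpg", "product_closeup.jpg", "ugc_video.mp4"]
primary_texts = ["Save 20% today", "Loved by 10,000 customers"]
headlines = ["Shop the sale", "Free shipping on every order"]
calls_to_action = ["SHOP_NOW", "LEARN_MORE"]

# Facebook assembles the assets into distinct ad permutations; this mirrors
# that combinatorics: 3 x 2 x 2 x 2 = 24 variants behind one "ad" in the UI.
variants = list(product(images, primary_texts, headlines, calls_to_action))
print(f"{len(variants)} possible combinations")
for media, text, headline, cta in variants[:3]:
    print(media, "|", text, "|", headline, "|", cta)
```

Even a modest set of assets produces a pool of variants far larger than you would realistically build by hand as individual ads.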

Why is this route useful?

Over time, Facebook will surface the optimal combination, effectively increasing your chances of finding one that meets your objectives.

Can you tell the winning combination?

Yes. Within the “Ads” tab, you can see the dynamic creative breakdown.

You can cut the breakdown a number of ways.

Split Test Tool

The Split Test Tool allows advertisers to answer specific questions. While it's often used for A/B testing things like landing pages, it can be used for creative as well.

How does it work?

The tool will test up to five different creative versions and evenly distribute the spend across a single audience, which effectively gives you the cleanest read on each version. During the test, the tool reports on its progress toward statistical significance.

Why is this route powerful?

The tool is best suited for when you have distinctly different creatives and want to understand the differences between them with statistical significance.

Can you tell the winning combination?

Yes, the interface will tell you (even with a nice badge) the winning version and the degree (as a percentage) of confidence that version achieved.
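Facebook doesn't publish the exact math behind that badge, but a head-to-head comparison like this typically reduces to a two-proportion test. Here's a minimal, hypothetical sketch of how a confidence level for a winner could be computed from impressions and conversions when spend is split evenly across one audience; it is not Facebook's internal method, just an illustration:

```python
from statistics import NormalDist

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: rough confidence (as a %) that version B's
    conversion rate is genuinely higher than version A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return NormalDist().cdf(z) * 100

# Hypothetical results: two creatives, equal spend, one audience.
print(f"{confidence_b_beats_a(120, 10_000, 155, 10_000):.1f}% confident B wins")
```

With these made-up numbers the script reports roughly 98% confidence, which is the same kind of figure the tool surfaces next to the winning version.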

Manual Build w/ ABO

Some media buyers prefer a little more control. Much like the A/B Test Tool, you can design your own test without all the gamification and aesthetic features of the tool itself.

How does it work?

In my opinion, you still want to maintain a clean read on each version. Placing one ad per ad set, keeping targeting identical across all cells, and using Ad Set Budget Optimization (ABO) will effectively give you a great read.
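As a rough illustration (plain Python with hypothetical names, not the Marketing API), this is the shape of that build: identical targeting in every cell, exactly one ad per ad set, and equal ad set budgets so each creative gets an evenly funded read:

```python
# Hypothetical creatives and targeting for a manual ABO creative test.
creatives = ["ugc_video.mp4", "founder_story.mp4", "static_offer.jpg"]
targeting = {"geo_locations": {"countries": ["US"]}, "age_min": 25, "age_max": 54}
daily_budget_per_cell = 50  # e.g. $50/day per ad set (ABO, not CBO)

test_plan = {
    "campaign": "Creative Test | ABO | Prospecting",
    "ad_sets": [
        {
            "name": f"Cell {i + 1} | {creative}",
            "daily_budget": daily_budget_per_cell,  # equal budget per cell
            "targeting": targeting,                 # same audience in every cell
            "ads": [creative],                      # exactly one ad per ad set
        }
        for i, creative in enumerate(creatives)
    ],
}

for cell in test_plan["ad_sets"]:
    print(cell["name"], "->", f"${cell['daily_budget']}/day")
```

Keeping every variable constant except the creative is what makes the read clean; the only thing that differs between cells is the ad itself.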

Why is this route useful?

Unlike the A/B Test Tool, you can still get a statistically significant read and simply let the test keep running if it's working. You can easily scale (vertically or horizontally) any ad that is performing well.

Once you have a final read on performance, you can group the winners into legacy campaigns or new Campaign Budget Optimization (CBO) campaigns across new audiences while continuing to scale them vertically.

Wildcard: Manual Build w/ CBO

This is likely the dirtiest deployment (in terms of the read you get) but is something to consider. Depending on how many creatives you're looking to test, you can break the deployment out across two buckets: interests and lookalikes.

How does it work?

Take the ads you're looking to test and spread them across a few campaigns tied to audience groupings of lookalikes and interests. Given that I recommend a maximum of 3–5 ad sets per campaign, the number of campaigns will vary depending on how many interests you want to target granularly.
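As a rough sketch (plain Python, hypothetical audience names), here's how that build fans out when every campaign carries the same test ads and audiences are chunked into a capped number of ad sets per campaign:

```python
# Hypothetical test ads and audience buckets for a manual CBO build.
test_ads = ["ugc_video.mp4", "static_offer.jpg"]
audiences = {
    "lookalikes": ["LAL 1% Purchasers", "LAL 3% Purchasers", "LAL 1% Top Spenders"],
    "interests": ["Running", "Yoga", "Home Gyms", "Nutrition", "Marathons"],
}
max_ad_sets_per_campaign = 4  # within the 3-5 range recommended above

campaigns = []
for bucket, names in audiences.items():
    # Chunk each bucket so no campaign exceeds the ad set cap.
    for start in range(0, len(names), max_ad_sets_per_campaign):
        chunk = names[start:start + max_ad_sets_per_campaign]
        campaigns.append({
            "name": f"CBO Test | {bucket} | {start // max_ad_sets_per_campaign + 1}",
            "ad_sets": [{"audience": a, "ads": test_ads} for a in chunk],
        })

for c in campaigns:
    print(c["name"], "->", [s["audience"] for s in c["ad_sets"]])
```

With these hypothetical buckets you end up with one lookalike campaign and two interest campaigns, which is exactly why the campaign count varies with how granular your interest targeting gets.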

Why is this route useful?

In practice, you can test against a lot more audiences that may not have been exposed through the other methods mentioned. With CBO, you can see relatively quickly which audience and creative are performing best.

Happy testing! Thanks for reading :)
