One of my goals for this post is to get as many people in my audience as possible to read it all the way through, and better yet, for some of them to actually find it useful!
But when I sat down to write this post, I couldn't decide which headline would work best: Split Test or A/B Test. A/B Testing is more commonly used in the marketing world and would appeal to more readers; however, Split Testing is the specific name of Facebook's testing capability, and using it could lead readers interested in that functionality to adopt it in Kenshoo.
I felt that if I had a tool to tell me which headline is more appealing to my audience (a tool I could rely on and whose results I could be confident in), my decision would be easier, and it would save me time writing this post.
Why are we talking about A/B testing ads?
Advertisers face the same dilemma in every campaign they run. To reach a decision, they A/B test two or more ads with different creative elements. But are these tests reliable? While the ads are tested to see which works best for the target audience, a critical element is overlooked: audience overlap.
What is the challenge with A/B testing ads?
Imagine a campaign promoting hotels with two groups of images: one with a price overlay and one without. Unsure which image type works better, a marketer would probably run an A/B test to compare the performance of the two.
These two campaigns would probably target the same audience, since the comparison would be weak otherwise, and that is exactly the problem!
While doing so, the two campaigns are competing for the same audience. What if a person clicks on an ad without the price, but was previously shown the ad with the price? Which image is better? And what if a person clicks on the first ad but not the second? Is that because of the price, or because of the order in which the ads were delivered?
How does split testing help?
Split testing solves exactly this problem: it splits the overlapping audience between Facebook campaigns or ad sets. With this capability, no person in the audience is ever shown competing ads, which makes the comparison between campaign performance meaningful and gives a strong indication of which variant works better for that audience.
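Facebook handles the split internally, but the core idea, assigning each person to exactly one variant, can be sketched with a simple deterministic hash-based split. This is an illustrative sketch only; the function names and user IDs below are made up, and this is not Facebook's actual mechanism:

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to one variant.

    Hashing the user ID (rather than randomizing per impression)
    guarantees each person always falls into the same bucket, so no
    one is ever shown ads from both variants of the test.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Hypothetical audience: every user lands in exactly one bucket,
# and repeated calls always agree.
audience = [f"user-{i}" for i in range(1000)]
group_a = [u for u in audience if assign_variant(u) == "A"]
group_b = [u for u in audience if assign_variant(u) == "B"]
```

Because the assignment depends only on the user ID, the two groups are disjoint and stable for the lifetime of the test, which is exactly the property that makes the comparison reliable.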
Split testing is now available in Kenshoo and allows reliable testing of ad attributes (e.g. comparing the many permutations created in the ad creation flow, or comparing different bid types).
Stay tuned for my next post on best practices for A/B testing!