In a packed auditorium on a sunny Seattle morning, there was a palpable buzz in the air. Three veteran SMX Advanced speakers were getting ready to share their insights during a panel entitled, “Getting Creative With Ad Copy & Testing.”
The session, which attracted PPC marketers in droves (there was standing room only at the back!), was described as follows:
Creating successful ads used to be as simple as writing a great headline and two lines of text. Now, with so many extensions, device adaptations, dynamic text, and other components, creating ads can seem more like an engineering challenge than a creative one. In this session, we’ll show you how to develop great ads that are equal parts creative genius and technical savvy, and prove it through testing.
From Mad Libs to selling snow to Eskimos to insightful case studies, this panel brought it all. The Q&A section was also very lively, bringing the house down at the end.
The panel, moderated by Matt Van Wagner, consisted of:
Up first was Marty Weintraub, who asked the audience, “What can you really say in the first 25 characters?” If we look at the SERPs today, we often see five different headlines that all essentially read the same.
We’re all trying to balance compelling copy, Quality Score, and keyword optimization — admittedly a tough ask for 25 characters. As marketers, how can we cut through the clutter to really stand out, apart from just spending more money?
Our creative has to be truly stunning to stand out; we’re dealing with crowded pages, competing against emotional messages as well as multi-channel conversion paths.
We can’t be happy with 1%-2% click-through rates (CTRs); Marty says we should be aiming for at least 5%.
Marty cautioned that, when thinking of headlines, don’t think in full sentences or in the form of finished ads. Rather, think just in terms of parts of a sentence or different hooks and concepts, so you don’t limit or put undue pressure on the brainstorming process.
To illustrate this point, he shared a tongue-in-cheek example of how to sell snow to Eskimos. When brainstorming ideas here, one hook would be to use sex to help sell snow:
With this plain language theory in mind, the final packaged ad could read:
Marty pointed out that one couldn’t think like this from the start. We needed to ideate some broader concepts and longer sentences first, which could then be distilled down into an ad.
Starting with statements and plain language theories, which will then be massaged into the final product, is the approach he recommends.
Marty had originally written about these constructs in his book, Killer Facebook Ads, and he presented an updated version here.
Marty shared his key takeaways for the audience:
His parting advice was that we can’t have all the SERPs looking the same, so we shouldn’t be afraid to test (even testing the ads in different positions) and stand out.
Up next was Steve Hammer, who showed us how ad copy is very similar to a Mad Lib and explained why we should take this approach to ad testing.
Steve explained that keyword insertion can be an issue since it can cause all the ads to read the same. He shared an example to demonstrate:
Not only do all the ads read similarly, this approach exposes the strategy and makes them very generic. That’s not enough to stand out. Instead, he encouraged us to focus on a process he called “Ad Madlibs.”
Steve shared the three options advertisers have to edit and customize ad copy, along with their strengths and drawbacks.
For his talk, Steve homed in on customizers. Here’s an example from Google of ad customizers in action:
The Basic Elements Of Customizers
Steve believes that customizers grew out of shopping feeds and explained that thinking of them in this way can make them easier to break down. Here’s the list of elements:
Steve touched upon the different ways he recommends using ad customizers:
City Insertion. This helps personalize the ad for the searcher and allows the advertiser to simplify the process of adding custom elements for each geo location across multiple ads.
For example, if a hotel wanted to most effectively advertise their room specials across multiple cities, then ad customizers would be very effective. Here’s how they’d look:
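The slide from the session isn’t reproduced here, but the mechanics can be sketched in Python. The feed name `HotelSpecials` and its attribute values below are hypothetical examples, not from the talk; the `{=Feed.Attribute}` placeholder style mirrors the AdWords ad customizer syntax:

```python
# Sketch: how a city-insertion customizer might resolve at serve time.
# Feed name "HotelSpecials" and all values are hypothetical examples.

hotel_feed = [
    {"Target location": "Seattle", "City": "Seattle", "Price": "$89"},
    {"Target location": "Portland", "City": "Portland", "Price": "$79"},
]

# Ad template using AdWords-style {=Feed.Attribute} placeholders
template = "Hotel Deals in {=HotelSpecials.City} - Rooms from {=HotelSpecials.Price}"

def render(template, row, feed_name="HotelSpecials"):
    """Substitute each {=Feed.Attribute} placeholder with the row's value."""
    out = template
    for attr, value in row.items():
        out = out.replace("{=%s.%s}" % (feed_name, attr), value)
    return out

for row in hotel_feed:
    print(render(row=row, template=template))
# e.g. "Hotel Deals in Seattle - Rooms from $89"
```

One ad template paired with a feed row per city is what saves the advertiser from writing (and re-approving) a separate ad for every location.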
Big Keywords or Mismatched Intent. A drawback of keyword insertion is the character limits: no more than 25 characters in a headline, and no more than 35 in the display URL or description lines.
If an advertiser has a longer keyword such as +minimally +invasive +spinal +surgery, keyword insertion simply can’t fit it into the space. The phrase would have to be shortened, which can create a mismatched-intent problem: people looking for regular spinal surgery could be shown the ads.
Customizers, which aren’t bound by these tight limitations, allow us to show ads that more accurately match intent when searchers use longer keyword phrases. To clarify, the ad that’s shown still can’t exceed the character limits, but the customizer itself can hold longer keywords to match and replace.
Thus, for the example above, the final keyword shown could be “Back Surgery” or “Spinal Surgery,” while the intent stays intact for someone searching for minimally invasive spinal surgery.
Keyword Insertion for Misspellings. Traditional keyword insertion doesn’t allow for misspellings to be used within the ads. However, customizers do allow for misspellings to be used if needed.
While Steve admitted that search volume for misspellings has decreased, since Google now suggests the proper spelling, he mentioned that the tactic can still prove lucrative in some cases.
Localize Language. Ad customizers also make it easy to hyper-target and personalize ads for specific locations and attributes at the same time. Steve’s example for a home contractor targeting bathroom remodels was:
Here, the common local term for “bathroom” could be used for each country, along with the correct regional spelling of “color,” customizing the ad not only for region but also for local language.
Countdowns. A fantastic use for ad customizers is to set up countdowns to add a sense of urgency to an ad around a specific event. Whether it’s a time-sensitive promotion, an event, or something else, ad customizers can insert the time left before a promotion or event ends.
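In AdWords, the served syntax is along the lines of {=COUNTDOWN("2015/07/01 00:00:00")}. As a rough sketch of the effect (the dates and wording below are invented for illustration), the inserted text is just the time remaining before the event:

```python
# Sketch of what a COUNTDOWN customizer inserts: the time remaining
# before an event end date. Dates and copy are invented examples.
from datetime import datetime

def countdown_text(end, now):
    """Return a rough 'time left' string like the served ad would show."""
    remaining = end - now
    if remaining.days >= 1:
        return "%d days" % remaining.days
    hours = remaining.seconds // 3600
    return "%d hours" % hours

end = datetime(2015, 7, 1)
now = datetime(2015, 6, 28, 12, 0)
print("Sale ends in " + countdown_text(end, now))  # Sale ends in 2 days
```

The switch from days to hours as the deadline approaches is what creates the escalating sense of urgency in the ad.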
Two further advanced tips Steve shared:
When using ad customizers, Steve cautioned the following:
Generally, if you make one change to an ad in AdWords, it gets treated as a new ad, and its history and past performance data are lost. Steve pointed out that this is where ad customizers have an advantage.
Making small changes to an ad (such as the price or duration) will still preserve all the data, and it will not be treated as a new ad. In fact, changes can be made as frequently as needed and the rest of the ad will be preserved.
Steve then shared an example of a type of test that can be set up using ad customizers:
When measuring performance of the tests, Steve recommended adding the following elements to the reports: items, keywords and groups, as well as extensions and dimensions.
All of these can still be reported on and measured in the same way as everything else, such as against metrics like clicks or conversions.
Steve recommended starting with a two-column CSV file, with the first column being the header and the second covering the series of attributes.
A few ways to use the data here would be:
Thus, an example of this in action would be:
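The example slide isn’t reproduced here, but a minimal two-column feed might look like the following sketch. The column names and values are illustrative assumptions, not from the talk (real AdWords business-data feeds type their headers, e.g. “Price (price)”):

```python
# Sketch of a small two-column customizer feed written as CSV.
# Column names and values are illustrative assumptions.
import csv

rows = [
    ["Target keyword", "Price (price)"],   # header row
    ["seattle hotel deals", "$89"],
    ["portland hotel deals", "$79"],
]

with open("hotel_feed.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

A file like this is what gets uploaded to the Business Data section and then referenced from the ads.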
To find the ad customizer section within AdWords, go to Shared Library; it will be under Business Data. Once the feeds have been imported, they can be modified within AdWords.
Steve’s biggest takeaways for the audience were:
The final speaker on the panel was Brad Geddes. Brad started off by talking about the flaws with current A/B testing practices.
Testing in single ad groups can only tell us how creative performs across one set of keywords, but cannot tell us how creative can work across multiple ad groups. Rolling out winners from a single ad group test across multiple ad groups can be a costly mistake, since there’s no guarantee that they’d behave the same way.
Instead, it’s far better to take concepts and test them across multiple ad groups at the same time.
Multi-Ad Group testing can effectively provide insights such as:
Brad cautioned the audience to remember that, when reporting on data, average data is useless. Data can only really be valuable when broken down further into actionable segments. Some examples of data segments include:
With this in mind, Brad shared his process for approaching brainstorming and setting up multi-ad group tests.
When starting to ideate, Brad recommended taking brainstorms and listing out ideas for testing, such as:
Then, prioritize the ideas, take a few of them and test them out across multiple ad groups to prove them out across different segments.
For example, you could have 3 ideas being tested via 3000 ads, with consistent ads across segments. The structure could be:
Brad’s tip for reporting was to add proper labels for each idea and then analyze the data either manually in Excel via pivot tables or with software (such as AdAlysis, I am assuming).
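Brad’s label-then-pivot analysis can be sketched with plain Python. The labels and numbers below are invented for illustration; the point is that performance rolls up by idea, not by individual ad:

```python
# Sketch: aggregating labeled ad-test data the way a pivot table would.
# Labels and figures are invented for illustration.
from collections import defaultdict

ads = [
    {"label": "idea-price",    "clicks": 120, "impressions": 4000},
    {"label": "idea-price",    "clicks": 90,  "impressions": 3500},
    {"label": "idea-shipping", "clicks": 200, "impressions": 4200},
    {"label": "idea-shipping", "clicks": 180, "impressions": 3800},
]

# Roll clicks and impressions up to one row per idea (label)
totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
for ad in ads:
    totals[ad["label"]]["clicks"] += ad["clicks"]
    totals[ad["label"]]["impressions"] += ad["impressions"]

for label, t in sorted(totals.items()):
    ctr = t["clicks"] / t["impressions"]
    print("%s  CTR: %.2f%%" % (label, ctr * 100))
```

With thousands of ads per idea, this per-label roll-up is what lets a single concept be judged across many ad groups at once.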
When beginning work with one client, Brad read through all the customer reviews in detail to help in the brainstorming process. His advice: look at all the five-star or highest reviews and see what common elements stand out.
In this example he shared, he noted that all five-star reviews consistently mentioned the words “rush shipping.” However, the ads only mentioned “free shipping.” Brad found this to be an opportunity worth testing, to see if having “rush shipping” instead of “free shipping” could be more effective.
The results of this test were:
The lift in average order value was dramatic. However, Brad pointed out that if something is this different, then there has to be something more at play apart from just the ad. Further segmentation was needed to really understand what caused such a large lift.
When drilling deeper into customer segments, Brad noticed that the B2B section responded very well to the “Rush Shipping” message, and they were the segment that tended to have higher average order values.
“Rush Shipping” lost out to “Fast and Free Shipping” in the B2C category, so there was this difference in performance between the segments that was masked in the overall test results.
Next, Brad identified which products were more in demand for the B2B segment and which for the B2C segment, and the ads were targeted to these different audiences accordingly. Through this segmentation, he was able to find new creative testing opportunities.
Another round of ad testing followed, to test offering a discount on the products vs. free shipping. Even though with the discount applied the purchasers would have saved more money, the concept of free shipping won out in the end.
Once the ads were tested and running effectively across the segments, Brad decided to test reinforcing these messages on the landing pages.
After testing multiple phrasings of the same message through multiple button iterations, he settled on a winner that resulted in a 7% lift in conversion rate as well as a 9% lift in revenue.
Brad noted that outliers from previous ad copy tests proved effective when repurposed as ad extensions.
A B2B client of Brad’s was not doing well on mobile at all, so Brad worked to create some tests to lift the conversion rates on mobile, which included options for redesigns of mobile landing pages.
This is how Brad structured the test:
The results were outstanding, with mobile conversion rate now at 7.28% (a lift from ~1% originally). He noted this with a caveat that not all B2B companies will find mobile success, especially based on what most mobile searches are likely to look for:
Brad’s recommendations and findings for testing mobile ads included:
Some additional ideas for testing that Brad shared were:
Brad’s takeaways for the audience were:
Some highlights from the Q&A at the end were:
Q. What are the main metrics the panel recommends for testing?
A. Brad recommended testing for revenue per impression or conversions per impression, adding that he tends to look for the ads with the highest click-through rate and highest revenue per impression.
Marty agreed, saying conversions per impression is what his team targets, too, usually with a filter metric such as target CPA (cost-per-acquisition) to track against.
Steve agreed and added that we shouldn’t throw away any data that taught us something.
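The per-impression metrics the panel favors can be sketched as follows; all figures are invented for illustration:

```python
# Sketch of the per-impression metrics mentioned by the panel.
# All figures are invented for illustration.

ad = {"impressions": 10000, "clicks": 300, "conversions": 150, "revenue": 7500.0}

ctr = ad["clicks"] / ad["impressions"]                        # 0.03
conv_per_impression = ad["conversions"] / ad["impressions"]   # 0.015
revenue_per_impression = ad["revenue"] / ad["impressions"]    # 0.75

print(ctr, conv_per_impression, revenue_per_impression)
```

Dividing by impressions rather than clicks rewards ads that both attract the click and convert it, which is why the panel prefers these composite metrics for picking winners.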
Q. How does the panel recommend getting buy-in for testing?
A. Brad said we need to win approval with numbers, not opinions; in other words, we have to make the case using solid data. Ask the powers that be if they’d be willing to risk $10,000 to try to make millions.
Marty agreed, saying he explains to clients that marketing is a capital investment. We learn something from every test, and those learnings can be used to generate more revenue.
Q. Can the panelists share their most memorable ad test?
A. Brad started off by sharing a time when they launched an ad that accidentally had the second description line left as “Test, Test, Test.” It turns out that ad became a huge winner! For the next few months, they purposely tried “mistake testing,” wherein there would be a small error within the ad (such as “you’re” vs. “your”), and those ads all had very high response rates.
Marty shared some ads his team wrote for a client that sold a popular, violent video game. His bold ad copy stated, “3 out of 5 homicidal terrorists love this game.” They also had a display URL that ended with “StickEmUp.” This ad went on to have a 16% click-through rate!
Steve mentioned a time when they ran a brutally honest ad. It mentioned the competitor and noted that the competitor was very good and also cheaper than their client. This ad ended up performing incredibly well for their client.