Background
Constant testing and trials in marketing campaigns are vital to highly optimized performance. They help us understand which platforms, audiences, creative, and back-end development should receive focused attention and budget.
The more we test our strategies through trial and error, the higher our confidence in recommending winning campaigns. We can analyze standard campaigns and observe which changes make the greatest impact; however, to be certain those observations are significant enough to validate a change in campaign practices, we suggest submitting campaigns to a controlled experiment such as a split, or A/B, test.
A split, or A/B, test pits one concept against the same concept with a single variable changed and compares their performance. Tests can be run on social or digital channels. The winning variation can then be carried into future campaigns as a best practice.
Best practices
Split testing divides your audience into random, non-overlapping groups. Each group is then shown one of two ads that are identical except for a single differentiating variable. The key to these tests is that the results are evaluated for statistical significance.
When we claim a result is statistically significant, we're claiming that it can likely be attributed to one specific cause rather than to chance. Tests should seek a high degree of statistical significance: a high level of confidence that the results occurred because of the change in variable and not by chance.
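To make "not because of chance" concrete, here is a minimal sketch of the kind of check a team might run on split-test results: a two-proportion z-test in Python. The function names and conversion counts are hypothetical, not taken from any specific campaign or ad platform.

# Minimal sketch (hypothetical numbers): two-proportion z-test for a split test.
from math import sqrt, erf

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

def split_test_p_value(conv_a, n_a, conv_b, n_b):
    # Two-sided p-value for the difference in conversion rates between groups.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se                                # standardized difference
    return 2 * (1 - normal_cdf(abs(z)))

# Hypothetical example: version B converts 280 of 10,000 vs. version A's 220 of 10,000.
p_value = split_test_p_value(220, 10_000, 280, 10_000)
print(f"p-value: {p_value:.4f}")  # under 0.05 here, so the lift would be treated as significant

A common convention is to call a result significant when the p-value falls below 0.05, which corresponds to roughly 95% confidence that the observed difference is not due to chance.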
Performing a high-quality and accurate test requires the following best practices:
Who should consider a split test?
Split tests should be considered by brands that run similar, recurring campaigns that could benefit from data-backed best practices. These brands have run variations of the tested variable in the past in an uncontrolled environment and can form a sound hypothesis, and they have processes in place to implement the test's learnings in future campaigns. A split test should not be conducted without a written hypothesis.
Split tests are recommended for campaigns that will run longer than one month and have enough testing power to capture sufficient results within a one-month period. Campaigns that would need more time to gather enough results should consider a larger budget or a different key test metric.
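As a rough illustration of what "testing power" means in practice, the sketch below uses the standard two-proportion sample-size formula to estimate how many impressions each group would need, and how long that would take at a given traffic level. The baseline rate, expected lift, and daily traffic are assumptions for the example, not benchmarks.

# Minimal sketch (hypothetical numbers): per-group sample size for a split test,
# using the standard two-proportion formula at 95% confidence and 80% power.
Z_ALPHA = 1.96  # two-sided 95% confidence
Z_BETA = 0.84   # 80% power

def sample_size_per_group(baseline_rate, expected_lift):
    # Approximate impressions (or sessions) needed in each audience group.
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((Z_ALPHA + Z_BETA) ** 2 * variance) / (p2 - p1) ** 2

# Assumed: 2% baseline conversion rate, aiming to detect a 25% relative lift,
# with roughly 1,500 impressions per group per day.
n = sample_size_per_group(baseline_rate=0.02, expected_lift=0.25)
print(f"~{n:,.0f} impressions per group, ~{n / 1_500:.0f} days at current traffic")

If the estimated duration runs well past a month, that is the signal to raise the budget (more daily traffic per group) or switch to a higher-volume key test metric, as noted above.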