We’ve all heard of the importance of split testing, but oftentimes we are simply too lazy (or do not have the right tool) to run enough split tests. Today I’m happy to invite our guest blogger, Ruben Corbo, a split-testing consultant, to share some useful tips on split testing.
To start, I asked Ruben to share a case study illustrating the importance of split testing, then followed up with a series of questions I had in mind. Below is an organized version of our correspondence. Enjoy.
A case study of how split testing has made a difference
The split testing conducted by SIM-Only.co.uk is one of the most convincing proofs that this marketing tool really helps. The first round of testing tweaked the icons used to present the products: images of phones vs. SIM cards.
In another round of testing, one variant arranged the icons vertically on the right-hand side of the page, while the other arranged them horizontally across the page.
Results showed that the vertical layout of phone images outperformed the other versions: it increased the website’s click-out rate (clicks from the icons through to the landing page) by 20 percent and improved other areas, such as visitor confidence.
What are the things that need to be split tested?
In A/B testing, adjustments to text content and images seem to have the biggest impact on audiences. Shortening the text to just the gist of what the product or brand is about tends to increase audience engagement and stir purchase motivation.
Similarly, placing a well-shot photo of the product on the landing page has been shown to increase the click-through rate of visitors who check out the product, compared to a page without a product photo.
What are the things that don’t need to be split tested?
The design of menu bars and other navigation tools should not be a priority in split testing. As long as they sit in the right places, their styling usually doesn’t matter. Colors, on the other hand, are often dismissed by webmasters as a low priority, but they are just as important: visuals play a large part in prompting visitors to act. A website with a pale, almost-unnoticeable blue background cannot grab attention the way a vivid cerulean can.
What is the easy step by step procedure to do split testing?
1. Identify the problem: determine what could be causing your low traffic or poor conversions.
2. Formulate a hypothesis: make an educated guess about how the problem can be solved.
3. Choose the right elements to test.
4. Set a metric that can be used for evaluation of the variants’ performances.
5. Run the A/B test.
6. Analyze the data gathered.
7. Implement the design that performed better than the other.
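Steps 5–6 above boil down to comparing the conversion rates of the two variants and checking whether the difference is statistically meaningful. Here is a minimal sketch of that analysis using a standard two-proportion z-test; the variant counts below are made up for illustration and are not from the case study.

```python
import math

def ab_test_result(conv_a, total_a, conv_b, total_b):
    """Two-proportion z-test comparing conversion rates of variants A and B."""
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (total_a + total_b)
    # Standard error of the difference between the two proportions
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical numbers: 2,400 visitors per variant
p_a, p_b, z = ab_test_result(conv_a=120, total_a=2400, conv_b=150, total_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# As a rule of thumb, |z| > 1.96 means the difference is significant
# at the 5% level (two-sided)
```

With these illustrative numbers, variant B converts better in absolute terms, but the z-score falls just short of the conventional significance threshold, which is exactly why step 6 (analyzing the data) matters before step 7 (implementing the winner).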
Split testing is tedious and time consuming. What are some of the tricks to simplify the process?
There’s no shortcut to A/B testing; if you want the soundest results, the process requires patience. For newbies, however, there are plenty of tools available online. Google’s Website Optimizer, for example, automates splitting traffic and gathering data (including the number of clicks on a page, hits on the landing page, etc.).
Other programs can help you choose the elements to test, generate candidate variant designs, set metrics, and so on. These options can automate the majority of the steps in split testing, saving you a great deal of time.
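Under the hood, tools like these typically split traffic by assigning each visitor to a variant deterministically, so a returning visitor always sees the same version. A minimal sketch of that idea (the function and experiment names here are hypothetical, not any particular tool’s API):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor ID together with the experiment name keeps the
    assignment stable across visits without storing any server-side state.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a bucket value in [0, 1]
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"

print(assign_variant("visitor-42", "landing-page-icons"))
```

Because the assignment depends only on the hash, the same visitor lands in the same bucket every time, while across many visitors the traffic splits roughly 50/50.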
What are some of the things that people do wrongly in split testing?
1) Using the wrong metric to evaluate a given area. For example, webmasters tend to look at the number of purchases when assessing the effectiveness of content, when that figure really reflects the performance of the call-to-action button. Audience engagement and depth of browsing are the right metrics for content impact.
2) Not setting aside enough time to gather reliable data. Most webmasters run their split tests over too short a span; six weeks is closer to the ideal period.
3) Running the test before the website has earned a good volume of traffic. A/B testing cannot be conducted on websites that get only a hundred to a thousand visitors per month; the data gathered from such an experiment would misrepresent consumer preferences. A sufficiently large sample is needed for the test to be meaningful.
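To get a feel for why small sites can’t run meaningful tests, the standard two-proportion sample-size formula (at roughly 5% significance and 80% power; the baseline rate and lift below are illustrative assumptions, not figures from the case study) gives the visitors needed per variant:

```python
import math

def sample_size_per_variant(baseline_rate, min_lift, alpha_z=1.96, power_z=0.84):
    """Rough sample size per variant to detect a relative lift.

    Uses the standard two-proportion formula with z-values for
    ~5% significance (1.96) and ~80% power (0.84).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    # Sum of the two binomial variances
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (alpha_z + power_z) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 20% relative lift on a 5% baseline conversion rate
print(sample_size_per_variant(0.05, 0.20))
```

At a 5% baseline conversion rate, detecting a 20% relative lift needs on the order of 8,000 visitors per variant — a site with only a few hundred visitors a month would take years to collect a sample that size.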
This is a guest post by Ruben Corbo, an A/B testing expert.