Just because you can jump in immediately and launch a random A/B test to improve a page on your site in 20 minutes using a tool such as Optimizely doesn’t mean you should. Don’t just rush in and change button colors.
The sometimes forgotten first step of running any test is to come up with a hypothesis based on research, customer feedback, best practices and/or intuition.
The whole point of running a test is to start with a hypothesis about why a page isn’t performing as well as it should, design a new version of the page based on that hypothesis, send random, live traffic to both the original and the new version, and see whether the change had the desired effect.
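Tools like Optimizely handle the traffic split for you, but the underlying idea can be sketched as deterministic random assignment: hash each user into a bucket so that any given user always sees the same version while traffic divides evenly across versions. This is a minimal illustration, not any particular tool’s implementation; the function and parameter names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into one variant of an experiment.

    Hashing the user ID together with the experiment name keeps each
    user's experience stable and splits traffic roughly evenly.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("user-123", "cta-button-test"))
```

Because the assignment is a pure function of the user and experiment IDs, it needs no server-side state, and different experiments bucket the same user independently.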
So what process should you follow to prepare a test so that it has the highest likelihood of success?
- Specify the business requirements of your site and create a set of micro and macro conversions to track – the overall economic indicators. The overall goal will likely be revenue or profit, but you will also benefit from tracking a handful of secondary metrics.
- Analyze your analytics data to create a list of highest priority pages to optimize. Which landing pages with high traffic have the highest bounce rates? At which point in the purchase funnel are most users dropping off or exiting the site? What is the highest business priority?
- Get into your customers’ heads, as I covered in a previous post.
- ADVANCED STEP: Segment users based on persona, context — such as device and screen size — and position in the buying cycle. Due to the added complexity of targeting certain segments, devices, and so on, this is not a required step, but it can add a valuable dimension for more sophisticated testing organizations.
- Identify and write down all of your hypotheses for what is currently going wrong or performing sub-optimally on your page or pages, along with your ideas for improving it.
- Rank everything in a spreadsheet and create a simple score to decide which tests to run first, based on the resources/time needed to set up each test, its potential economic impact, and your guess as to how effective the change will be.
- Mock it up.
- Get any necessary approvals and feedback for the test and the hypothesis.
- Now, you’re finally ready for test development, QA, and launch!
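The ranking step above can be sketched as a simple priority score — for example, potential impact times expected effectiveness, divided by setup effort. The formula and the sample data are illustrative assumptions, not a prescription; weight the factors however fits your team.

```python
# Candidate tests scored on 1-5 scales (hypothetical example data):
#   impact = potential economic impact
#   effort = resources/time needed to set up the test
#   effect = your guess at how effective the change will be
candidates = [
    {"name": "CTA button contrast",    "impact": 4, "effort": 1, "effect": 3},
    {"name": "Checkout form redesign", "impact": 5, "effort": 4, "effect": 4},
    {"name": "Homepage hero copy",     "impact": 3, "effort": 2, "effect": 2},
]

for test in candidates:
    # Higher impact and expected effect raise priority; higher effort lowers it.
    test["score"] = test["impact"] * test["effect"] / test["effort"]

for test in sorted(candidates, key=lambda t: t["score"], reverse=True):
    print(f'{test["name"]}: {test["score"]:.1f}')
```

Even a crude score like this forces the trade-off conversation: a cheap, plausible test often beats a big redesign as the first thing to ship.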
In the end, the winning change may in fact be as simple as recoloring a call-to-action button that blends in with the surrounding design, but doing the above will broaden your view and focus you on what is truly important and why you think it is important.
The best and worst part of doing all of this prep work is that you never know whether your test will have a meaningful, positive effect on your micro- and macro-conversions. That’s the point of doing the test – to prove, one way or the other, whether your hypothesis is true. No amount of industry expertise, best practices, or research can guarantee success.