
Case for Long Conversion Rate Optimization Tests

We have often heard Conversion Rate Optimization experts profess the need for statistical significance. Most businesses view this advice with marked skepticism, and there are valid reasons for it.

1) Low Page Views

Not all businesses have the traffic of a Yahoo or an AOL. They have to focus on their critical asset pages and iterate through the optimization process rather than make one major change. Most of the time, when a variation shows a 5-10% improvement in conversion, a business will adopt the change immediately, well before the test can reach statistical significance.

We can't blame them; they cannot afford to take risks on high-performing pages, and Google's algorithm updates have made it impossible to predict a sustainable SERP position for an asset page.
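To see how punishing the math is for a low-traffic page, here is a rough sketch of the standard two-proportion sample-size formula. The baseline rate, lift, and traffic figures are hypothetical, chosen only to illustrate the scale of the problem.

```python
# Approximate visitors needed per variation to detect a relative lift,
# using the standard two-proportion sample-size formula (two-sided z-test).
# All numbers below are hypothetical.
from statistics import NormalDist

def visitors_per_variation(p_base, lift, alpha=0.05, power=0.8):
    """Visitors per variation to detect a relative lift over a baseline
    conversion rate at the given significance level and power."""
    p_var = p_base * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    delta = p_var - p_base
    return (z_alpha + z_beta) ** 2 * variance / delta ** 2

# A page converting at 2% with a 5% relative improvement:
n = visitors_per_variation(0.02, 0.05)
print(f"~{n:,.0f} visitors per variation")  # roughly 315,000
# At 500 visits a day split across two variations, that is well over
# three years -- hence the temptation to adopt a promising change early.
```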

2) Lost Opportunity

This is another major reason why marketers cannot wait for statistical significance. Most CRO tools split traffic according to the user's settings without taking the short-term performance of each variation into account. You can stop a low-performing variation, but you cannot dynamically allocate traffic based on performance. Steve has written about the multi-armed bandit algorithm as an alternative to A/B testing, and Paras (VWO founder) has written a counter-argument. No matter which argument you believe, at the end of the day you have to convince the management team why you are diverting traffic from a revenue-generating asset page to a low-performing variation. The responsibility to minimize losses falls on the marketer's shoulders.
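For readers unfamiliar with the bandit approach, here is a minimal Thompson-sampling sketch of the idea: traffic drifts toward whichever variation is performing better instead of staying at a fixed split. The conversion rates are made up for illustration.

```python
# A minimal Thompson-sampling bandit: each visitor is routed to the arm
# whose Beta-posterior draw is highest, so traffic shifts toward the
# stronger performer over time. Conversion rates below are hypothetical.
import random

true_rates = {"control": 0.020, "variation": 0.017}
stats = {arm: {"conversions": 0, "visitors": 0} for arm in true_rates}

for _ in range(10_000):
    # Sample a plausible conversion rate for each arm from its
    # Beta(successes + 1, failures + 1) posterior.
    draws = {
        arm: random.betavariate(s["conversions"] + 1,
                                s["visitors"] - s["conversions"] + 1)
        for arm, s in stats.items()
    }
    arm = max(draws, key=draws.get)  # send the visitor to the best draw
    stats[arm]["visitors"] += 1
    if random.random() < true_rates[arm]:
        stats[arm]["conversions"] += 1

for arm, s in stats.items():
    print(arm, s)  # most traffic ends up on the stronger arm
```

The design trade-off is exactly the one debated above: the bandit limits the traffic "wasted" on a weak variation, but it makes the classical fixed-horizon significance calculation harder to apply.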

3) Google Warning

Even though we agree that tests should be stopped once statistical significance has been reached, the duration of the test and the seasonality of your business introduce additional complexities. On Aug 9th, Google warned marketers against running tests for too long:

“If we discover a site running an experiment for an unnecessarily long time, we may interpret this as an attempt to deceive search engines and take action accordingly”

Case for Long Conversion Rate Optimization Tests

We created an A/B test in VWO on July 9th and marked 6 trend points in the detailed report.

At the first trend point, from July 16th to July 18th, the difference in conversion rate is a whopping 50%, but since the test has been running for only a week, best practice would be to wait another two weeks.

At the second trend point, on Aug 5th, the difference in conversion rate was 38%. That is a considerable drop, but we can assume it is part of the test reaching saturation.

The third trend point, on Aug 14th, is a little worrying. The difference in conversion rate is down to 15%. If your management has set a guideline to adopt a change when the conversion rate improves by at least 10%, then at this point you are most likely to stop the experiment and make the change.

Since marketers want to make sure the test is conclusive, they will convince management to wait another week. When they check the conversion rate on Aug 18th (trend point 4), the difference is still 15%. At this point, the marketer is 95% confident that the variation will outperform the control in the long term.
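For reference, a "95% confident" reading of this kind typically comes from a two-proportion z-test along these lines. The visitor and conversion counts below are invented to mimic a 15% lift; plug in the numbers from your own report.

```python
# Two-sided z-test for the difference between two observed conversion
# rates. Counts below are hypothetical (control 2.0% vs variation 2.3%,
# a 15% relative lift).
from math import sqrt
from statistics import NormalDist

def significance(conv_a, n_a, conv_b, n_b):
    """Return the two-sided p-value for the difference in conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = significance(400, 20_000, 460, 20_000)
print(f"p-value: {p:.3f}")  # ~0.04: significant at the 95% level
```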

Unfortunately, marketers will not wait for trend points 5 and 6, when the conversion rate begins to reverse and the control page starts performing better than the variation. Perhaps this is the long-term trend, and the variation you adopted on the live site is the lower-performing page. CRO cannot be taken at face value or treated as an absolute science. The decision-making process in an organization, Google algorithm updates, and the seasonality of a business all influence how we implement CRO.
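One way to see why a single significant snapshot can later reverse is to simulate the "peeking" problem: if two identical pages (an A/A test) are checked for significance every day, the chance of a false "95% confident" reading at some point far exceeds 5%. The simulation below is a sketch, and all figures in it are illustrative.

```python
# Simulate repeated daily significance checks on A/A tests where both
# pages truly convert at 2%. Breaking on the first "significant" day
# inflates the false-positive rate well beyond the nominal 5%.
import random
from math import sqrt

def z_significant(conv_a, conv_b, n):
    """Two-proportion z-test at the 95% level with n visitors per arm."""
    p_a, p_b = conv_a / n, conv_b / n
    p_pool = (conv_a + conv_b) / (2 * n)
    if p_pool in (0, 1):
        return False  # no variance yet, nothing to test
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)
    return abs(p_b - p_a) / se > 1.96

false_alarms = 0
for _ in range(1000):           # 1,000 simulated A/A tests
    conv, n = [0, 0], 0
    for day in range(60):       # peek once a day for 60 days
        for arm in (0, 1):      # 200 visitors per arm per day
            conv[arm] += sum(random.random() < 0.02 for _ in range(200))
        n += 200
        if z_significant(conv[0], conv[1], n):
            false_alarms += 1
            break

print(f"{false_alarms / 10:.0f}% of identical pages looked significant")
# Typically far above 5% -- one reason a "conclusive" trend point 4
# can give way to the reversal seen at trend points 5 and 6.
```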