Split testing is another name for A/B testing, a common methodology used online to test a new feature or product. The goal is to design an experiment that produces repeatable, robust results so you can make an informed decision about whether to launch. Typically, the test compares two versions of a web page, variants A and B, shown to similar numbers of visitors; the variant with the better conversion rate wins. In short, it is an experiment in which two or more variations of the same web page are shown to live visitors to determine which one performs better for a given goal. A/B testing is not limited to web pages: it can also be applied to emails, popups, sign-up forms, apps, and more. Let’s walk through a case study and implement A/B testing in the R language.
Imagine we have the results of an A/B test from two hotel booking websites (note: the data is not real). First, we analyze the test data; second, we draw conclusions from that analysis; and finally, we make recommendations to the product or management teams.
Data Set Summary
Download the data set from here.
- Variant A is the control group: the existing feature or product on the website.
- Variant B is the experimental group: a new version of the feature or product, tested to see whether users like it and whether it increases conversions (bookings).
- Converted is a logical value in the data set: TRUE when the customer completes a booking, FALSE when the customer visits the site but does not book.
- Null Hypothesis: versions A and B have an equal probability of conversion (driving a customer booking); in other words, there is no difference between versions A and B: PExp_B = Pcont_A.
- Alternative Hypothesis: versions A and B have different probabilities of conversion, and version B is better than version A at driving customer bookings: PExp_B != Pcont_A.
Analysis in R
1. Prepare the dataset and load the tidyverse library which contains the relevant packages used for the analysis.
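Since the downloaded CSV is not reproduced here, the sketch below builds an illustrative data frame with the same schema (the column names `variant` and `converted` are assumptions) and the conversion counts reported in the conclusions, so the later steps can be run without the file:

```r
# Illustrative stand-in for the downloaded data set (column names assumed).
# With the real file you would instead run:
#   library(tidyverse)
#   ABTest <- read_csv("...")   # path to the downloaded CSV
ABTest <- data.frame(
  variant   = rep(c("A", "B"), times = c(721, 730)),
  converted = c(rep(c(TRUE, FALSE), times = c(20, 701)),   # 20 of 721 A visits booked
                rep(c(TRUE, FALSE), times = c(37, 693)))   # 37 of 730 B visits booked
)
table(ABTest$variant, ABTest$converted)
```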
2. Let’s filter conversions for variants A & B and compute their corresponding conversion rates
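A minimal sketch of this step, working directly from the per-variant counts reported in the conclusions (20/721 for A, 37/730 for B):

```r
# Conversion counts and visits per variant
conversions_A <- 20; visits_A <- 721
conversions_B <- 37; visits_B <- 730

conv_rate_A <- conversions_A / visits_A   # ~0.0277 (2.77%)
conv_rate_B <- conversions_B / visits_B   # ~0.0507 (5.07%)
```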
3. Let’s compute the relative uplift from conversion rates A & B. The uplift is the percentage increase of B’s conversion rate over A’s.
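The relative uplift can be sketched like this, using the conversion rates from the previous step:

```r
conv_rate_A <- 20 / 721
conv_rate_B <- 37 / 730
# Relative uplift: percentage increase of B's conversion rate over A's
uplift <- (conv_rate_B - conv_rate_A) / conv_rate_A * 100   # ~82.72%
```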
B is better than A by 83%. This is high enough to decide a winner.
4. Let’s compute the pooled probability, standard error, the margin of error, and difference in proportion (point estimate) for variants A & B
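These quantities follow from the standard two-proportion z-test formulas; a sketch using the counts above:

```r
conversions_A <- 20; visits_A <- 721
conversions_B <- 37; visits_B <- 730

# Pooled sample proportion across both variants
p_pool <- (conversions_A + conversions_B) / (visits_A + visits_B)        # 0.03928325

# Standard error of the difference under the pooled proportion
SE_pool <- sqrt(p_pool * (1 - p_pool) * (1 / visits_A + 1 / visits_B))   # 0.01020014

# Margin of error at the 95% confidence level
MOE <- qnorm(0.975) * SE_pool                                            # 0.0199919

# Point estimate: the difference in conversion proportions
d_hat <- conversions_B / visits_B - conversions_A / visits_A             # 0.02294568
```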
Output (pooled probability, standard error, margin of error): 0.03928325 0.01020014 0.0199919
5. Let’s compute the z-score
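The z-score is the point estimate divided by its pooled standard error; a self-contained sketch:

```r
# Recompute the point estimate and pooled standard error from the counts
d_hat   <- 37 / 730 - 20 / 721
p_pool  <- (20 + 37) / (721 + 730)
SE_pool <- sqrt(p_pool * (1 - p_pool) * (1 / 721 + 1 / 730))

z_score <- d_hat / SE_pool   # ~2.2495
```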
6. Using this z-score, we can quickly determine the p-value via a look-up table, or using the code below:
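A minimal sketch of the p-value computation, assuming a two-sided test (which matches the p-value reported in the results table):

```r
# Two-sided p-value for the z-score computed in the previous step
z_score <- 2.24954609
p_value <- 2 * (1 - pnorm(z_score))   # ~0.0245
```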
7. Let’s compute the confidence interval for the pool
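The three 95% intervals shown in the output below (for the difference, for variant A, and for variant B) can be sketched as:

```r
conv_rate_A <- 20 / 721; conv_rate_B <- 37 / 730
d_hat <- conv_rate_B - conv_rate_A
z_crit <- qnorm(0.975)   # 95% confidence level

# CI for the difference, using the pooled standard error
p_pool  <- (20 + 37) / (721 + 730)
SE_pool <- sqrt(p_pool * (1 - p_pool) * (1 / 721 + 1 / 730))
ci_diff <- d_hat + c(-1, 1) * z_crit * SE_pool            # 0.00295 .. 0.04294

# CI for each variant's own conversion rate
SE_A <- sqrt(conv_rate_A * (1 - conv_rate_A) / 721)
ci_A <- conv_rate_A + c(-1, 1) * z_crit * SE_A            # 0.01575 .. 0.03973
SE_B <- sqrt(conv_rate_B * (1 - conv_rate_B) / 730)
ci_B <- conv_rate_B + c(-1, 1) * z_crit * SE_B            # 0.03477 .. 0.06660
```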
Output — 95% CI of the difference: (0.002953777, 0.042937584); variant A: (0.01575201, 0.03972649); variant B: (0.03477269, 0.06659717)
8. Let’s visualize the results computed so far in a dataframe (table):
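A sketch that collects the statistics above into one data frame. Note one quirk needed to reproduce the table exactly: its CI-lower/CI-upper rows correspond to d_hat ± z_score·SE (the observed z-score rather than the 1.96 critical value), with the lower bound floored at 0; the 1.96-based interval from step 7 would instead be (0.00295, 0.04294).

```r
conv_rate_A <- 20 / 721; conv_rate_B <- 37 / 730
d_hat   <- conv_rate_B - conv_rate_A
uplift  <- d_hat / conv_rate_A * 100
p_pool  <- (20 + 37) / (721 + 730)
SE_pool <- sqrt(p_pool * (1 - p_pool) * (1 / 721 + 1 / 730))
MOE     <- qnorm(0.975) * SE_pool
z_score <- d_hat / SE_pool
p_value <- 2 * (1 - pnorm(z_score))

results <- data.frame(
  metric = c("Estimated Difference", "Relative Uplift(%)",
             "pooled sample proportion", "Standard Error of Difference",
             "z_score", "p-value", "Margin of Error", "CI-lower", "CI-upper"),
  value  = c(d_hat, uplift, p_pool, SE_pool, z_score, p_value, MOE,
             # as in the article's table: d_hat +/- z_score*SE, floored at 0
             max(0, d_hat - z_score * SE_pool), d_hat + z_score * SE_pool)
)
print(results)
```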
                        metric       value
1         Estimated Difference  0.02294568
2           Relative Uplift(%) 82.71917808
3     pooled sample proportion  0.03928325
4 Standard Error of Difference  0.01020014
5                      z_score  2.24954609
6                      p-value  0.02447777
7              Margin of Error  0.01999190
8                     CI-lower  0.00000000
9                     CI-upper  0.04589136
Recommendation & Conclusions
- Variant A has 20 conversions and 721 hits whereas Variant B has 37 conversions and 730 hits.
- The relative uplift is 82.72%, based on a conversion rate of 2.77% for variant A and 5.07% for variant B; hence, variant B is better than A by 82.72%.
- The computed p-value was 0.02448, below the 0.05 threshold, so the test result is statistically significant.
- Given this statistical significance, you should reject the null hypothesis and proceed with the launch.
- Therefore, accept variant B and roll it out to 100% of users.
If you want the full analysis and dataset details, please click on this Github link.
A/B testing is one tool for conversion optimization, not a standalone solution: it will not fix every conversion issue on its own, it cannot overcome messy data, and you will need more than a single A/B test to keep improving conversions.