An A/B test is an experiment that compares visitors' responses to variant A versus variant B to determine which of the variants is more effective.
In the context of Acoustic Personalization, channel visitors are shown different personalized content variants (such as an image, text, or video) and their responses are tracked to determine which of the content variants received more attention (measured, for example, by the number of clicks on each variant).
For an A/B test, a group of visitors forms the control group: they belong to a segment that is targeted for personalization but are held back from receiving it. This group receives the original content, in contrast to the rest of the visitors, who are shown the personalized content variants (A, B, or C). The control group serves as the baseline against which the results of the A/B test are evaluated.
You create an A/B test with a specific goal (objective) in mind. In other words, you create an A/B test to track and measure conversions for a specific visitor behavior on the personalized content variants, and so determine which content variant is more effective.
Based on the results of the A/B test, you can then choose to show the winning content to your visitors to maximize your goal, or you can continue testing until you are satisfied and statistical significance is reached.
A/B test workflow
Detailed workflow for creating and publishing an A/B test rule in Acoustic Personalization.
Step 1: Create and publish an A/B test
Create an A/B test rule and publish it on your channel to determine the winning content. You can edit a draft A/B test rule or an ongoing A/B test rule.
After you create an A/B test, you can:
- Stop the A/B test at any time.
- Resume the stopped test later to continue testing and reach statistical significance.
Step 2: View performance details for your A/B test rule
You can monitor the effectiveness of your A/B test by using the performance details for your A/B test rule.
You can also export and save the performance details of your A/B test as a PDF file.
Step 3: Set winning content
After the completion of an A/B test, Acoustic Personalization determines the winning content. You can also select the winning content manually and create a personalization rule based on it.
That is it!
You can now proceed with creating an A/B test, or you can continue reading to learn more about its concepts, such as statistical significance and manual traffic allocation.
Statistical significance
Statistical significance of a test indicates whether the results of the test are "real" or whether they are simply due to chance occurrence. The statistical significance gives us a measure of the reliability of the test results.
For example, if you run a test with a 95% statistical significance, you can be 95% confident that the test results are valid, with a 5% chance of a sampling error or random error. A 90% statistical significance level means there is a 10% chance that the results could be in error. A 99% statistical significance means there is a 1% chance of a false test result.
The higher the statistical significance, the more confidence we can have in the results of the test. However, higher statistical significance also requires a longer test duration, and more traffic volume may be needed to reach that level of significance.
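To make this trade-off concrete, the following minimal sketch (Python, not part of Acoustic Personalization; the baseline rate, expected lift, and power value are illustrative assumptions) uses the standard two-proportion sample-size approximation to show how the required volume grows with the significance level:

```python
from math import ceil
from scipy.stats import norm

def visitors_per_group(p_control, p_variant, significance=0.90, power=0.80):
    """Approximate visitors needed per group to detect the given lift
    with a one-sided two-proportion test at the given significance level."""
    alpha = 1 - significance
    z_alpha = norm.ppf(1 - alpha)   # critical value for the significance level
    z_beta = norm.ppf(power)        # critical value for the desired power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2)

# Illustrative numbers: a 15% baseline rate and a hoped-for lift to 20%.
for significance in (0.90, 0.95, 0.99):
    print(f"{significance:.0%}: {visitors_per_group(0.15, 0.20, significance)} visitors per group")
```

With these assumed numbers, moving from 90% to 99% significance roughly doubles the visitors needed per group, which is why higher significance levels lengthen the test.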
Mathematical basis
In any experiment that involves drawing a sample from a population, there is always the possibility that the observed effect (the result of the test) occurred simply due to chance or a sampling error. To account for this uncertainty and to ensure that the results of an experiment reflect the actual behavior of the overall population, a measure known as statistical significance is used.
The result of a test is considered to be statistically significant if the probability that the result could have occurred by chance is lower than a predefined threshold. If we denote this probability as p and the predefined threshold as α (alpha), then:
Statistically significant result: Probability (p) < Threshold (α)
Statistical significance in A/B tests
An A/B test is an example of statistical hypothesis testing, a process whereby a hypothesis is made about the relationship between two data sets and those data sets are then compared against each other to determine if there is a statistically significant relationship or not.
To put this in more practical terms, a prediction is made that content variant B will perform better than content variant A, and then data from both the content variants are observed and compared to determine if B is a statistically significant improvement over A.
For example, we have no way of knowing with 100% accuracy how the next 100,000 people who visit our channel will behave. This is information that we do not have today, and if we were to wait until those 100,000 people had visited, it would be too late to optimize their experience. What we can do is observe the next 1,000 people who visit and then use statistical analysis to predict how the following 99,000 will behave.
The complexities arise from all the ways a given “sample” can inaccurately represent the overall “population”, and all the things we must do to ensure that our sample accurately represents the population.
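As an illustration of such a hypothesis test, here is a minimal Python sketch (our own, not part of Acoustic Personalization) that applies the standard two-proportion z-test to raw impression and conversion counts:

```python
from math import sqrt
from scipy.stats import norm

def ab_confidence(conversions_a, impressions_a, conversions_b, impressions_b):
    """One-sided confidence that variant B's conversion rate beats variant A's."""
    rate_a = conversions_a / impressions_a
    rate_b = conversions_b / impressions_b
    pooled = (conversions_a + conversions_b) / (impressions_a + impressions_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (rate_b - rate_a) / std_err     # standardized lift of B over A
    return norm.cdf(z)

# Hypothetical counts: A converts 120 of 1,000 visitors, B converts 150 of 1,000.
print(f"Confidence that B beats A: {ab_confidence(120, 1000, 150, 1000):.1%}")
```

If the returned confidence exceeds the significance level you set (for example, 95%), the observed lift is treated as statistically significant.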
Statistical significance in A/B tests in Acoustic Personalization
In the context of Acoustic Personalization, you must specify the statistical significance for an A/B test as a percentage that indicates your confidence that the results of the A/B test are valid and free from errors caused by randomness. For example, if you set a statistical significance level of 95%, you can be 95% confident that the observed results are real and not caused by chance.
Statistical significance is based on the number of impressions and conversions for the control group and the variants. The statistical significance value is calculated from the click rate of visitors on the channel; to calculate it, the control group and at least one of the content variants must have a non-zero click rate.
The statistical significance value for an A/B test in Acoustic Personalization must be within the range of 50% to 100%. By default, the value is set to 90%.
It is not advisable to set the value below 90%, because the lower the threshold for statistical significance, the less likely it is that the improvement in conversions (or whatever the goal is) is due to the given variant being shown. Similarly, it is not advisable to set the statistical significance value to 100%, as this value is practically unlikely to be met during the test.
For example, consider an A/B test in which:
- Control group has 15% conversion rate
- Variant 1 has 30% conversion rate
- Variant 2 has 35% conversion rate
If our goal was to increase the conversion rate, then Variant 2 is the best-performing variant. Measured against the control group, the difference is 20 percentage points, which would be sufficient to meet the statistical significance if we had set it to 85%. However, the same difference may not be sufficient if we had set the statistical significance to 99%.
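Whether that 20-percentage-point lift clears a given threshold depends on how many visitors sit behind each rate. The example does not state sample sizes, so the following sketch assumes a hypothetical 20 visitors per group to show the effect:

```python
from math import sqrt
from scipy.stats import norm

# Assumed counts that reproduce the rates above with 20 visitors per group:
# control converts 3/20 (15%), Variant 2 converts 7/20 (35%).
conv_control, n_control = 3, 20
conv_variant, n_variant = 7, 20

pooled = (conv_control + conv_variant) / (n_control + n_variant)
std_err = sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_variant))
z = (conv_variant / n_variant - conv_control / n_control) / std_err

print(f"Confidence: {norm.cdf(z):.1%}")   # ~92.8%: clears 85%, falls short of 99%
```

With larger samples the same 20-point gap would also clear 99%, which is why impression volume matters as much as the observed rates.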
Statistical significance of an in-progress A/B test
For an in-progress A/B test, you may see a message on the Performance details page if the following conditions are fulfilled:
- The test is in progress.
- The statistical significance reached by the test is less than 90%.
- The test has run for less than a week.
The statistical significance shown for an A/B test that is still in progress may not reflect the real-world scenario and hence is not very reliable. For a more reliable statistical significance result, it is recommended to wait for about a week from the A/B test start date.
Choosing the winning content
The winning content is decided when the A/B test reaches its end date.
Manual traffic allocation
Whenever you create an A/B test with multiple variants, it’s important to determine how you want the traffic to be distributed among the variants. Manual traffic allocation helps you control how much of your eligible visitor traffic enters an A/B test.
Overview
In Acoustic Personalization, traffic allocation is the percentage of available traffic that enters your A/B test.
You can split the percentage of traffic among the different variants. A 10/30/35/25 (Control Group / Variant 1 / Variant 2 / Variant 3) traffic allocation means that approximately 10% of your visitors see the default content and approximately 90% see one of the variants.
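The following minimal sketch (our own illustration, not the product's internal logic) shows how such a split can be realized with weighted random assignment:

```python
import random
from collections import Counter

# The 10/30/35/25 allocation from the example above.
groups = ["control", "variant1", "variant2", "variant3"]
weights = [10, 30, 35, 25]   # percentages; must total 100

def assign_group():
    """Place a new visitor into a group with the configured probabilities."""
    return random.choices(groups, weights=weights, k=1)[0]

# Simulate 10,000 visitors: the observed counts approximate, but rarely
# match exactly, the configured percentages.
print(Counter(assign_group() for _ in range(10_000)))
```

Because each visitor is assigned probabilistically, the actual group sizes drift around the configured split, as noted under Traffic allocation in Acoustic Personalization below.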
Need for traffic allocation
Traffic allocation in an A/B test is useful when you are not confident about the A/B test being run. In this case, you can run the A/B test with a small portion of users (for example, 10%) and check the results. You can then modify the traffic allocation percentages in your A/B test and repeat the test to find the right fit for your needs.
Traffic allocation in Acoustic Personalization
- Use manual traffic allocation to send all the visitors to the "winning" variant of your A/B test. For more information about setting the winning variant, see Set the winning content.
- Traffic distribution expresses the probability that a visitor is placed into a particular test group. If you have three test groups and the traffic is split 10/30/35/25 (control plus three test groups), each new visitor has a 30% chance of being placed into Test group 1. The actual number of visitors who see the Test group 1 content may not exactly match the 30% probability.
- You cannot change the traffic allocation percentages after you publish your A/B test.
Example: Suppose you enter three content variants in your rule, and you observe that Test group 2 performs better than the others (Test group 1, Test group 3) on your channel. You can then decide to direct all your visitors henceforth to this winning variant (Test group 2). In this case, you can edit the A/B test and increase the traffic for this variant (Test group 2).
Set traffic allocation in Acoustic Personalization
To set traffic allocation manually:
- Go to the Zone details page.
- In the Content personalization pane, click New rule.
- In the Select the personalization type pop-up, click A/B test, and then click Next.
- Click Configure A/B test. After you set the number of content variants, you can allocate the percentages to each of the test groups.
- In the Control traffic field, enter the percentage of visitors who will see the default content. Then, manually split the remaining percentage among the other fields (Test traffic 1 up to Test traffic 10), for the visitors who will see the variant content. Ensure that the percentage for each test traffic field is within the range of 1 to 99.
Note: The total traffic allocation must be equal to 100%.
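If you prepare these percentages outside the UI, a small guard like this sketch (purely illustrative, not part of the product) can catch invalid splits before you configure the rule:

```python
def validate_allocation(control: int, test_traffic: list[int]) -> None:
    """Check a traffic allocation against the constraints described above."""
    for pct in test_traffic:
        if not 1 <= pct <= 99:
            raise ValueError(f"Each test traffic value must be 1-99, got {pct}.")
    total = control + sum(test_traffic)
    if total != 100:
        raise ValueError(f"Total allocation must equal 100%, got {total}%.")

validate_allocation(10, [30, 35, 25])   # valid: totals 100%
```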
Additional steps
Apart from the basic flow described above, you can also configure the following features for an A/B test.