
A/A testing

A technique that compares two identical versions of a website or mobile app experience to check that the testing tool is measuring correctly.


What is an A/A test?

An A/A test uses your A/B testing tool to compare two versions of a webpage that are completely identical to each other.

Generally, this is done to verify that the testing tool being used to run your experiments is statistically valid.

If the A/B testing tool is working correctly, an A/A test should report no difference in conversion rates between the control and the variation.
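To make this concrete, here is a minimal sketch in Python of the kind of comparison an A/A test performs, assuming the tool reports visitors and conversions for each of the two identical variants. The traffic figures are hypothetical.

from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Both "variants" serve the same page, so any gap is sampling noise.
z, p = two_proportion_z_test(conv_a=310, n_a=10_000, conv_b=295, n_b=10_000)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # expect p well above 0.05

A large p-value means the observed gap is consistent with random noise, which is exactly what a correctly configured tool should report for identical pages.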


Why test pages that are the same?

Before conducting an A/B or multivariate test, it is recommended that you monitor on-page conversions with the help of an A/A test. This lets you track the number of conversions and the conversion rate, and it also helps you establish the page's baseline conversion rate.
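As a rough sketch, the baseline conversion rate is simply conversions divided by visitors. The Python below (with hypothetical traffic numbers) also adds a normal-approximation 95% confidence interval, so you know how precisely the baseline is pinned down before you start comparing variations against it.

from math import sqrt

visitors = 20_000      # total visitors across both identical arms (hypothetical)
conversions = 605      # total conversions observed (hypothetical)

baseline_rate = conversions / visitors
# Normal-approximation 95% confidence interval for the baseline rate.
se = sqrt(baseline_rate * (1 - baseline_rate) / visitors)
low, high = baseline_rate - 1.96 * se, baseline_rate + 1.96 * se
print(f"baseline = {baseline_rate:.2%} (95% CI {low:.2%} to {high:.2%})")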

In most other circumstances, the A/A test serves as a way of verifying the accuracy of the software used for A/B testing.
Check whether the software reports a statistically significant difference (one with a significance level of at least 95%) between the control and the variation.

If the software does report a statistically significant difference between the control and the variation, that is a problem, and you will want to confirm that the software is installed properly on your website and your mobile app.


When it comes to A/A testing, there are a few things to bear in mind:

When running an A/A test, it is essential to remember that it is always possible to observe a difference in conversion rate between the test page and the control page, even though both pages are identical.

There is always some degree of randomness involved in testing, so such a difference is not necessarily a negative reflection on the A/B testing platform.

When running any kind of A/B test, keep in mind that statistical significance is a statement of probability, not of certainty.

Even with a statistical significance threshold of 95%, there is still a 1-in-20 chance that the result you are seeing is simply random noise.
In most circumstances, your A/A test should therefore conclude that the difference in conversion rates between the control and the variation is statistically inconclusive, because there is no real difference to find.

When conducting an A/A test, you should plan on the findings being inconclusive: the difference in conversion rates between otherwise identical variations will not reach statistical significance. In fact, the share of A/A tests producing inconclusive results should be at least equal to the significance cutoff; at a 90% significance level, that means at least 90% of A/A tests.

On the other hand, there are circumstances in which one version appears to perform better than the other, or a winner is declared for one of your goals. If you have set a significance level of 90%, such a conclusive result is the product of pure chance and should occur in only about 10% of cases. If your significance threshold is higher, say 95%, the likelihood of running across a conclusive A/A test drops to about 5%.
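That arithmetic can be checked with a quick simulation. The Python sketch below (using made-up traffic and conversion-rate assumptions) runs many A/A tests in which both arms share the same true conversion rate, and counts how often a two-sided test at the 5% level declares a "significant" difference; the share should hover around 5%.

import random
from math import sqrt
from statistics import NormalDist

def simulate_aa_tests(runs=1_000, n=2_000, true_rate=0.05, alpha=0.05):
    """Share of A/A tests that look 'significant' purely by chance."""
    norm = NormalDist()
    false_positives = 0
    for _ in range(runs):
        # Both arms draw visitors from the same underlying conversion rate.
        conv_a = sum(random.random() < true_rate for _ in range(n))
        conv_b = sum(random.random() < true_rate for _ in range(n))
        p_pool = (conv_a + conv_b) / (2 * n)
        se = sqrt(p_pool * (1 - p_pool) * (2 / n))
        z = (conv_a / n - conv_b / n) / se
        p_value = 2 * (1 - norm.cdf(abs(z)))
        false_positives += p_value < alpha
    return false_positives / runs

print(f"'significant' A/A results: {simulate_aa_tests():.1%}")  # roughly 5%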
