Last week I tweeted an explanation of how we know whether an increase or decrease in SEO performance was caused by a change we made, or by an external factor like seasonality, competitors, or a Google update. People found it helpful, and it generated a lot of questions, so I thought it would be useful to post a more detailed explanation of what exactly SEO split-testing is, as there seems to be a lot of confusion and misunderstanding around it.
One quick thing: this is deliberately a simple example with a basic explanation of the maths we use. In reality, the maths is a lot more complicated and is based on this research by Google: Inferring causal impact using Bayesian structural time series.
The purpose of this presentation isn’t to teach or explain the maths behind the ODN; it’s to explain the core concepts simply enough that you can imagine applying this methodology to websites with far more than 4 sub-category pages :)
If you want to dig into the testing methodology in detail, then you can visit: https://odn.distilled.net/learn-more/faqs/
With that out of the way, let's get started.
Imagine a basic website
The site below has two simple categories, animals and countries. It has 8 sub-category pages (cats, dogs, Scotland etc.)
- All of the animals sub-category pages use the same template
- All of the countries sub-category pages use another template
This is critical to understand because SEO split testing is predicated on the concept of testing changes to page templates. A group of pages that share the same template can be used for SEO split-testing.
In the animals sub-category example above you can see that although the content of each page is different, they all follow the same template.
- An H1 at the top of the page
- A block of intro copy
- A featured image
We could create an animal template test:
Or a countries template test:
But you can’t mix templates:
An example experiment
Imagine that we wanted to test a new Animal page template by replacing the image with a video and removing the intro copy from the animals sub-category template.
For the test to be a valid experiment, we need one set of pages to remain unchanged (the control group) and another set of pages to get the new proposed design (the variant).
Distilled’s ODN platform uses advanced maths to decide which URLs should remain on the control template and which should get the variant template.
For simplicity in this example, you can think of this as selecting URLs at random to be on each template.
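The random-assignment simplification can be pictured with a tiny sketch. To be clear, the real ODN uses a much more sophisticated statistical process to pick the groups; the URL list, fixed seed, and plain 50/50 split here are illustrative assumptions only:

```python
import random

# Hypothetical list of sub-category URLs that all share the animals template
urls = ["/cats", "/dogs", "/badgers", "/unicorns"]

random.seed(42)  # fixed seed so the example split is reproducible
shuffled = random.sample(urls, len(urls))

# Simple 50/50 split: the first half stays on the control template,
# the second half gets the variant template
midpoint = len(shuffled) // 2
control = shuffled[:midpoint]
variant = shuffled[midpoint:]

print("control:", control)
print("variant:", variant)
```

Every page ends up in exactly one group, which is the property that matters for the test to be valid.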
In this example, the test runs on 50% of the pages, but you could run it on a smaller or larger percentage. During the experiment, the site would look like this:
Notice that the /cats and /badgers pages now have the new template and the /dogs and /unicorns pages remain unchanged and have the same design they have always had.
The graph below shows the organic performance of the cats and badgers pages versus the dogs and unicorns pages.
Notice that just after the test started, there was no change in the difference in organic performance. That's because Google first needs to recrawl the pages, and how long that takes varies with the number of pages being tested.
Over time we notice that the variant pages start to outperform the control pages. Once the test reaches statistical significance, we can declare the test a success and recommend that the changes be rolled out to 100% of pages instead of just 50%.
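To make the comparison between the two groups concrete, here is a minimal difference-in-differences style sketch on invented daily session counts. The real ODN analysis is far more sophisticated (Bayesian structural time series, as noted above); the numbers and the simple mean comparison are assumptions for illustration:

```python
# Synthetic daily organic sessions, made up for illustration
pre_control  = [100, 102,  98, 101]  # control pages, before the test
pre_variant  = [ 99, 101, 100, 100]  # variant pages, before the test
post_control = [103, 100, 104, 101]  # control pages, after Google recrawled
post_variant = [118, 121, 117, 120]  # variant pages, after the template change

def mean(xs):
    return sum(xs) / len(xs)

# The analysis watches the GAP between variant and control, not the raw trend.
baseline_gap = mean(pre_variant) - mean(pre_control)
test_gap     = mean(post_variant) - mean(post_control)
uplift       = test_gap - baseline_gap  # change in the gap = estimated effect

print(f"estimated uplift: {uplift:.2f} sessions/day")
```

A real analysis would also attach a confidence interval to that uplift and only declare a winner once the interval excludes zero.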
How do you know it wasn't just seasonality?
This is a common question and you can replace seasonality with pretty much anything you like:
- A Google algorithm update
- A drop in competitors’ performance
- New backlinks to your site
- TV campaigns
- Branding/direct traffic
- Other macro factors
Because the control group of pages shares the same intent/theme/template as the variant, we can rule out external factors like seasonality: any such factor would impact the control group too.
The analysis isn’t looking at the trend of the traffic; it’s looking at the difference in performance between the control group and the variant.
In other words, if it were seasonality (for example, Christmas), there's no reason why /cats and /badgers would be impacted but not /dogs and /unicorns. The same applies to something like a Google update.
Seasonality would look like this:
Although there is an upward trend, the difference in performance between the control pages and the variant pages is the same as it was before the test began. This experiment would be declared neutral, even though organic traffic went up significantly after the change was made.
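The neutral-test logic can be sketched the same way, with invented numbers: a site-wide seasonal surge multiplies traffic to both groups, so the gap the analysis watches does not move.

```python
# Invented baseline: both groups receive roughly the same traffic pre-test
control_sessions = 100.0  # daily sessions, control pages
variant_sessions = 100.0  # daily sessions, variant pages

# A hypothetical 50% Christmas surge hits the whole site, not just one group
seasonal_lift = 1.5
control_peak = control_sessions * seasonal_lift
variant_peak = variant_sessions * seasonal_lift

# The relative gap between the groups is unchanged by the surge
gap_before = variant_sessions / control_sessions
gap_during = variant_peak / control_peak

print(gap_before, gap_during)  # identical gap -> a neutral test
```

Both groups rise together, so the analysis correctly attributes nothing to the template change.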
I hope this makes SEO split-testing easier to understand.
If you want to know more or are interested in doing SEO split-testing, the ODN is a piece of software that lets you do that. Find out more at https://odn.distilled.net/ or click the button below to contact us so I can set up a call to show you a demo of the software.