Ad Yield Ops 105: Running Successful Yield Tests

In this lesson, you will learn how to run successful yield tests.

Lesson Overview + Resources:

In this lesson, we'll discuss how to run successful yield tests, including:

  • Control vs. test conditions
  • Yield testing best practices
  • Yield test monitoring

Here are additional resources pertaining to the lesson above:

Read the Transcript:

In this video, we’ll cover how to run successful yield tests.

Having a list of ideas is great, but actually running those tests is where the rubber meets the road. Let’s go through some best practices for ensuring your tests run as smoothly as possible, and are capable of generating results you can trust:

First, HAVE A CONTROL TO COMPARE AGAINST.

To determine whether a test is ultimately a “winner,” you first need a baseline to compare the results against, so you can tell whether the change did, in fact, increase revenue.

The best solution is a true A/B test, where the control and test conditions run simultaneously and traffic is split between them. Whenever possible, run true A/B tests.
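For context, here is a minimal sketch of one common way to split traffic deterministically between a control and a test condition by hashing a visitor ID; the `assign_bucket` function, the `lazy_load_ads` test name, and the 50/50 split are illustrative assumptions, not any specific tool’s API.

```python
import hashlib

def assign_bucket(user_id: str, test_name: str, test_share: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'test'.

    Hashing the visitor ID together with the test name keeps assignments
    stable across page views and independent across different tests.
    """
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    # Map the hash to a number in [0, 1) and compare against the test share.
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if fraction < test_share else "control"

# Example: bucket a visitor for a hypothetical "lazy_load_ads" test.
print(assign_bucket("visitor-12345", "lazy_load_ads"))
```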

Unfortunately, this isn’t always possible. Depending on your setup, you may not be able to run a pure A/B test. In that case, you’ll have to run a set-time-period test in which all of your traffic runs through the test condition. Afterwards, you’ll need to do a pre/post analysis to review how your revenue metrics were affected during the test, comparing them against those metrics’ trend lines before and after the test.

If you have to run a pre/post analysis, make sure you are acutely aware of any seasonality issues that might affect your results, and be careful to account for these when attributing causation (rather than simple correlation) to your test.
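As a rough illustration, here is a sketch of how a seasonality-aware pre/post comparison might look in Python; the `daily_rpm.csv` file, the `rpm` column, and the specific date windows are hypothetical placeholders for your own reporting export.

```python
import pandas as pd

# Hypothetical daily revenue export; column names and dates are assumptions.
daily = pd.read_csv("daily_rpm.csv", parse_dates=["date"]).set_index("date")

pre  = daily.loc["2024-04-01":"2024-04-14", "rpm"]   # two weeks before the change
post = daily.loc["2024-04-15":"2024-04-28", "rpm"]   # two weeks after the change

# Year-over-year comparison of the same windows helps separate the test's
# effect from normal seasonality (e.g., a quarterly CPM dip).
pre_ly  = daily.loc["2023-04-01":"2023-04-14", "rpm"]
post_ly = daily.loc["2023-04-15":"2023-04-28", "rpm"]

observed_lift = post.mean() / pre.mean() - 1
seasonal_lift = post_ly.mean() / pre_ly.mean() - 1

print(f"Observed pre/post lift:  {observed_lift:+.1%}")
print(f"Seasonal lift last year: {seasonal_lift:+.1%}")
print(f"Seasonality-adjusted:    {observed_lift - seasonal_lift:+.1%}")
```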

Second, RUN TESTS LONG ENOUGH TO GAIN STATISTICAL SIGNIFICANCE.

The length of your tests will be determined by the volume of traffic that runs through them. You may be able to run tests in a matter of hours, should you have enough traffic volume running through both the control condition and test condition to gain confidence in the results.

Most robust A/B testing tools will calculate statistical significance for you, and even Excel can calculate it if you export your data there. While this will look different depending on how you are running tests and reading data, the important thing to know is that you need to run a test long enough to collect enough data to be confident that your test condition is what is CAUSING the change you are seeing.

This alone should be the determining factor in how long you run a particular test.
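If you are exporting your own data, a sketch like the following shows one standard way to check significance with a Welch’s t-test; the per-session revenue samples here are made-up placeholders and would be far larger in practice.

```python
from scipy import stats

# Hypothetical per-session revenue samples exported from your ad stack;
# real exports would contain thousands of rows, not a handful.
control_revenue = [0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012, 0.015]
test_revenue    = [0.016, 0.018, 0.015, 0.017, 0.019, 0.014, 0.018, 0.017]

# Welch's t-test: does the test condition's mean revenue differ from control?
t_stat, p_value = stats.ttest_ind(test_revenue, control_revenue, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 95% level.")
else:
    print("Not yet significant -- keep the test running and collect more data.")
```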

Third, MAKE SURE TESTS DON’T INTERFERE WITH EACH OTHER.

If your system is built to run multiple A/B tests concurrently, then you are free to run many at once. Just keep in mind that you need enough traffic running through each condition to produce statistically significant results. This means that the more tests you run concurrently, the less traffic runs through each one, and the longer it will take to reach statistical significance.

In our experience, most mid-size publishers are simply not set up to run concurrent A/B tests. If this is the case for you, you’ll need to run a single test at a time.
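To make the traffic-dilution point concrete, here is a back-of-the-envelope sketch using an assumed daily session volume and an assumed per-arm sample target; your own numbers will differ.

```python
# Rough check (assumed numbers) of how concurrent tests stretch out the
# time needed to reach a target sample size in each test arm.
daily_sessions = 200_000      # total sessions per day (assumption)
required_per_arm = 100_000    # sessions needed per arm for significance (assumption)

for concurrent_tests in (1, 2, 4):
    # Traffic is split evenly across tests, then 50/50 within each test.
    sessions_per_arm_per_day = daily_sessions / concurrent_tests / 2
    days_needed = required_per_arm / sessions_per_arm_per_day
    print(f"{concurrent_tests} concurrent test(s): ~{days_needed:.0f} days per test")
```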

And fourth, MAKE SURE YOU MONITOR. 

Once you start a test, make sure you are monitoring results as the test is running (and not just waiting until test completion to review the results). You’ll want to know immediately if there is a very big swing in revenue either up or down.

This will help you catch tests that significantly hurt revenue right away, or alert you that something might be broken.
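As one possible approach, a simple guardrail check like the sketch below could run on a schedule while the test is live; the 20% threshold and the RPM figures are assumptions for illustration only.

```python
def check_guardrail(control_rpm: float, test_rpm: float, threshold: float = 0.20) -> None:
    """Flag the test if RPM in the test arm swings more than `threshold`
    (20% by default, an assumed guardrail) relative to control."""
    change = test_rpm / control_rpm - 1
    if abs(change) > threshold:
        direction = "up" if change > 0 else "down"
        # In practice this might page the yield team or post to a chat channel.
        print(f"ALERT: test RPM is {abs(change):.0%} {direction} vs. control -- investigate now.")
    else:
        print(f"Within guardrails ({change:+.1%} vs. control); let the test keep running.")

# Example mid-test check with hypothetical numbers.
check_guardrail(control_rpm=1.80, test_rpm=1.20)
```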