Mutual Exclusions

Many testing programmes require that users see only one test at a time, or that specific tests are kept out of the same journey for a user.

In Webtrends Optimize, we have a few different ways to handle this.

Consider an ABn test

There are some great efficiencies to running ABn tests that most teams don't recognise.

When running mutually-exclusive experiments, your typical setup for 2 tests when looking at overall traffic share is:

Mutually exclusive Test 1: 25% control, 25% variation

Mutually exclusive Test 2: 25% control, 25% variation

Here, half of your traffic goes to the control, despite it being the same no-change pool of users, and only a quarter goes to each variation.

Instead, an ABn test allows you to boost traffic to each variation whilst also proportionately providing more traffic to the control for stronger comparisons. It's a win-win way of working.

Combined test: Control: 33.3%, Test 1 variation: 33.3%, Test 2 variation: 33.3%.

This means your experiments are more likely to reach statistical significance.

From this setup, more traffic flows to everything being compared - both the shared control group and the two variations.

Originally, each comparison was 25% variation vs. 25% control. Now, each comparison is 33.3% control vs. 33.3% variation. More traffic means a higher likelihood of detecting and validating a change in conversion rate.
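As a rough sketch of the arithmetic, with a hypothetical 10,000 visitors through the shared location:

var visitors = 10000; // hypothetical traffic through the shared location

// Two mutually-exclusive AB tests: each comparison is 25% vs. 25%
var separateControl = visitors * 0.25;   // 2,500
var separateVariation = visitors * 0.25; // 2,500

// One combined ABn test: each comparison is 33.3% vs. 33.3%
var combinedControl = visitors / 3;   // ~3,333
var combinedVariation = visitors / 3; // ~3,333

// Each side of every comparison gains roughly a third more traffic.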

The approach depends on the tests sharing a location - it would be a poorly run test if the control had a wider scope than the variation.

Use Projects

A core concept of the Project mechanism is that users will only be served one test per project - inherent Mutual Exclusion.

It again does require the tests to share locations, but beyond this it works without much fuss.

Consider the following series of tests:

  • 1 - Homepage USPs

  • 2 - Homepage Banners

  • 3 - Homepage Navigation

Typically, you would build these as completely separate tests, and they would live in different projects.

Instead, you can clone an existing test so the copies live in the same project, giving each clone whatever name, code, variations and segmentation you like. For the above setup, that would require 2 clones.

If the tests share a segment, be sure to throttle down your tests so one test doesn't receive 100% of traffic and leave nothing for the others.

  • Test 1: 33% - would leave 2/3rds for other tests

  • Test 2: 50% - would leave half for the last test

  • Test 3: 100% - take whatever is left

This is most easily thought about as "what should we leave for what's next" rather than what the test itself should consume.
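As a rough sketch of how those throttles compound (the maths, not product code):

var remaining = 1; // the full shared segment

var test1 = remaining * 0.33; // Test 1: 33% of everyone
remaining -= test1;           // 67% left for the others

var test2 = remaining * 0.5;  // Test 2: 50% of the remainder = 33.5% of everyone
remaining -= test2;           // 33.5% left

var test3 = remaining;        // Test 3: takes whatever is left = 33.5% of everyone

// Each test ends up with roughly a third of the shared segment.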

Priority Weighting

With multiple tests in a given project, you can control the order in which they're evaluated.

This is found in the dashboard carousel, a few slides along. Simply drag tests/targets from the left to the right, in the order you wish them to be evaluated.

Using segmentation

Are your tests on completely separate pages? If so, Segmentation could be a simple solution to your problem.

If you are using a non-localStorage version of the tag, there will be cookies identifying which experiments users fell into. You can use these for basic 1:1 exclusions:

"Only let people into my test if they don't have cookie _wt.project-42516-ta_1projectname"

This can be achieved as a quick solution directly inside the segmentation builder in the UI.
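If you prefer to express the same rule as custom JavaScript, a minimal sketch might look like the below - this assumes your segment accepts a JavaScript check that returns true to allow entry, and uses the cookie name from the example above:

// Allow entry only if the visitor does NOT carry the other experiment's cookie
function qualifiesForThisTest() {
    return !document.cookie.match(/(^|;\s*)_wt\.project-42516-ta_1projectname=/);
}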

Using JavaScript

This is the most flexible option and, whilst not as traffic-efficient as ABn tests, it ensures in all other scenarios that tests do not overlap.

If you're using Tag Version 5.6+

Since Version 5.6, we've simplified the logic for blocking your test from rendering and counting users. Complete this code and put it in your pre-render. Leave the rest of the pre-render empty, or use return to ensure the rest of your code doesn't execute. This example aborts "this test" if the visitor has seen ANY other test.

!function(){
    var thisTA = "ta_something"; // this test's alias

    // If the visitor has seen any test, and it wasn't this one, block this test
    if (WT.TestsHistory.getTests() && !WT.TestsHistory.seenTest(thisTA)) {
        WT.blockCodeRender.push(thisTA);
        WT.blockPV.push(thisTA);
        return;
    }

    // Add this test to the History, so the next test can pick up the logic
    WT.TestsHistory.addTest(thisTA);
}();

This code blocks the render of all future blocks (variation, post-render) and also doesn't count the user - effectively disabling the test for them.
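If you only need to exclude against one specific other test, rather than all of them, the same mechanism applies. A minimal sketch, where ta_something and ta_other are placeholders for your own test aliases:

!function(){
    var thisTA = "ta_something";

    // Abort this test only if the visitor has seen one specific other test
    if (WT.TestsHistory.seenTest("ta_other")) {
        WT.blockCodeRender.push(thisTA);
        WT.blockPV.push(thisTA);
        return;
    }

    // Record this test so the other experiment can run the same check
    WT.TestsHistory.addTest(thisTA);
}();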

If you're using Tag Version 5.5 or older

The Tests History library is a lightweight piece of JavaScript, typically kept in Pre-Init (an integration is coming for easier enablement). When enabled, it lets you write simple logic to decide when a test is allowed to run. Unlike Projects, because this handling is pure JavaScript, you can facilitate the most intricate logic, deciding at the very last minute (after polling/waiting, API calls, etc.) whether to allow someone into your test or not. Accurate usage depends on the Optimize Build Framework, which allows you to suspend and manually track views.

The library is found below, to be placed in Pre-Init in your tag. Its usage is described as follows:

To check against a specific test you want to exclude - here's a snippet from the OBF post-render:

/*/////////////////////////////////////////////////////////////
| ENTRY POINT TO THE TEST
-------------------------------------------------------------*/
Run: function()
{
    // Optional: if you have anything immediate to check that would exclude users, do it here.
    if (someConditionThatWouldExcludeUsers) {
        Test.abort("User matched some condition that would exclude users");
        return;
    }

    // Abort if the user has seen a specific other test.
    if (WT.TestsHistory.seenTest('ta_30_HomepageLayoutAB')) {
        Test.abort('Visitor has seen test 30 - excluding from this one.');
        // Stop function execution
        return;
    }

    // Add this test to the History, so test 30 can pick up similar logic
    WT.TestsHistory.addTest(Config.testAlias);

    Test.poll({

    // ...

To check against any/all tests:

/*/////////////////////////////////////////////////////////////
| ENTRY POINT TO THE TEST
-------------------------------------------------------------*/
Run: function()
{
    // If you have anything immediate to check that would exclude users, do it here.
    if (someConditionThatWouldExcludeUsers) {
        Test.abort("User matched some condition that would exclude users");
        return;
    }

    // Abort if the user has seen any tests other than this one.
    if (WT.TestsHistory.getTests().replace(Config.testAlias, '').length) {
        Test.abort('Visitor has seen another test. Abort.');
        // Stop function execution
        return;
    }

    // Add this test to the History, so other tests can pick up similar logic
    WT.TestsHistory.addTest(Config.testAlias);

    Test.poll({

    // ...

If you want to perform this check after polling/waiting for some conditions:

/*/////////////////////////////////////////////////////////////
| ENTRY POINT TO THE TEST
-------------------------------------------------------------*/
Run: function()
{
    Test.pageview.suspend("Waiting for cart");

    Test.poll({
        msg: 'Polling for cart',
        // Polling function
        when: function()
        {
            return document.querySelector('.header--cart-link .number-wrapper');
        },
        // Polling callback
        then: function()
        {
            // Run the checks you need to run.
            if (someConditionThatWouldExcludeUsers) {
                Test.abort("User matched some condition that would exclude users");
                return;
            }

            // Now check Tests History.
            // Abort if the user has seen any tests other than this one.
            if (WT.TestsHistory.getTests().replace(Config.testAlias, '').length) {
                Test.abort('Visitor has seen another test. Abort.');
                // Stop function execution
                return;
            }

            // Add this test to the History, so other tests can pick up similar logic
            WT.TestsHistory.addTest(Config.testAlias);

            // ...

The Tests History library

If using this approach, you will need the following JavaScript in your tag. We do not add any unnecessary weight to tags by default, so it needs to be enabled explicitly.

try {
    // Store the test aliases a user has seen
    WT.TestsHistory = (function(){
        return {
            cookieName: '_wt.testsHistory',
            domain: '',
            addTest: function(pTest)
            {
                var tests = this.getTests();
                var testsArr = [];
                if (tests)
                {
                    testsArr = tests.split(',');
                }

                var rxTest = new RegExp('(^|,)' + pTest + '(,|$)', 'i');

                // Already recorded - nothing to do
                if (tests.match(rxTest)) return;

                testsArr.push(pTest);

                // Write the test to a cookie (expires in 90 days)
                WT.helpers.cookie.set(this.cookieName, testsArr.join(','), 90);
            },

            seenTest: function(pTest)
            {
                var rxTest = new RegExp('(^|,)' + pTest + '(,|$)', 'i');
                var tests = this.getTests() || '';
                return Boolean(tests.match(rxTest));
            },

            getTests: function()
            {
                return WT.helpers.cookie.get(this.cookieName) || "";
            }
        };
    }());
} catch(err) {
    if (document.cookie.match(/_wt.bdebug=true/i)) console.log(err);
}
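Once the library is in place, usage is just the three methods shown above - for example:

// Record that the visitor has entered this experiment
WT.TestsHistory.addTest('ta_30_HomepageLayoutAB');

// true if the visitor has seen the named test
WT.TestsHistory.seenTest('ta_30_HomepageLayoutAB'); // => true

// Comma-separated list of everything they've seen, or "" if nothing
WT.TestsHistory.getTests(); // => "ta_30_HomepageLayoutAB"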
