Debug Mode

Enabling debug mode

Debug mode is enabled using the _wt.debug query string parameter. You then provide a number of letters corresponding to the level of verbosity you’d like in the output, ranging from 1 (v) to 5 (vvvvv). Our most verbose output would therefore see you adding _wt.debug=vvvvv to the query string, producing a URL similar to https://www.webtrends-optimize.com/?_wt.debug=vvvvv.

Ensure you have the Console tab in your developer tools open when doing this – this is where you’ll find our debug output.
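If you’d rather not edit the address bar by hand, a small snippet like the one below, pasted into the Console, appends the parameter and reloads the page. Only the _wt.debug parameter and its values come from the paragraph above; everything else is standard browser APIs.

```ts
// Append the Webtrends Optimize debug flag at full verbosity and reload.
// Run this from the browser Console on the page you want to debug.
const url = new URL(window.location.href);
url.searchParams.set("_wt.debug", "vvvvv"); // 1-5 v's controls verbosity
window.location.href = url.toString();
```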

Below, we’ll cover the main topics you’ll find in this output:

On initialisation

When the log starts, you should expect to see output that resembles this:

Finding matching Locations

This is the first step in knowing if your tests should run – finding locations (with valid tests attached) which match the current environment.
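As a rough mental model only (this is not Optimize’s actual code), a location can be thought of as a rule matched against the current environment, typically the URL. The pattern and names below are hypothetical.

```ts
// Conceptual sketch only - the pattern and names are hypothetical, not taken
// from Webtrends Optimize. A "location" is essentially a rule matched against
// the current environment (typically the URL).
const locationRule = { name: "Product pages", pattern: /\/product\// }; // hypothetical
const matched = locationRule.pattern.test(window.location.pathname);
console.debug(matched ? `Matched location: ${locationRule.name}` : "No matching locations");
```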

Invalid state – no matching locations

If this happens, ensure you have the right tag, that your test is in the right mode, and that your location matches the page you’re on.
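One quick sanity check you can run in the Console is confirming the tag is actually loaded on the page. The “webtrends” substring below is an assumption about how your tag URL is named; adjust it to match the script you actually deploy.

```ts
// Look for script tags whose src mentions "webtrends" - an assumption about
// the tag URL, not a guaranteed match for every installation.
const optimizeTags = Array.from(document.querySelectorAll<HTMLScriptElement>("script[src]"))
  .filter(script => script.src.toLowerCase().includes("webtrends"));
console.log(optimizeTags.length
  ? optimizeTags.map(script => script.src)
  : "No Optimize tag script found on this page");
```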

Valid state

If we find valid locations, the log continues.

Test evaluation

Wrong mode

If you’re in the wrong mode for a test, e.g. the test is in Staging mode and you aren’t viewing the site in Staging mode, the test won’t run.

Failed to match segment

If you don’t match the segment criteria for a given test, e.g. it’s a Mobile test and you’re on a desktop browser, you won’t be entered into it.
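As an illustration of what a segment check involves (this is not Optimize’s implementation), a “Mobile” segment ultimately boils down to a predicate over the visitor’s environment:

```ts
// Conceptual illustration only - not Webtrends Optimize's segment logic.
// A "Mobile" segment is a predicate over the visitor's environment; on a
// desktop browser it returns false, so the visitor never enters the test.
const isMobile = /Mobi|Android|iPhone|iPad/i.test(navigator.userAgent);
if (!isMobile) {
  console.debug("Segment 'Mobile' not matched - test skipped for this visitor");
}
```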

Success

If you’re granted entry into the test:

Success, for a test already bucketed into (cached decision)

If you’ve seen the test before, and so are “stuck” to that experience, no segment evaluation is done – you’re just served the content immediately.
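Conceptually, this “stickiness” works like the sketch below: once a decision exists for a visitor, it is reused instead of re-running segment evaluation. The function name and storage key are hypothetical, not how Optimize actually stores its decisions.

```ts
// Hypothetical sketch of a cached bucketing decision - names and storage are
// illustrative, not Webtrends Optimize internals.
function getExperience(testId: string, assign: () => string): string {
  const key = `bucket_${testId}`;          // hypothetical storage key
  let experience = localStorage.getItem(key);
  if (experience === null) {
    experience = assign();                 // segment evaluation only happens here
    localStorage.setItem(key, experience);
  }
  return experience;                       // repeat visits get the cached decision
}
```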

Conversion tracking

When your conversion call goes out

The CAPI debug offers a JSON output. Note that the OBF offers additional logging for conversions.

When your conversion call responds as tracked

Note: If you use our “beacon tracking” mode, you should not expect a response – it’s one-way communication.
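This is the same behaviour as the browser’s sendBeacon API, which is fire-and-forget: the snippet below (with a placeholder endpoint and payload, not Optimize’s real conversion call) only reports whether the request was queued, never what the server answered.

```ts
// Fire-and-forget beacon: the browser queues the request and gives you no
// response body. Endpoint and payload here are placeholders, not the real
// Webtrends Optimize conversion call.
const payload = JSON.stringify({ conversionPoint: "checkout-complete" }); // hypothetical
const queued = navigator.sendBeacon("https://example.com/track", payload);
console.debug(queued ? "Beacon queued - no response will follow" : "Beacon could not be queued");
```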

Failure – user not in test

If the user isn’t in the test you’re trying to track a metric for, the conversion call will fail. We only track metrics when users fall into a test.
