How to measure page load performance as part of your experiences
Written by Optimize Team
Updated over a month ago

Note: This approach does not work as well with Single Page Applications. See the final point of the "Considerations" section, which explains why they do not report genuine, useful metrics.

Webtrends Optimize lets you capture all manner of data, and page load performance is one of those options.

Although officially deprecated, the "Performance Timing API" - a means of capturing page load performance metrics - is still available in most browsers. Where it is, we can capture that data and compare any differences observed as part of your variation.

This document describes where the data comes from, what it means, how to capture it, and what data you'll get back out in the Optimize reports.


Available performance metrics

Most modern browsers support the PerformanceTiming API.

Data courtesy of caniuse.com: https://caniuse.com/mdn-api_performancetiming

This API comes with useful metrics such as:

  • domInteractive: Time until the document has been parsed and the page can be interacted with.

  • domComplete: Time until the document and its sub-resources have finished loading. Note that asynchronously loaded content may appear after this moment.

  • loadEventEnd: Time until all scripts have completed and the "load" event has finished firing.

We can capture these into Custom Data for your experiment.
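As a sketch of the arithmetic involved: the raw PerformanceTiming values are absolute timestamps (milliseconds since the Unix epoch), so each metric is reported relative to requestStart. A minimal, illustrative helper (the timing object below is a mock, not real browser data):

```javascript
// Illustrative helper: convert absolute PerformanceTiming timestamps
// (milliseconds since the Unix epoch) into durations relative to requestStart.
function relativeTimings(timing) {
  var start = timing.requestStart;
  return {
    domInteractive: timing.domInteractive - start,
    domComplete: timing.domComplete - start,
    loadEventEnd: timing.loadEventEnd - start
  };
}

// Mock timing object for illustration; in the browser you would pass
// window.performance.timing instead.
var mockTiming = {
  requestStart: 1700000000000,
  domInteractive: 1700000000850,
  domComplete: 1700000002100,
  loadEventEnd: 1700000002150
};

console.log(relativeTimings(mockTiming));
// → { domInteractive: 850, domComplete: 2100, loadEventEnd: 2150 }
```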


Capturing performance metrics into Custom Data

As part of your script, such as in post-render, you can capture an event for the above.

Example code:

!function () {
    if (!window.performance || !window.performance.timing) return; // API not available for this user

    var testAlias = "ta_something";
    var check = setInterval(function () {
        if (!window.performance.timing.loadEventEnd) return; // page is still loading
        clearInterval(check);

        var start = window.performance.timing.requestStart;

        // Page has loaded, capture the event
        WT.click({
            testAlias: testAlias,
            conversionPoint: "performance_info",
            data: {
                domInteractive: window.performance.timing.domInteractive - start,
                domComplete: window.performance.timing.domComplete - start,
                loadEventEnd: window.performance.timing.loadEventEnd - start
            }
        });
    }, 500);
}();

The data we send will contain a millisecond duration for each of these metrics.
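For illustration, the object passed to WT.click might look like the following. The alias and all values here are placeholders; real values vary per visitor:

```javascript
// Hypothetical example of a captured conversion payload.
// "ta_something" and the numbers are placeholders, not real data.
var examplePayload = {
  testAlias: "ta_something",
  conversionPoint: "performance_info",
  data: {
    domInteractive: 850,  // ms from requestStart until the page was interactive
    domComplete: 2100,    // ms until the DOM finished loading
    loadEventEnd: 2150    // ms until the "load" event completed
  }
};

console.log(JSON.stringify(examplePayload, null, 2));
```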

Add Custom Data Fields

Alongside the code, you will need to inform the platform to expect these metrics.

If you are doing this routinely, consider adding it to the Automated Conversions section, at which point it will get automatically added for all future tests.

For a one-off, add it to your test in the Conversions tab of the Advanced Editor.

This will unlock the Non-binomial / Continuous Metric reporting that will make this data valuable.


Optimize reports output for performance metrics

In your reports, you will find a panel for Non-binomial data. This will include your performance metrics, if set up correctly.

Note: Outcomes are inverted for this data. A reduction in these numbers is a good outcome, and an increase is bad - for example, a negative uplift on loadEventEnd means the variation loaded faster. Please keep this in mind when interpreting the data and drawing conclusions from it.

Screenshot from WTO7 UI. The legacy interface will contain the same metrics with a different presentation.

In this report, you will see useful metrics such as:

  • Avg per tested visitor

  • Avg per test view (per time we fired the metric)

  • Means and Medians (note considerations below for studying averages)

  • Uplifts

  • Chance to beat control

  • Significance


Considerations

Many factors can affect the numbers we collect.

The broader the scope of the test, the more variables will influence this figure and stretch the upper and lower bounds of the performance scores captured. For example, you may have a particularly slow product detail page (PDP); your experiment may send more people to that page (great), but its poor performance then becomes part of your monitored score (bad).

Next, consider that performance while your experiences are being delivered matters, but these experiences are typically short-lived. Winners are implemented natively on your website, so instead of the page being reshaped at runtime to match the experience, the user downloads the correct version in the first place. Negative scores, whilst important, are therefore short-lived problems in isolation.

Further, the trend worth investigating is performance across the many experiences you run. In isolation, one experience won't tell you much. But if many point to negative performance, you may want to reconsider how you make your changes - for example, using CSS over JavaScript, or Mutation Observers over polling. These are commonly the better approaches, and seeing the measured impact of faster and slower techniques may justify the extra effort of the "better" solution.

Finally, note that single page apps bend how these metrics are gathered. Instead of genuine page loads, they load the page only once. All subsequent navigation is the existing page being transformed, with the URL updated to reflect where the user has reached. Whilst effectively the same for the user, the performance metrics do not keep up with these virtual pageloads, and as such the technique here is neither applicable nor useful.

There are no meaningful metrics to capture for single page applications as you go from virtual page to page.
