What to look for when QAing an experiment

There are several aspects to consider for a thorough QA of an experiment.

Doing so is crucial - the smallest of errors can degrade the overall user experience, hurting sales/leads and skewing the results of your experiment.

For everything that you validate, you should check across the relevant browsers, devices and scenarios.

Browsers and devices

If you're running an experiment across multiple browsers on desktop, tablet and mobile, you should QA against as many of those combinations as possible.

Chrome DevTools is not a sufficient replacement for QAing on real devices - real devices often run on different engines and have their own set of quirks.

DevTools also won't let you appreciate the nuances of touch as the mechanism for interacting with the page instead of clicking - just one of many reasons to use real devices wherever possible.

Scenarios

Nowadays, many websites have page templates with subtle - and sometimes large - differences between them. For example, perhaps not all PLPs are built the same, nor all PDPs.

In principle, this is a good thing. More relevant pages are always welcome, but this adds a layer of complexity to the build and QA that you should make sure you validate against.

For example - PLPs are often dynamically built. What happens if someone clicks a filter or changes their sort order? What happens if someone triggers the lazy-load at the bottom? Do your changes still apply to all product tiles?
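
As a rough illustration, here's a minimal sketch of re-applying a change whenever new tiles appear. The selectors and the decorateTile() change are placeholders for whatever your own build uses:

```
// Minimal sketch: re-apply an experiment change whenever new product tiles
// are rendered (filters, sort changes, lazy-load). Selectors and the
// decorateTile() change are hypothetical placeholders.

const TILE_SELECTOR = '.product-tile'; // assumption: adjust to the site's markup

function decorateTile(tile: HTMLElement): void {
  // Guard against double-application when the grid re-renders.
  if (tile.dataset.expApplied === 'true') return;
  tile.dataset.expApplied = 'true';
  // ...apply your variation change to the tile here...
}

// Apply to tiles already on the page.
document.querySelectorAll<HTMLElement>(TILE_SELECTOR).forEach(decorateTile);

// Watch the listing grid for tiles added by filtering, sorting or lazy-load.
const grid = document.querySelector('.product-grid'); // assumption: grid container
if (grid) {
  new MutationObserver(() => {
    grid.querySelectorAll<HTMLElement>(TILE_SELECTOR).forEach(decorateTile);
  }).observe(grid, { childList: true, subtree: true });
}
```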

Make sure scenarios are understood before dev starts.

One of the biggest mistakes many programmes make is not documenting scenarios, and only discovering them during QA. Had a developer known about them, they would likely have accounted for them.

Documenting these things up-front, despite pressure to get projects into dev or live, will more often than not shorten the end-to-end completion time.

Things to check

Visual checks

Does the experiment look as it should? Are the correct fonts used, are elements aligned, is the text accurate?

Minor mistakes or misalignments can cause a jarring visual effect, negatively impacting the user experience and the user's likelihood of completing their journey.

Functional checks

If you're building new components, do they work as they should?

Consider all possible inputs and outputs, not just the obvious/positive scenarios.

What happens if products aren't in stock, or if an action isn't successful? All of these smaller things can happen, and a well-thought-out component could make all the difference between a dead-end user journey and a successful experience.
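
As an illustration only, here's a sketch of a defensive handler for a hypothetical "add to bag" component. The endpoint, status codes and messages are assumptions - the point is that every unhappy path leads somewhere sensible rather than to a dead end:

```
// Minimal sketch of a defensive click handler for a new "add to bag" component.
// The endpoint, status codes and messages are hypothetical placeholders.

async function onAddToBag(button: HTMLButtonElement, productId: string): Promise<void> {
  button.disabled = true; // prevent double-submits while the request is in flight
  try {
    const res = await fetch('/api/basket/add', { // assumption: the site's basket endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ productId }),
    });

    if (res.status === 409) {
      showMessage('Sorry, this item is out of stock.'); // unhappy path, not a dead end
      return;
    }
    if (!res.ok) {
      showMessage('Something went wrong - please try again.');
      return;
    }
    showMessage('Added to your bag.');
  } catch {
    showMessage('Something went wrong - please try again.'); // network failure
  } finally {
    button.disabled = false;
  }
}

function showMessage(text: string): void {
  // assumption: however your component surfaces feedback to the user
  console.log(text);
}
```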

Data collection and rollup

If you're tracking metrics, are the events firing from the page as expected, and showing up in the Optimize UI?

Consider not just the events - if you're sending Custom Data along with your metrics, make sure the data is extracted correctly and the values are accurate. Test as many inputs as is reasonable, so that edge-case scenarios don't break your data collection.
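
For illustration, here's a minimal sketch of validating a value before attaching it as Custom Data. trackMetric() is a stand-in for whatever call your platform provides, and the selector is an assumption:

```
// Minimal sketch: extract a value for Custom Data and validate it before sending.
// trackMetric() is a placeholder for your platform's tracking call; the selector
// and field name are assumptions.

declare function trackMetric(name: string, customData?: Record<string, string | number>): void;

const priceEl = document.querySelector('.pdp-price'); // assumption: price element
const rawPrice = priceEl?.textContent?.replace(/[^\d.]/g, '') ?? '';
const price = Number.parseFloat(rawPrice);

if (Number.isFinite(price)) {
  trackMetric('add-to-bag', { price });
} else {
  // Don't let one malformed page break data collection - send the event without
  // the custom value rather than sending garbage or throwing.
  trackMetric('add-to-bag');
}
```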

Accurate data collection is just as important as accurate visual/functional experiences. Poor data leads to unreliable experiment results, and considerable amounts of wasted time.

Segmentation and targeting

Make sure that you're not just targeting the right users and pages, but also that you're not unintentionally capturing more people/places than you're supposed to.

Negative testing is particularly important here - it's very easy to make sure you fall into an experience, but far more challenging to think about who might unintentionally fall in.

Load time and performance

Is your experience taking too long to show up on the page?

Perhaps you're waiting for a library to become available, and it's not loading quickly enough onto the page, causing a noticeable delay.

On the other hand, is the experiment slowing the page down?

We've seen plenty of cases where developers will be polling indefinitely within an experiment, often every 5-10 milliseconds. They do this to try and patch over holes in the user experience as quickly as possible on dynamic pages, but this drags down performance.
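
A more considerate pattern is to poll at a sensible interval with a hard timeout, so the experiment either applies or fails gracefully. The sketch below assumes a hypothetical Carousel library as the dependency being waited for:

```
// Minimal sketch: wait for a dependency with a sensible interval and a hard
// timeout, rather than polling every few milliseconds forever. The condition
// and timings are assumptions to adapt.

function waitFor(condition: () => boolean, timeoutMs = 5000, intervalMs = 100): Promise<void> {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const timer = setInterval(() => {
      if (condition()) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() - started > timeoutMs) {
        clearInterval(timer); // give up cleanly instead of polling indefinitely
        reject(new Error('Timed out waiting for dependency'));
      }
    }, intervalMs);
  });
}

// Example usage: only run the change once a (hypothetical) Carousel library exists.
waitFor(() => typeof (window as any).Carousel !== 'undefined')
  .then(() => { /* apply the experiment change */ })
  .catch(() => { /* fail gracefully - leave the control experience in place */ });
```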

Looking outside of your test area

Leaking code into unintended areas of the website is far too easy. Class names are often shared across the site, and single-page apps don't make things any easier: stylesheets might not be dropped and JavaScript event handlers may persist across more pages than expected.
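
One way to reduce the risk is to scope everything to an experiment-specific class and tidy up on route changes. This is just a sketch - the class name, page check and styles are placeholders:

```
// Minimal sketch: scope an experiment's CSS and DOM additions so they can be
// cleanly removed if an SPA navigates away from the targeted page. The page
// check, class name and styles are hypothetical placeholders.

const EXP_CLASS = 'exp-1234'; // assumption: a unique, experiment-specific hook

function isTargetPage(): boolean {
  return window.location.pathname.startsWith('/sale'); // assumption: targeted path
}

function applyExperiment(): void {
  if (document.querySelector(`style.${EXP_CLASS}`)) return; // already applied
  const style = document.createElement('style');
  style.className = EXP_CLASS;
  // Scope every rule to the experiment's own class, not to shared site classes.
  style.textContent = `.${EXP_CLASS}-banner { background: #fff8e1; }`;
  document.head.appendChild(style);
}

function removeExperiment(): void {
  document.querySelectorAll(`.${EXP_CLASS}, [class*="${EXP_CLASS}-"]`)
    .forEach(el => el.remove());
}

function onRouteChange(): void {
  if (isTargetPage()) applyExperiment();
  else removeExperiment();
}

// Re-check whenever the SPA changes route - adapt this to however your site
// signals navigation (popstate is only one possibility).
window.addEventListener('popstate', onRouteChange);
onRouteChange();
```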

Thorough code reviews and regression testing of core site functionality are both underrated and increasingly valuable.
