Importing User Profiles into Optimize for Segmentation

Looking to import other types of data, or for other reasons?

We can handle a wide variety of data types, for many different purposes. If you don't find a document that covers your use case, reach out to us for support.

Introduction

User profiles are a valuable resource for segmentation and personalisation. If you know a user has certain interests, a purchase history, etc., delivering them a more relevant experience is far easier.

Webtrends Optimize can ingest user profiles and, as this article will show, surface that data for use in segmentation.

The pieces required to make this work are:

  • Your data

  • Our storage

  • A "processing" job on our side

  • A paired User ID we can find on the front end of the website

  • A lookup API, or direct injection into your tag

  • Our segmentation builder, or Javascript code

We will explore each of these pieces below.

Getting your data into our storage

Our customers will send us data, either directly or via a third party such as Planning Inc. For segmentation-style data, we expect to receive a list of User IDs.

These could be our User IDs, or any other platform's, e.g. Google Analytics cookie IDs. The key is that the User ID must be known before the wt.js tag fires, as this tag is responsible for lookups taking place.

Alternatively, you can delay the execution of your experiments to wait for this data. We do this, for example, with Bloomreach's data layer.

Acceptable data formats

We typically receive data in CSV (comma-separated values) format. This is usually best suited to the segmentation use case, as the data is far smaller in this format than in most others - see the comparison after the list below.

We are however happy to receive data in most formats, including:

  • CSV

  • JSON

  • XML

  • Avro

  • Parquet
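
As a purely illustrative comparison of the first two formats, the same segment list could look like this - the field names in the JSON version are placeholders, not a required schema:

wt_vipcustomers.csv:

UserID
410070279
443510612

wt_vipcustomers.json:

{ "segment": "wt_vipcustomers", "userIds": [ "410070279", "443510612" ] }

The CSV carries far less overhead per row, which is why we recommend it for simple ID lists.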

Push - you uploading data to us

Several mechanisms can be used to get your data over to us. These mechanisms include, but are not limited to, the below. If you don't find something you need on this list, please reach out. We can support most initiatives.

Note: We do not yet have mechanisms in our UI to handle this, but the feature is planned and will be made available to users in 2024.

  • Blob storage / S3 containers - we can create S3-style storage containers, and provide you with keys to upload your data.

  • SFTP - we can provide you with the credentials required for SFTP server uploads.

  • REST API - we can provide you with an API to upload small amounts of data into.
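
As a rough sketch of the REST API route, a push from a Node.js script might look like the one below. The endpoint URL, header name and API key are placeholders for illustration only - the real values are provided by us when the integration is set up.

// Sketch only: the endpoint and x-api-key header below are placeholders.
const fs = require("fs");

const csv = fs.readFileSync("wt_vipcustomers.csv", "utf8");

fetch("https://upload.example-webtrends-endpoint.com/segments/wt_vipcustomers", {
    method: "POST",
    headers: {
        "Content-Type": "text/csv",
        "x-api-key": "YOUR_API_KEY"
    },
    body: csv
}).then(function (response) {
    console.log("Upload responded with status", response.status);
});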

Pull - us fetching data from you

We are happy to pull data from your sources, given the credentials to do so. This typically happens by connecting to either a storage bucket or a REST API.

We appreciate that every setup for this route will differ.

Please reach out to us to discuss how best to make this happen.

Processing your data

This part is handled fully by us, but is detailed here for your awareness.

We will often be sent data that pivots off Segment Name. For example:

  • wt_vipcustomers.csv - list of VIP customers

  • wt_recentpurchasers.csv - list of recent purchasers

  • etc.

This is ideal for you when compiling the data set, and for us when receiving it, as in both cases the data is as slim as possible. To look up all of the segments a specific user belongs to, however, it's less than ideal. We don't want to scan dozens of files looking for a User ID - for best performance, we want User-centric records.

For example, turning:

wt_vipcustomers.csv:

UserID
410070279
443510612
499976979
484711422
418425464
459606483
462408019
435731083
425431022
485166787
...

wt_recentpurchasers.csv:

UserID
410070279
443510612
499976979
484711422
...

Into:

"410070279": [ "wt_vipcustomers", "wt_recentpurchasers" ],
...

The latter is far more efficient for lookups, so please account for us setting this job up in your timelines.
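
To illustrate the shape of that job - we build and run this for you, so the snippet below is purely for understanding - a simplified Node.js sketch of the pivot might be:

const fs = require("fs");

// One file per segment, each containing a UserID header followed by IDs.
const files = ["wt_vipcustomers.csv", "wt_recentpurchasers.csv"];

const userSegments = {}; // User ID -> array of segment names

files.forEach(function (file) {
    const segmentName = file.replace(".csv", "");
    const userIds = fs.readFileSync(file, "utf8").trim().split("\n").slice(1); // skip the UserID header row

    userIds.forEach(function (userId) {
        userId = userId.trim();
        if (!userSegments[userId]) userSegments[userId] = [];
        userSegments[userId].push(segmentName);
    });
});

// Produces e.g. { "410070279": [ "wt_vipcustomers", "wt_recentpurchasers" ], ... }
fs.writeFileSync("user_segments.json", JSON.stringify(userSegments));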

Finding a matching User ID on the front end of your website

We need to perform our lookups using the same User ID you send us. In this example, we look at how to achieve this using the _ga cookie ID.

The _ga cookie looks somewhat like:

_ga=GA1.2.1310337251.1704722312

There are a few irrelevant parts to this cookie, but the third block is the important one - in this case, 1310337251 is our User ID.

We can pull this user ID out of the cookie quite simply:

// Read the _ga cookie via the WT.helpers cookie utility
var cookie = WT.helpers.cookie.get("_ga");
// The third dot-separated block is the User ID, e.g. 1310337251
if (cookie) cookie = cookie.split('.')[2];
else cookie = "user_id_not_found";

This ID can then be fed into whichever mechanism we use to perform a lookup.

Delivering segment data to the page

However the data is populated, it finds its way into our data layer.

It is then ingested as normal - for more details, see Ingesting Data Layers.

Into your dynamic tag

If you're already using a dynamically built tag (these typically come from webtrends-optimize.workers.dev), we can inject our segments directly into it.

This removes the need for a secondary lookup, and marginally improves overall load speed.

Lookup API

Alternatively, we can make this available over a REST API, allowing you to look up segment groups by User ID.

To make this work, you would build a custom pull integration, delaying the execution of your post-load script whilst fetching the data from the API we provide.

We configure this API on an as-needed basis, and will share its details with you at that point.
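
As a sketch of what that pull integration could look like - the endpoint, response shape and runPostLoadScript function below are illustrative assumptions, not the real API - you might fetch the user's segments and only resume your delayed post-load script once the response (or a failure) arrives:

// Sketch only: the endpoint and response shape are placeholders.
var userId = cookie; // the _ga-derived User ID from the earlier snippet

fetch("https://lookup.example-webtrends-endpoint.com/segments/" + userId)
    .then(function (response) { return response.json(); })
    .then(function (segments) {
        // e.g. { "wt_vipcustomers": "1", "wt_recentpurchasers": "1" }
        window.wt_segments = segments; // store wherever your post-load script expects to read it
        runPostLoadScript(); // hypothetical function that resumes your delayed post-load script
    })
    .catch(function () {
        runPostLoadScript(); // don't hold experiments back indefinitely if the lookup fails
    });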

Using this data to build segments

As mentioned, the data is found in our data layer.

It can be found with Javascript at:

WT.optimizeModule.prototype.wtConfigObj.data
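
For example, a Javascript check for membership of one of the segments above might look like this - a sketch that assumes each ingested segment is surfaced as an attribute set to "1" for members, as in the rule further down:

var data = WT.optimizeModule.prototype.wtConfigObj.data;

// Assumes each segment name appears as an attribute set to "1" for members
var isVip = data && data["wt_vipcustomers"] === "1";

if (isVip) {
    // deliver the VIP experience
}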

Alternatively, it can be referenced directly in the Segmentation or Location builders.

If the check is just for the presence of an ID in the list, simply use the rule:

Data Object Attribute "filename_here" is equal to "1"

Using this in a segment controls entry to the experience (note that targets re-evaluate on every load, but tests are sticky).

Using this in a location evaluates on every page load, and so users can fall out if they no longer qualify in the future.

For full details, see Optimize Data Layer.
