With Webtrends Optimize, you are able to ship your DataFrame to our instance of Azure Blob Storage.
We will then handle the processing of that data.
What data can I send?
We typically see two types of data sent to us.
1. Product-feed data
You may have product-feed data, including attributes and metadata, available in Databricks. By shipping this to Webtrends Optimize, you can power your onsite experiences with additional data that might not be available on the page.
For example, serving Product Recommendations based on a Similar Products algorithm, which requires us to know product attributes in order to cluster them.
In this scenario, you should make sure we have a Product ID that is identical to what can be looked up on the website, so we have something to pair your records against.
There are no other restrictions to the nature or shape of what you send us.
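As an illustration, a minimal product feed might look like the sketch below. The column names other than the Product ID are hypothetical; only the pairing ID needs to match what appears on your website.

```python
import csv
import io

# Hypothetical product-feed rows; "product_id" must match the IDs
# visible on your website so records can be paired. All other columns
# are illustrative -- there are no restrictions on shape or content.
rows = [
    {"product_id": "SKU-1001", "name": "Trail Jacket", "category": "Outerwear", "colour": "Green"},
    {"product_id": "SKU-1002", "name": "Ridge Boot", "category": "Footwear", "colour": "Brown"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["product_id", "name", "category", "colour"])
writer.writeheader()
writer.writerows(rows)

print(buffer.getvalue().splitlines()[0])  # → product_id,name,category,colour
```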
2. Customer 360 data
You may have Customer Data, including traditional metrics like Lifetime Value and key Segments, as well as Machine-Learning calculated values like Propensity to Buy or Churn.
By sending this data to Webtrends Optimize, you gain additional attributes to segment by in your Audiences.
In this scenario, we expect a User ID that can be paired with something on the page, such as:
User IDs from your CMS
GA / other Analytics Client IDs
Webtrends Optimize User ID.
We are flexible about which ID we are provided. However, we suggest picking something that is available for the majority of your users, not just those far along a journey (such as a logged-in user ID), if possible. Doing so widens your potential pool of matches.
Beyond this, you are welcome to ship us any user attributes you wish.
Please do not send us Personally Identifiable Information (PII) without prior conversation.
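As a sketch, Customer 360 records might take the following shape. The field names and ID format are hypothetical; the only requirement is a user ID that can also be read on the page.

```python
# Hypothetical Customer 360 records. "user_id" should be an identifier
# that can also be read on the page (a CMS user ID, an analytics
# client ID, or the Webtrends Optimize user ID) -- never raw PII.
customers = [
    {"user_id": "GA1.2.123456789.1700000000", "lifetime_value": 412.50,
     "segment": "frequent_buyer", "propensity_to_buy": 0.82},
    {"user_id": "GA1.2.987654321.1700000001", "lifetime_value": 35.00,
     "segment": "new_visitor", "propensity_to_buy": 0.17},
]

# Index by user_id so each record can be paired against the onsite ID.
by_id = {c["user_id"]: c for c in customers}
```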
How do I set up the transfer?
1. Get your credentials from Webtrends Optimize
We will supply you with credentials for an Azure Blob Storage account. This will include an Account, Container and expected Filename structure.
2. Set up your Workflow
Head to Jobs & Pipelines
Select Job
Everyone's job will differ somewhat at this stage, as you define or select your own DataFrames and notebooks.
You will add something like this to the bottom of your Notebook, which will ship data to our Azure Blob Storage instance:
df.write.mode("overwrite").option("header", True).csv("abfss://<container>@<storage_account>.dfs.core.windows.net/<file_path>")
For exact details and credentials, please reach out to Support.
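A fuller notebook cell might look like the sketch below. This is one common pattern (account-key authentication via Spark config, with the key held in a Databricks secret scope); the account name, container, path, and secret names are placeholders, and Support will supply the real values.

```python
# Hedged sketch for a Databricks notebook cell. The storage account,
# container, file path, and secret scope/key names are placeholders --
# Webtrends Optimize Support supplies the real values. Keep credentials
# in a secret scope, never in plain text in the notebook.
storage_account = "<storage_account>"
container = "<container>"
file_path = "<file_path>"

# One common auth pattern: account-key authentication via Spark config.
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    dbutils.secrets.get(scope="<scope>", key="<key-name>"),
)

# Write the DataFrame as headered CSV to the supplied container.
(df.write
   .mode("overwrite")
   .option("header", True)
   .csv(f"abfss://{container}@{storage_account}.dfs.core.windows.net/{file_path}"))
```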
Make sure you set it to run on a Schedule, so the data is refreshed on a regular basis.
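If you configure the job through the Databricks Jobs API rather than the UI, the schedule block of the payload looks roughly like this. The cron expression (daily at 02:00) and timezone are illustrative examples, not requirements.

```python
# Hedged sketch of the "schedule" block in a Databricks Jobs API
# payload; the Quartz cron expression (daily at 02:00) and timezone
# are illustrative examples.
schedule = {
    "quartz_cron_expression": "0 0 2 * * ?",  # Quartz syntax: every day at 02:00
    "timezone_id": "Europe/London",
    "pause_status": "UNPAUSED",
}
```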
That's it!
Once you've set this up and run it once, we will see the file populate on our side and handle the rest.