Overview
Cordial Data Automations allow you to sanitize, normalize, transform, and programmatically enrich customer data. You can supply a set of data from your Cordial account or from an external source, perform any number of transformations on that data, and then load it to a supported internal or external destination.
The Cordial UI simplifies basic functions such as data column mapping between internal and external sources, and loading your contact data to supported social media audience integrations.
We also provide a flexible scripting interface for advanced transformation use cases. You can create a custom script that performs one or more data operations in concert. Your script can contain Smarty data transformation logic built on the same powerful utilities you have come to know and love. Additionally, we have created utilities specific to data transformations that allow you to operate on almost any data collection in Cordial.
Data Automations Breakdown
Data Automations are made up of three main building blocks: data source, data transformation, and data destination.
Data source
A data source can either be internal or external.
- An internal source gets data from a specific data collection within the Cordial platform. Data collections currently supported as internal sources include contacts, orders, events, and supplements.
- Data can be extracted from external sources including local file uploads, FTP/SFTP, HTTPS, Amazon S3, and Google Cloud Storage.

Recurring data jobs can pick up import files from the source directory by looking for a specific file name or for a pattern in the file name prefix. This is relevant when the external source is FTP/SFTP, Amazon S3, or Google Cloud Storage.

Use the Smarty getNow utility to match the current date in the file name:
Your import file name may have a predictable prefix with a unique timestamp appended when the file is added to the source directory. The resulting file name may look similar to this: contacts_2020_09_14_09.csv. We can look for the known prefix and use Smarty to match the expected timestamp at the time the scheduled import occurs. If no files match this pattern, the data job will run again at the following scheduled date and time.
/path/to/contacts_{$utils->getNow('America/Los_Angeles', 'Y_m_d_H')}.csv
Use the wildcard (*) character after a known prefix:
The wildcard will match any characters in the file name after the prefix. Your import file name could start with contacts_ but have a unique string appended when the file is added to the source directory, resulting in a file name similar to this: contacts_crmdata2.txt. Simply use the wildcard to match the appended string.
/path/to/contacts_*.txt
Combine Smarty and the wildcard character:
You can use Smarty in conjunction with the wildcard character to support matching other file name patterns.
/path/to/contacts_{$utils->getNow('America/Los_Angeles', 'Y_m_d_H')}_*.csv
Choose one of two data structures depending on your data source file type:

- Columnar file - Indicate whether the file contains a header row, then choose the delimiter type.
- JSON line file - Basic column mapping is not available when working with this data structure. JSON line files do not have a header row; each record is stored on its own line rather than separated by a delimiter.
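For comparison, the same sample record could appear as follows in each format (the email and subscribe_status columns here are illustrative):

```text
Columnar file (CSV with a header row, comma delimiter):

email,subscribe_status
mary@example.com,s

JSON line file (one JSON object per line, no header row):

{"email":"mary@example.com","subscribe_status":"s"}
```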
You can use Advanced mode to skip the file prefetch and provide a sample header row of the data expected in your file. This allows you to set up a data job before the import file is available at the external source. Skipping the prefetch also improves front-end performance with large files, which are then accessed when the data job runs rather than at the beginning of setup.

Data transformation
A data transformation is an optional but powerful step in a Data Automation. Once you determine the data source, you can perform data transformations to update any of the incoming data and perform other complex transformation functions before sending data to its destination. You can also use the data transformation to target specific data points of the larger data set and store them in one or more internal destinations (see the following section).

- Pre Job Script - Runs Smarty only once and is used to set variables and other data objects for use by the main script.
- Transformation Script - Smarty in the main transformation script will run repeatedly for each row of data.
- Post Job Script - Runs Smarty only once after the main transformation script is done rendering.
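As an illustrative sketch of how the three scripts divide the work (the variable names here are hypothetical, and the per-script comments describe when each block runs):

```smarty
{* Pre Job Script: runs once, before any rows are processed.
   Set variables here for use by the main script. *}
{$defaultStatus = 'none'}

{* Transformation Script: runs once for each row of data.
   $dataRecord holds the current row's mapped values. *}
{if $dataRecord.subscribe_status == 's'}
    {$ss = 'subscribed'}
{else}
    {$ss = $defaultStatus}
{/if}
{$utils->jsonPrettyPrint($dataRecord)}

{* Post Job Script: runs once, after the main script
   has rendered for every row. *}
```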
Data destination
A data destination, like the data source, can either be external or internal and can be an optional step depending on your intent.
- Currently supported external data destinations include local file download, FTP/SFTP, HTTPS, Amazon S3, and Google Cloud Storage.

- An internal destination allows you to store data within a specific data collection in Cordial. Storing data to an internal destination is accomplished in one of two ways:
- Through a basic mapping via the UI. Currently, you can only use the contacts collection as the internal destination via basic data mapping.
- Via data transformation through a library of utilities that can add and/or update one or many collections. Currently supported internal destinations for data transformations include contacts, supplements, orders, products, and events.
Create Data Automations
Navigate to Data Jobs from the left navigation menu. Select Create New Data Job to get started.

You will be able to choose from four different types of data jobs:
- One time - Run a data job once only.
- Recurring - Used to run data jobs automatically based on a set schedule.
- Event triggered - Used when the data job is to be triggered by an event.
- API triggered - Used to trigger data jobs via API calls.
There are some limitations based on the type of data job you select:
- Event triggered and API triggered Data Automations must include a data transformation.
- Only one time and recurring data jobs can be used to send data to an internal destination through a basic column mapping.
- Data Automations that use an external data source cannot be used to send data to external data destinations - only to internal destinations using a scripted transformation.
View Data Automations Status
You can view your draft, one time, and automated data jobs using the left navigation menu.
Data Jobs > One Time Data Job Drafts - View a list of one time data jobs in draft status. A draft is saved immediately after you create a new one time data job.
Data Jobs > Completed One Time Data Jobs - Displays a list of one time data jobs that you ran. Depending on the data job complexity and the number of concurrently requested jobs, the status may be one of the following:
- Complete - The data job is done running.
- Pending - Waiting to begin running.
- Failed - Data job stopped running before it was completed. Please check the log for more information.
Status labels that indicate a data job is still running:
- Processing
- Pre-processing
- RecordsQueued
Data Jobs > Data Job Automations - View the status of created data job automations. Job automations are added to this list immediately after you create them, including those that have yet to be enabled and published.
Test Your Script
To test a transformation, you must provide a sample row of data in order to see the expected results and test the validity of your script. During setup of a data job that contains a transformation, click Edit to access the Transformation Script window.

If your data source is internal contacts, you will be able to select an existing contact to test the transformation, but note that running the test script will add or update the selected contact record, so we recommend running your test with a contact record reserved for testing purposes.
You can select the desired test contact from the bottom left of the Transformation Script window via the Test as menu.

If your data source is internal events, orders, or supplements, you will be able to enter a sample record written in JSON. To display the JSON input dialog window, click the Test Script button from the bottom left of the Transformation Script window.
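For example, a sample test record for an internal orders source might look like the following. The field names here are hypothetical; use the fields your orders collection actually contains.

```json
{
    "orderID": "1001",
    "purchaseDate": "2020-09-14",
    "items": [
        {"sku": "ABC-123", "qty": 1}
    ]
}
```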

If your data source is an external CSV or JSONL file, you will be able to insert sample values of a record or insert a row directly from your file. To display the import file test options dialog window, click the Test Script button from the bottom left of the Transformation Script window.

Status Log and Rendered Output
You can use the Log tab to see a breakdown of utility functions that have been executed with success or failure. Click on individual rows to expand for additional details.

We recommend using the $utils->jsonPrettyPrint($object) utility to output values as you develop your script. The utility renders data objects in the Rendered Output tab.
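For example, to inspect the incoming row while developing a transformation script:

```smarty
{* Prints the current row's data to the Rendered Output tab *}
{$utils->jsonPrettyPrint($dataRecord)}
```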

Note that the script will stop as soon as a failure is encountered. Any data updates performed before the failure will not be reverted. No additional utilities will be executed when a failure is encountered.
Supported Smarty Utilities
A collection of Smarty utilities are available for use with data transformations. For a complete list of supported utilities, see the Cordial utilities reference page.
Use Case Examples
How to build a basic contact import
- Create a new One Time or Recurring Data Job
- Select an external Data Source
- Under Data Mapping, select "Basic Mapping"
- If recurring, set up a schedule
- Run data job
How to build a basic export of contacts or events
- Create a new One Time or Recurring Data Job
- Select an internal Data Source
- Under Data Mapping, select "Export" and configure the contents of your file
- Under Data Destination, select where you would like to export the file to and configure any necessary credentials
- Run data job
How to build an Import with transformation
- Create a new One Time or Recurring Data Job
- Select external or internal Data Source
- Under Data Mapping, select "Map to keys used in an advanced data transformation script". Here you will set the keys for each column in your data set so that you can reference them within the Data Transformation script.
- Next, you will be able to construct your Data Transformation.
For this example, we will assume an external data source with a file of this structure
email | subscribe_status | years_active
---|---|---
mary@example.com | s | 3
mark@example.com | |
jsmith@example.com | s | 2
Based on this file, we want to add rows 1 and 2 as new contacts and update the contact in row 3. We also want to assign each contact a rank. To do this, we might create a transformation like the following:
{if $dataRecord.email && $dataRecord.email != ""}
    {if $dataRecord.subscribe_status == 's'}
        {$ss = 'subscribed'}
    {elseif $dataRecord.subscribe_status == 'u'}
        {$ss = 'unsubscribed'}
    {else}
        {$ss = 'none'}
    {/if}
    {if $dataRecord.years_active}
        {if $dataRecord.years_active > 5}
            {$rank = 'T1'}
        {elseif $dataRecord.years_active > 2}
            {$rank = 'T2'}
        {else}
            {$rank = 'T3'}
        {/if}
    {/if}
    {$contact = $utils->upsertContact(['channels.email.address' => $dataRecord.email, 'channels.email.subscribeStatus' => $ss, 'rank' => $rank], $dataRecord.email, true)}
    {$utils->jsonPrettyPrint($contact)}
{/if}