
Data automations overview


Cordial data automations allow you to sanitize, normalize, transform, and programmatically enrich customer data. You can supply data sets from your Cordial account or from an external source, perform any number of transformations on that data, and then load it to a supported internal or external destination.

The Cordial user interface (UI) simplifies basic functions such as data column mapping between internal and external sources, and loading your contact data to supported social media audience integrations.

We also provide a flexible scripting interface for advanced transformation use cases. You can create a custom script to perform one or more data operations in unison. Your script can contain Smarty data transformation logic that uses the powerful utilities you already know, along with utilities created specifically for data transformations that let you operate on almost any data collection in Cordial.

Data automations are made up of three main building blocks: Data Source, Data Transformation, and Data Destination.

Data source

The Data Source can be either internal or external.

  • Internal: An internal source retrieves data from a specific data collection within the Cordial platform. Data collections currently supported as internal sources include contacts, orders, events, and supplements.
  • External: Data can be extracted from external sources including local file uploads, FTP/SFTP, HTTPS, Amazon S3, Google Cloud Storage, and Azure Blob storage.
Note: A data source (external or internal) is required to set up any Data Automation. For an event triggered Data Automation, the data source is implicitly the triggering event.

Recurring data jobs can pick up import files from a directory by looking for an exact file name or by matching a pattern in the file name prefix. This is relevant when the external source is FTP/SFTP, Amazon S3, Google Cloud Storage, or Azure Blob storage.

Use the Smarty getNow utility to look for the current date in the file name

Your import file name may have a predictable prefix, but it's appended with a unique timestamp when added to the source directory. The resulting file name may look similar to this: contacts_2020_09_14_09.csv. We can look for the known prefix and use Smarty to match the expected timestamp at the time the scheduled import occurs. If no files match this pattern, the data job will run again on the following scheduled date and time.

/path/to/contacts_{$utils->getNow('America/Los_Angeles', 'Y_m_d_H')}.csv
Note: Ensure that the availability of your data file coincides with the Data Automation schedule.

Use the wildcard (*) character after a known prefix

The wildcard matches any characters in the file name after the prefix. Your import file could start with contacts_ but be appended with a unique string when added to the source directory, resulting in a file name similar to this: contacts_crmdata2.txt. Simply use the wildcard to match the appended string.
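For example, a pattern like the following (the path shown is illustrative) matches any file whose name begins with contacts_:

```text
/path/to/contacts_*.txt
```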


Combine Smarty and the wildcard character

You can use Smarty in conjunction with the wildcard character to support matching other file name patterns.

/path/to/contacts_{$utils->getNow('America/Los_Angeles', 'Y_m_d_H')}_*.csv

Choose one of two data structures depending on your data source file type:

  • Columnar file: Indicate whether the file contains a header row, then choose the delimiter type.
  • JSON line file: Basic column mapping is not available when working with this data structure. JSON line files do not have a header row; each record is stored on its own line rather than separated by a delimiter.
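For illustration, here are the same two records in each structure (the column names are hypothetical):

```text
Columnar (CSV with a header row, comma delimiter):
email,subscribe_status
jane@example.com,s
john@example.com,u

JSON line (one JSON object per line, no header row):
{"email": "jane@example.com", "subscribe_status": "s"}
{"email": "john@example.com", "subscribe_status": "u"}
```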

You can use Advanced mode to skip the file prefetch and instead provide a sample header row of the expected data. This lets you set up a data job before the import file is available at the external source. It is also key to improving front-end performance when processing large files, which can then be accessed when needed rather than at the beginning of the data job.

Data transformation

A data transformation is an optional but powerful step in a data automation. Once you determine the data source, you can perform data transformations to update incoming data or perform other complex transformation functions before sending data to its destination. You can also use a data transformation to target specific data points within a larger data set and store them in one or more internal destinations (see the following section).

  • Pre Job Script: Runs Smarty only once and is used to set variables and other data objects for use by the main script.
  • Transformation Script: Smarty in the main transformation script runs repeatedly, once for each row of data.
  • Post Job Script: Runs Smarty only once after the main transformation script is done rendering.
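A minimal sketch of the three phases, assuming an export job over a columnar file (variable names like $run_date and the daily_export file identifier are illustrative; only utilities documented on this page are used):

```smarty
{* Pre Job Script: runs once, before any rows are processed *}
{$run_date = $utils->getNow('America/Los_Angeles', 'Y_m_d')}

{* Transformation Script: runs once per row; $dataRecord holds the current row *}
{$export_record.row.email = $dataRecord.email}
{$export_record.row.exported_on = $run_date}
{$utils->addExportRecord('daily_export', $export_record.row)}

{* Post Job Script: runs once, after all rows have rendered *}
{$utils->jsonPrettyPrint($run_date)}
```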

Data destination

Your data destination, like the data source, can be either external or internal, and it can be an optional step (depending on your intent). Currently supported external data destinations include local file download, FTP/SFTP, HTTPS, Amazon S3, Google Cloud Storage, and Azure Blob storage.

An internal destination allows you to store data within a specific data collection in Cordial. Storing data to an internal destination is accomplished in one of two ways:

  • By selecting an option from the Data Mapping pane in the UI.
  • Via a Data Transformation, through a library of utilities that can add to and/or update one or many collections. Currently supported internal destinations for data transformations include contacts, supplements, orders, products, and events.

Create a data automation

1. Log in to Cordial and navigate to Data Jobs > Create New Data Job to get started.

2. Fill in the required information, and choose from four different types of data jobs:

  • One time: Run a data job once only.
  • Recurring: Used to run data jobs automatically based on a set schedule.
  • Event triggered: Used when the data job will be triggered by an event.
  • API triggered: Used to trigger data jobs via API calls.

There are some limitations based on the type of data job you select:

  • Event triggered and API triggered Data Automations must include a data transformation.
  • Only one time and recurring data jobs can be used to send data to an internal destination through a basic column mapping.
  • Data Automations that use an external data source cannot send data to external data destinations; they can only send data to internal destinations using a scripted transformation.

View the status of data automations

You can view your draft, one-time, and automated data jobs using the left navigation menu.

Data Jobs > One Time Data Job Drafts: View a list of one time data jobs in draft status. A draft is saved immediately after you create a new one time data job.

Data Jobs > Completed One Time Data Jobs: Displays a list of one time data jobs that you ran. Depending on the data job complexity and the number of concurrently requested jobs, the status may be one of the following:

  • Complete: The data job is done running.
  • Pending: Waiting to begin running.
  • Failed: Data job stopped running before it was completed. Please check the log for more information.

Status labels that indicate a data job is still running:

  • Processing
  • Pre-processing
  • RecordsQueued

Data Jobs > Data Job Automations: View the status of created data job automations. Job automations are added to this list immediately after you create them, including those that have yet to be enabled and published.

Test your script

To test a transformation, you must provide a sample row of data in order to see the expected results and verify your script. During setup of a Data Job that contains a transformation, click Edit to access the Transformation Script window.

Note: All Smarty utilities in the transformation will execute and applicable data updates/inserts will be performed, so we recommend using contact profiles reserved for testing purposes.

If your data source is internal contacts, you can select an existing contact to test the transformation, but remember that running the test script will add or update the selected contact record. We recommend running your test with a contact record reserved for testing purposes.

You can select the desired test contact from the bottom left of the Transformation Script window via the Test as menu.

If your data source is internal events, orders, or supplements, you can enter a sample record written in JSON. To display the JSON input dialog window, click the Test Script button in the bottom left of the Transformation Script window.
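For instance, a sample events record might look like the following (the field names here are illustrative; use keys that match your account's event schema):

```json
{
  "email": "testcontact@example.com",
  "action": "purchase",
  "properties": {
    "order_total": 42.5
  }
}
```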

If your data source is an external CSV or JSONL file, you will be able to insert sample values of a record or insert a row directly from your file. To display the import file test options dialog window, click the Test Script button from the bottom left of the Transformation Script window.

Tip: Click the Show Column Headers button at the bottom left of the Transformation Script window to see a list of pre-built variables corresponding to the header row of the CSV file.

Status log and rendered output

You can use the Log tab to see a breakdown of utility functions that have been executed with success or failure. Click on individual rows to expand for additional details.

We recommend using the $utils->jsonPrettyPrint($object) utility to output values as you develop your script. The utility will render data objects in the Rendered Output tab.
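For example, to inspect the full contact object while developing a transformation:

```smarty
{$utils->jsonPrettyPrint($contact)}
```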

Note: The script will stop as soon as a failure is encountered. Any data updates performed before the failure will not be reverted. No additional utilities will be executed when a failure is encountered.

Supported Smarty utilities

A collection of Smarty utilities are available for use with data transformations. For a complete list of supported utilities, visit the Cordial utilities reference page.

Use cases

Build a basic contact import

1. Create a new One Time or Recurring Data Job.

2. Select External under Data Source.

3. Under Data Mapping, select your Data destination and Import options.

4. If the job is recurring, set up a schedule.

5. Run the data job.

Build a basic export of contacts or events

1. Create a new One Time or Recurring Data Job.

2. Select Internal under Data Source.

3. Under Data Mapping, select Export and configure the contents of your file.

4. Under Data Transformation, write the Smarty and include $utils->addExportRecord().

5. Under Data Destination, select where you would like to export the file to and configure any necessary credentials.

6. Run the data job.

Below is an example of exporting the cID and an md5 hash of the email address.

{$export_record.file_identifier = 'export_cid_md5'}
{$export_record.row.cID = $contact._id}
{$export_record.row.md5_hash = $contact.channels.email.address|lower|hash:"md5"}
{$utils->addExportRecord($export_record.file_identifier, $export_record.row)}

Build an import with transformation

1. Create a new One Time or Recurring Data Job.

2. Select External or Internal under Data Source.

3. Under Data Mapping, select Map to keys used in an advanced data transformation script. Set the keys for each column in your data set so you can reference them within the Data Transformation script.

4. Next, construct your Data Transformation. For this example, we will assume an external data source with a file of this structure (the values shown are examples):

email,subscribe_status,years_active
contact1@example.com,s,3
contact2@example.com,s,2
contact3@example.com,s,6

5. Based on this file, we want to add row 1 and 2 as new contacts and update the contact in row 3 in the system. We also want to assign them a rank. To do this, we might create a transformation like the following:

{* Field keys passed to upsertContact should match your account's contact schema *}
{if $dataRecord.email && $dataRecord.email != ""}
    {if $dataRecord.subscribe_status == 's'}
        {$ss = 'subscribed'}
    {elseif $dataRecord.subscribe_status == 'u'}
        {$ss = 'unsubscribed'}
    {else}
        {$ss = 'none'}
    {/if}
    {if $dataRecord.years_active}
        {if $dataRecord.years_active > 5}
            {$rank = 'T1'}
        {elseif $dataRecord.years_active <= 5 && $dataRecord.years_active > 2}
            {$rank = 'T2'}
        {else}
            {$rank = 'T3'}
        {/if}
    {/if}
    {$contact = $utils->upsertContact(['channels.email.address' => $dataRecord.email, 'channels.email.subscribeStatus' => $ss, 'rank' => $rank], $dataRecord.email, true)}
{/if}

API error responses

The Cordial API will return an error object with an errorKey and message if there's a problem with an API call. Below is a list of errors specific to the data automations (data jobs) API endpoint, along with suggested modifications to resolve each error. If you receive an error from this API endpoint that is not listed in this table, it is likely recorded within the Global API error responses page.

  • DATA_JOBS_TRANSFORMATION_FEATURE_IS_DISABLED
    Message: Current account does not support transformations through API.
    Modification: Transformations through the API are not enabled for this account.
  • DATA_JOBS_MISSING_TRANSFORMATION_RECORDS
    Message: Transform record(s) should be provided.
    Modification: Ensure the correct transformation records are provided.
  • DATA_JOBS_AUTOMATION_KEY_NOT_FOUND
    Message: Data Automation does not exist.
    Modification: The provided key did not return a Data Automation.
  • DATA_JOBS_API_TRIGGER_IS_DISABLED
    Message: Trigger API is disabled.
    Modification: The trigger is disabled.


