Pardot to Databricks

This page provides you with instructions on how to extract data from Pardot and load it into Delta Lake on Databricks. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Pardot?

Pardot, a marketing automation platform owned by Salesforce, helps businesses attract, convert, and retain customers. It uses automation tools to power engagement campaigns designed to help companies generate leads and close sales.

What is Delta Lake?

Delta Lake is an open source storage layer that sits on top of existing data lake file storage, such as AWS S3, Azure Data Lake Storage, or HDFS. It stores data in versioned Apache Parquet files and tracks commits in a transaction log, which enables capabilities like ACID transactions, data versioning, and audit history.

Getting data out of Pardot

The Pardot REST API gives developers access to prospects, visitors, activities, opportunities, and other data in Pardot. By default, Pardot Pro customers are allocated 25,000 API requests per day, and Pardot Ultimate customers can make up to 100,000.

A call to the Pardot API for prospect information might look like GET /api/prospect/version/4/do/query, with required security and authentication parameters tacked on at the end, along with optional selection parameters that let you tailor what data is returned.
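If you're scripting this yourself, each call is a plain HTTP GET. Here's a minimal Python sketch using the requests library; the access token, business unit ID, and created_after filter are placeholders for your own values, and the authentication details (Salesforce OAuth, in current versions of the API) are described in Salesforce's Pardot API documentation:

import requests

# Query prospects from Pardot's v4 REST API. The token and business
# unit ID below are placeholders -- see Salesforce's docs for the
# OAuth flow that produces them.
response = requests.get(
    "https://pi.pardot.com/api/prospect/version/4/do/query",
    headers={
        "Authorization": "Bearer <your_salesforce_access_token>",
        "Pardot-Business-Unit-Id": "<your_business_unit_id>",
    },
    params={"created_after": "yesterday"},  # optional selection parameter
)
response.raise_for_status()
xml_payload = response.text  # the API responds with XML by default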

Sample Pardot data

Responses to Pardot API calls come in the form of XML documents. A bare-bones example of the kind of data you might see looks like this:

<rsp stat="ok" version="1.0">
    <result>
        <total_results>...</total_results>
        <prospect>...</prospect>
            ...
    </result>
</rsp>

Preparing Pardot data

If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive them. Pardot's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.
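As a rough illustration, here's what a Spark schema for a minimal prospects table might look like. The field names and types are examples drawn from the kind of fields the prospect endpoint returns, not a complete mapping:

from pyspark.sql.types import (
    StructType, StructField, LongType, StringType, TimestampType
)

# An illustrative subset of prospect fields; consult Pardot's docs
# for the full list returned by each endpoint
prospect_schema = StructType([
    StructField("id", LongType(), nullable=False),
    StructField("email", StringType(), nullable=True),
    StructField("created_at", TimestampType(), nullable=True),
    StructField("updated_at", TimestampType(), nullable=True),
])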

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
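Here's one way that flattening might look in Python, using the standard library's XML parser on the payload fetched earlier. The visitor_activities nesting is just an illustration of a list-valued object; the pattern is the same for any of them: a parent row, plus child rows keyed by the parent's ID.

import xml.etree.ElementTree as ET

root = ET.fromstring(xml_payload)  # the XML string fetched earlier

prospect_rows, activity_rows = [], []
for prospect in root.iter("prospect"):
    prospect_rows.append({
        "id": prospect.findtext("id"),
        "email": prospect.findtext("email"),
        "updated_at": prospect.findtext("updated_at"),
    })
    # List-valued objects become rows in a child table, keyed to the parent
    for activity in prospect.findall("./visitor_activities/visitor_activity"):
        activity_rows.append({
            "prospect_id": prospect.findtext("id"),
            "type": activity.findtext("type"),
        })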

Loading data into Delta Lake on Databricks

To create a Delta table, you can use existing Apache Spark SQL code and change the format from parquet, csv, or json to delta. Once you have a Delta table, you can write data into it using Apache Spark's Structured Streaming API. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table. Databricks provides quickstart documentation that explains the whole process.
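As a sketch, a batch append of the prospect rows parsed earlier might look like this on a Databricks cluster (the pardot database and table name are placeholders; a streaming write follows the same pattern via writeStream):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Cast the string values parsed from the XML into the types from the
# schema sketched earlier, then append to a Delta table -- the same
# Spark SQL write path as parquet or csv, with only the format changed
df = (spark.createDataFrame(prospect_rows)
      .withColumn("id", col("id").cast("long"))
      .withColumn("updated_at", col("updated_at").cast("timestamp")))

spark.sql("CREATE DATABASE IF NOT EXISTS pardot")
df.write.format("delta").mode("append").saveAsTable("pardot.prospects")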

Keeping Pardot data up to date

At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.

Instead, identify key fields that your script can use to bookmark its progress through the data, so that it can pick up where it left off on the next run. Timestamp fields such as updated_at or created_at work best for this, since their values only move forward. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Pardot.
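Here's a rough sketch of that bookmarking logic. Pardot's prospect query endpoint accepts an updated_after selection parameter, which does most of the work; how you persist the bookmark between runs (a local file here, for simplicity) is up to you:

import pathlib
import requests

BOOKMARK = pathlib.Path("pardot_bookmark.txt")  # last updated_at we've seen
AUTH_HEADERS = {
    "Authorization": "Bearer <your_salesforce_access_token>",  # placeholders, as before
    "Pardot-Business-Unit-Id": "<your_business_unit_id>",
}

def fetch_updates():
    # Start from the saved bookmark, or from the beginning on the first run
    since = BOOKMARK.read_text().strip() if BOOKMARK.exists() else "2000-01-01 00:00:00"
    response = requests.get(
        "https://pi.pardot.com/api/prospect/version/4/do/query",
        headers=AUTH_HEADERS,
        params={"updated_after": since},
    )
    response.raise_for_status()
    # ...parse and load the new rows as shown earlier, then save the
    # newest updated_at value as the bookmark for the next run:
    # BOOKMARK.write_text(newest_updated_at)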

And remember, as with any code, once you write it, you have to maintain it. If Pardot modifies its API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from Pardot to Delta Lake on Databricks automatically. With just a few clicks, Stitch starts extracting your Pardot data, structuring it in a way that's optimized for analysis, and inserting that data into your Delta Lake on Databricks data warehouse.