How to sync data between Amazon S3 and Salesforce
MuleSoft helps developers execute a variety of integration use cases from within the AWS ecosystem. One of the most common use cases we see is connecting AWS services to external systems in a reusable, accelerated manner. In this developer tutorial, we are going to walk through how to synchronize data between Amazon S3 and Salesforce using MuleSoft’s Anypoint Platform. This is just one example of how MuleSoft provides out-of-the-box connectivity to AWS services such as S3, DynamoDB, and SQS, and to common external systems like Salesforce, SAP, and Oracle.
Developers can use MuleSoft to interface with Amazon S3 to store objects, download files, share data with other AWS services, and build applications that require internet storage. Amazon S3 integrates easily with MuleSoft through the Amazon S3 Connector, which you can drag and drop into your Anypoint Studio project.
With Amazon S3, you can execute a few common business operations such as:
- Build apps with native cloud-based storage: Connect your application to scalable Amazon S3 buckets to store files, images, and other assets.
- Backup and archive critical data: Use the Amazon S3 connector to seamlessly integrate with your ERP, CRM (such as Salesforce), EDI, and fulfillment systems, and archive necessary data.
- Drive business intelligence and optimize operational outcomes: Leverage Amazon S3 as a storage repository that holds vast amounts of raw data in its native format until it is needed. Use your data lake to run machine learning, analytics, and queries against your data assets without having to provision or manage clusters.
In the steps that follow, we are going to walk through how to set up your Amazon S3 connector with MuleSoft’s Anypoint Studio. You will need to have an Amazon AWS account in order to get started and will need to grab your security credentials from the Amazon AWS console.
To download the assets used in this project, grab the JAR file located here and import it into your Anypoint Studio project via File -> Import -> Packaged Mule application (JAR).
First, log in to your Amazon AWS account, go to the top-right corner, and select My Security Credentials. Then click Users in the sidebar and click the Create access key button. You will need to copy your access key credentials into the local.properties file in your Anypoint Studio project.
If you haven't already, make sure to sign up for a free Anypoint Platform account.
In this integration, we will be using the Salesforce Connector to backup new object data to our Amazon S3 Bucket.
Every time a new object is created in Salesforce, this entire flow will execute.
In order to set up your Amazon S3 Connector, you will need to create a configuration properties file and add your AWS access key and secret key to it. We have also added fields for your Salesforce username, password, and optional security token.
To create your local.properties file, go to File -> New -> File, name the file local.properties, and place it in your src/main/resources folder.
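The exact property names are up to you, as long as they match the placeholders you reference in your connector configurations. A minimal local.properties might look like the following (the key names here are assumptions, not the exact ones from the original demo):

```properties
# AWS credentials from My Security Credentials -> Create access key
aws.accessKey=YOUR_AWS_ACCESS_KEY
aws.secretKey=YOUR_AWS_SECRET_KEY

# Salesforce login details (security token is optional depending on your org settings)
salesforce.username=you@example.com
salesforce.password=YOUR_PASSWORD
salesforce.securityToken=YOUR_SECURITY_TOKEN
```

You can then reference these values in your global elements with placeholders such as `${aws.accessKey}`.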
Then under your Global Configuration Elements menu, create a new Configuration Properties file and assign the local.properties file to the configuration properties element.
The Amazon S3 Connector offers over 50 operations, as shown below, for you to explore and use to develop rich use cases. For more details on how to add the connector to Studio and to your project, read the documentation here.
In the video below, you will see that our flow is made up of a few different components. The first component is the Salesforce Connector that is listening for a New Object to be created in your Salesforce instance. For this demo, we selected Lead as the Object Type. Whenever a new Lead is added to Salesforce, this entire flow will execute. It’s important to change the Frequency on the Salesforce Connector to the time value that works for your use case. For this tutorial, we have the Salesforce Connector polling every 10 seconds.
The next component is the Transform Message component. We use a DataWeave script to reshape the payload into the same format as the file stored in our S3 bucket.
Once we transform it with DataWeave, the payload from Salesforce is a JSON object containing just the Lead fields we want to back up.
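The Mule flow performs this mapping in DataWeave, but the same reshaping can be sketched in plain Python. The field names used here (FirstName, LastName, Company, Email) are illustrative assumptions rather than the exact fields from the original demo:

```python
import json

# Hypothetical raw Lead record as polled from Salesforce (field names are assumptions).
raw_lead = {
    "Id": "00Q5e000001abcdEAA",
    "FirstName": "Max",
    "LastName": "Mule",
    "Company": "MuleSoft",
    "Email": "max@example.com",
    "attributes": {"type": "Lead"},  # Salesforce metadata we drop during the transform
}

def to_backup_record(lead: dict) -> dict:
    """Keep only the fields we want to back up to S3."""
    return {
        "FirstName": lead["FirstName"],
        "LastName": lead["LastName"],
        "Company": lead["Company"],
        "Email": lead["Email"],
    }

print(json.dumps(to_backup_record(raw_lead), indent=2))
```

The key point is that the transform keeps only the business fields and drops the Salesforce metadata before the payload is written to S3.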
We then save the Payload as the variable SaveObject so we can reference it later in the flow and append it to the existing document on S3.
Next, we grab the most recent object from our S3 bucket. Simply enter the Bucket Name in the Get Object general settings and, under Key, type the name of your file, which in this case is customers.json. You can download the customers.json file below.
We then use another Transform Message component to append the most recent object from Salesforce to the current customers.json file located on S3. Then we take that new file and re-upload it to S3, replacing the old file on the server.
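In the flow, this append happens in DataWeave, but the logic can be sketched in Python. Based on how the tutorial describes the file, I assume customers.json is a JSON object keyed by a numeric index, with each new Lead added under the next index; that shape is an assumption, not confirmed by the original:

```python
import json

def append_customer(customers_json: str, new_record: dict) -> str:
    """Append new_record to the indexed customers object and return the updated JSON."""
    customers = json.loads(customers_json)  # parse the file fetched from S3
    next_index = str(len(customers))        # next numeric key, e.g. "2" after "0" and "1"
    customers[next_index] = new_record
    return json.dumps(customers, indent=2)

# Example: the file currently holds two customers; we append a third.
existing = json.dumps({
    "0": {"FirstName": "Ada", "LastName": "Lovelace"},
    "1": {"FirstName": "Grace", "LastName": "Hopper"},
})
updated = append_customer(existing, {"FirstName": "Max", "LastName": "Mule"})
print(updated)
```

The returned string is what you would hand to the S3 Create Object operation to replace the old customers.json.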
If you get the error: Message: "You called the function 'mapObject' with these arguments: 1: String, 2: Function. But it expects arguments of these types: 1: Object, 2: Function", it means the Get Object operation returned the file contents as a raw string rather than a parsed object. To fix the issue, update the DataWeave script in the Transform component that follows the S3 Get Object Connector so it reads the payload as JSON before mapping over it.
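The same failure mode is easy to reproduce outside DataWeave: mapping over a raw string instead of a parsed object. In this Python analogue, json.loads plays the role of reading the payload as JSON:

```python
import json

# What Get Object hands back: the file contents as a string, not an object.
raw = '{"0": {"FirstName": "Ada"}, "1": {"FirstName": "Grace"}}'

# Mapping over the raw string fails, just like mapObject on a String in DataWeave.
try:
    raw.items()
except AttributeError as err:
    print("error:", err)

# Parsing first gives us an object we can map over.
customers = json.loads(raw)
names = {key: record["FirstName"] for key, record in customers.items()}
print(names)
```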
As you can see in the script, the DataWeave code assigns the next index to the new value and will keep adding values to this JSON file for every new Lead added in Salesforce.
As you can see from the above example, backing up data from Salesforce to S3 is a simple process using the Amazon S3 Connector. Using DataWeave, you can iterate through an existing object on S3 and add new data to it. When you are done transforming the payload, use the S3 Create Object operation to upload the modified file back to S3.
The example shown in this tutorial only scratches the surface of the integrations you can develop using the Amazon S3 Connector. There are many instances where you will need to back up large files from Salesforce to S3, since Salesforce can only support a maximum file size of 25 MB.
The S3 Connector also allows you to listen for when a new object is created in your bucket using the On New Object Connector. For instance, you could listen for a new object to be uploaded, then modify that object in your flow and reupload it to S3.
The possibilities are truly endless with the Amazon S3 Connector. To learn more about MuleSoft and to read more tutorials, please visit the developer tutorials homepage. Please rate this tutorial below if you found this helpful.