
DataLoader (Single Dataloader)

Dataloader is used to extract, insert, update, upsert, and delete records in Salesforce.com.

Some of the features are:

  • Web-based tool.
  • Job scheduling: scheduled or on-demand jobs.
  • Error reporting: detailed reporting of any failure that occurs during a data loader operation.
  • History of data loading operations: results are stored as history so that the user can review them later if any issues arise.

Launching your Dataloader

  1. On the AutoRABIT home page, click Dataloader > Dataloader.
  2. The single dataloader process summary page is displayed. It contains five buttons used to create new data loading processes, one for each operation type: Extract, Insert, Update, Upsert, and Delete.

Configuring New Single Dataloader Processes

AutoRABIT encapsulates the data loading operation (extract, insert, update, upsert, or delete), the filters set for the operation, the sandboxes used, and any schedules required for the operation into an entity called a "process".

To perform any of the operations described above, a process must first be defined.
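
As an illustration only, a process can be thought of as a small configuration record that bundles the operation, its filters, the target org, and an optional schedule. The field names in the sketch below are assumptions for clarity, not AutoRABIT's internal data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataloaderProcess:
    """Illustrative sketch of what a Dataloader 'process' bundles together."""
    name: str                                     # process name shown on the summary page
    operation: str                                # one of: extract, insert, update, upsert, delete
    sobject: str                                  # Salesforce object, e.g. "Account"
    filters: list = field(default_factory=list)   # e.g. ["Industry = 'Banking'"]
    order_by: Optional[str] = None                # e.g. "CreatedDate DESC"
    limit: Optional[int] = None                   # maximum rows for an extract
    schedule: Optional[str] = None                # e.g. "daily 02:00", or None for No Schedule

# Example: an extract process that pulls recent Accounts once a day.
nightly_accounts = DataloaderProcess(
    name="Nightly Account Extract",
    operation="extract",
    sobject="Account",
    filters=["CreatedDate = LAST_N_DAYS:30"],
    order_by="CreatedDate DESC",
    limit=10000,
    schedule="daily 02:00",
)
```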

Extracting Data

The Extract option is used to extract records to a .CSV file.

  1. In the data loader page, click the Extract button on the toolbar.
  2. In the displayed dialog box, enter the following details:
    • Salesforce source organization.
    • Type of Salesforce org (the corresponding URL is loaded automatically in the next field).
    • Username.
    • Password.
    • Security token.
  3. Click Login and fetch objects.
  4. On the next page, select the Label Name. You can filter the objects using the search box at the top-right corner of the screen.
  5. Click Next.
  6. On the following page, select the required filters and ordering using the appropriate sections.
  7. Click Validate query to validate your selection. The number of records that match your filters and ordering is displayed.
  8. Click Next.
  9. On the following page, enter the required details:
    1. Enter the process name in Process Name.
    2. Select a Category.
    3. In Limit, enter the maximum number of rows to retrieve.
    4. Specify a schedule by selecting the appropriate options.
  10. Click:
    • Previous to go back to the previous screen.
    • Save to save the process. If you have specified a schedule, the Dataloader runs the process at the scheduled time. If you selected No Schedule, the process is not run, but it is added to the process history.
    • Cancel to abort the process.
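
Conceptually, an extract run is a filtered, ordered, limited query whose results are written to a .CSV file. The snippet below is a minimal sketch of that operation against the Salesforce API using the simple_salesforce Python library; it is not AutoRABIT's implementation, and the credentials, object, fields, filter, and file name are placeholders.

```python
import csv
from simple_salesforce import Salesforce  # pip install simple-salesforce

# Placeholder credentials -- supply your own username, password, and security token.
sf = Salesforce(username="user@example.com", password="********", security_token="TOKEN")

# The filter, ordering, and limit mirror the options chosen in the UI steps above.
query = (
    "SELECT Id, Name, Industry "
    "FROM Account "
    "WHERE Industry = 'Banking' "
    "ORDER BY CreatedDate DESC "
    "LIMIT 1000"
)
result = sf.query_all(query)  # returns a dict with a 'records' list

# Write the extracted records to a .CSV file.
fields = ["Id", "Name", "Industry"]
with open("account_extract.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=fields)
    writer.writeheader()
    for rec in result["records"]:
        writer.writerow({f: rec.get(f) for f in fields})
```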

Inserting Data

The Insert option is used to insert records from a .CSV file into the target org.

  1. In the data loader page, click the Insert button on the toolbar.
  2. In the displayed dialog box, enter the following details:
    • Salesforce source organization.
    • Type of Salesforce org (the corresponding URL is loaded automatically in the next field).
    • Username.
    • Password.
    • Security token.
  3. Click Login and fetch objects.
  4. On the next page, select the Object. You can also find the required Label by typing it in the Quick Find box; a list of Labels and API names is displayed.
  5. Click Next.
  6. On the subsequent page, upload the required CSV file.
  7. Optionally, select Automap.

Note: AutoRABIT is the only web-based tool that provides auto-mapping for both API names and Label names in the Data Loader. It compares the destination fields with the columns of the uploaded .CSV file and, where they match, selects the mapping automatically, populating the .CSV column mappings based on API name or label (see the sketch after these steps).

  8. Click Next.
  9. On the next page, fill in the details of the process summary.
  10. Schedule the process by specifying the day, time, and interval at which the Dataloader should run it.
  11. Click:
    • Previous to go back to the previous screen.
    • Save to save the process. If you have specified a schedule, the Dataloader runs the process at the scheduled time. If you selected No Schedule, the process is not run, but it is added to the process history.
    • Cancel to abort the process.
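
The auto-mapping idea can be sketched as matching CSV column headers against the target object's field API names and labels, then inserting the mapped rows. The snippet below is an illustration only, again using the simple_salesforce library; AutoRABIT performs the equivalent matching in its web UI, and the object name, file name, and credentials here are placeholders.

```python
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="********", security_token="TOKEN")

# Build lookups of the target object's fields by API name and by label.
describe = sf.Contact.describe()
field_by_api = {f["name"].lower(): f["name"] for f in describe["fields"]}
field_by_label = {f["label"].lower(): f["name"] for f in describe["fields"]}

def automap(column):
    """Map a CSV column header to a field API name if it matches an API name or a label."""
    key = column.strip().lower()
    return field_by_api.get(key) or field_by_label.get(key)

# Read the uploaded CSV and keep only columns that map to a real field.
records = []
with open("contacts.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        mapped = {automap(col): val for col, val in row.items() if automap(col)}
        records.append(mapped)

# Insert the mapped rows; the Bulk API is typically used for large files.
results = sf.bulk.Contact.insert(records)
print(sum(1 for r in results if r["success"]), "records inserted")
```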

Updating Data

The Update option is used to update records in the target org from a .CSV file, matching on each record's Id.

  1. In the data loader page, click the Update button on the toolbar.
  2. In the displayed dialog box, enter the following details:
    • Salesforce source organization.
    • Type of Salesforce org (the corresponding URL is loaded automatically in the next field).
    • Username.
    • Password.
    • Security token.
  3. Click Login and fetch objects.
  4. On the next page, select the Object. You can also find the required Label by typing it in the Quick Find box; a list of Labels and API names is displayed.
  5. Click Next.
  6. On the subsequent page, upload the required CSV file. Select the Automap option if required.
  7. Click Next.
  8. On the next page, fill in the details of the process summary.
  9. Schedule the process by specifying the day, time, and interval at which the Dataloader should run it.
  10. Click:
    • Previous to go back to the previous screen.
    • Save to save the process. If you have specified a schedule, the Dataloader runs the process at the scheduled time. If you selected No Schedule, the process is not run, but it is added to the process history.
    • Cancel to abort the process.
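
Conceptually, an update run takes each row's Id from the CSV and writes the remaining columns onto the matching record. A minimal sketch with simple_salesforce follows; the credentials, object, and file name are placeholders, not AutoRABIT's implementation.

```python
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="********", security_token="TOKEN")

# Each CSV row carries the record Id plus the fields to change, e.g.:
# Id,Phone
# 0035g00000XXXXXAAA,555-0100
with open("contact_updates.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        record_id = row.pop("Id")          # the Id column drives the match
        sf.Contact.update(record_id, row)  # remaining columns become the new field values
```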

Upserting Data

Upsert is a combination of updating and inserting. If a record in your file matches an existing record, the existing record is updated with the values in your file. If no match is found, the record is created as a new record.

  1. In the data loader page, click the Upsert button on the toolbar.
  2. In the displayed dialog box, enter the following details:
    • Salesforce source organization.
    • Type of Salesforce org (the corresponding URL is loaded automatically in the next field).
    • Username.
    • Password.
    • Security token.
  3. Click Login and fetch objects.
  4. On the next page, select the Object. You can also find the required Label by typing it in the Quick Find box; a list of Labels and API names is displayed.
  5. Click Next.
  6. On the subsequent page, upload the required CSV file. Select the Automap option if required.
  7. Click Next.
  8. On the next page, fill in the details of the process summary.
  9. Schedule the process by specifying the day, time, and interval at which the Dataloader should run it.
  10. Click:
    • Previous to go back to the previous screen.
    • Save to save the process. If you have specified a schedule, the Dataloader runs the process at the scheduled time. If you selected No Schedule, the process is not run, but it is added to the process history.
    • Cancel to abort the process.
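
The update-or-insert behaviour is easiest to see when rows are matched on an external Id field: a matching record is updated, and a non-matching row is inserted as a new record. The sketch below uses simple_salesforce; the object name, the external Id field External_Id__c, and the file name are assumptions for illustration only.

```python
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="********", security_token="TOKEN")

# Rows are matched on the external Id field External_Id__c (assumed to exist on Contact).
# If a record with that key exists it is updated; otherwise a new record is created.
with open("contact_upserts.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        ext_id = row.pop("External_Id__c")
        sf.Contact.upsert(f"External_Id__c/{ext_id}", row)
```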

Deleting Data

The Delete option is used to delete records in the target org that are listed in a .CSV file.

  1. In the Dataloader home page, click the Delete button on the toolbar.
  2. In the displayed dialog box, enter the following details:
    • Salesforce source organization.
    • Type of Salesforce org (the corresponding URL is loaded automatically in the next field).
    • Username.
    • Password.
    • Security token.
  3. Click Login and fetch objects.
  4. On the next page, select the Object. You can also find the required Label by typing it in the Quick Find box; a list of Labels and API names is displayed.
  5. Click Next.
  6. On the subsequent page, upload the required CSV file. Select the Automap option if required.
  7. Click Next.
  8. On the next page, fill in the details of the process summary.
  9. Schedule the process by specifying the day, time, and interval at which the Dataloader should run it.
  10. Click:
    • Previous to go back to the previous screen.
    • Save to save the process. If you have specified a schedule, the Dataloader runs the process at the scheduled time. If you selected No Schedule, the process is not run, but it is added to the process history.
    • Cancel to abort the process.
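
A delete run only needs the Id column of each row. As a minimal sketch with simple_salesforce (credentials, object, and file name are placeholders, and this is not AutoRABIT's implementation), a Bulk API delete might look like this:

```python
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="********", security_token="TOKEN")

# The CSV only needs an Id column; every listed record is deleted from the target org.
with open("contacts_to_delete.csv", newline="") as fh:
    ids = [{"Id": row["Id"]} for row in csv.DictReader(fh)]

results = sf.bulk.Contact.delete(ids)  # Bulk API delete; returns per-record success/errors
failed = [r for r in results if not r["success"]]
print(f"{len(ids) - len(failed)} deleted, {len(failed)} failed")
```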

The Process Summary Page of Dataloader

Every process that is created is displayed on the Process Summary page. From this page, you can view the history of all processes defined and run to date.
The main functions offered on the Process Summary page are:

  • Process Name: Lists the names of the processes that have been created.
  • Date/Time: Shows the date and time at which a particular process was created and run.
  • Edit: Allows the user to edit the details of a process. To edit a process, click Edit and follow the same steps used when creating the process.
  • Run: Runs a process immediately.
  • Schedule: Allows the user to schedule when the process has to be run.
  • Delete: Deletes a process. To delete, select the required process and click Delete. A confirmation message asks whether you want to delete the process. On confirmation, the process is deleted and is no longer displayed on the summary page.
  • Clone: Creates a copy (clone) of the selected process. While cloning, the clone must be given a name. The cloned copy of the process is added to the summary page.
  • Log: Provides information about the execution of a process.

Iconic Presentations in Single Dataloader

The top-right edge of the Dataloader panel provides the following icons:

  • Filter icon: filters the processes by their category.
  • Success & error count: shows the success and error counts of a process.

Each process row displays icons with the following meanings:

  • The single dataloader process completed successfully.
  • The single dataloader process completed with a failed status.
  • Run the saved process.
  • Schedule the process to run on a daily, weekly, or fixed-time basis.
  • Edit the existing process configuration.
  • Delete the process.
  • Clone the process under a different name.
  • Log: view the dataloader process execution live.
  • Download successful / failed records in CSV format.
  • Search for a component or row in the successful / failed records.
