Salesforce Bulk API 2.0 Sink Connector for Confluent Cloud

The fully-managed Salesforce Bulk API 2.0 Sink connector for Confluent Cloud integrates Salesforce.com with Apache Kafka®. The connector consumes records from Kafka topics and uses them to perform insert, update, and delete operations on Salesforce SObjects. This connector uses Salesforce Bulk API 2.0.

Note

  • If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.

  • The connector supports Salesforce up to API version 65.0.

Features

The Salesforce Bulk API 2.0 Sink connector provides the following features:

  • API 2.0: Supports Salesforce Bulk API 2.0.

  • At least once delivery: The connector guarantees that records from the Kafka topic are delivered at least once. If the connector restarts, duplicate records may be written to Salesforce.

  • Supported data formats: The connector supports Avro, JSON Schema (JSON_SR), and Protobuf input data. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR, or Protobuf). See Schema Registry Enabled Environments for additional information.

  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance (that is, consumer lag is reduced with multiple tasks running).

  • Supports Client Credentials flow: The connector supports authentication using the Client Credentials flow that enables connecting to Salesforce without exposing the user credentials. To use CLIENT_CREDENTIALS grant type, you must enable the Client Credentials flow in your connected Salesforce application and assign an integration user.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Quick Start

Use this quick start to get up and running with the Salesforce Bulk API 2.0 Sink connector. The quick start provides the basics of selecting the connector and configuring it to consume records from Kafka topics and write them to Salesforce.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.

    • Enter an existing service account resource ID.

    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.

    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
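
    For example, a minimal CLI sketch for creating an API key scoped to your Kafka cluster; the cluster ID lkc-123456 is a placeholder, and the --service-account flag is only needed if the key should belong to a service account:

    confluent api-key create --resource lkc-123456 --service-account sa-l1r23m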

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Salesforce Bulk API 2.0 Sink connector card.

Salesforce Bulk API 2.0 Sink Connector Card

Important

At least one topic must exist in your Confluent Cloud cluster before creating the connector.

Step 4: Enter the connector details

Note

  • Make sure you have all your prerequisites completed.

  • An asterisk ( * ) designates a required entry.

At the Add Salesforce Bulk API 2.0 Sink Connector screen, complete the following:

  1. Select the topic you want to send data to from the Topics list. To create a new topic, click +Add new topic.

  2. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:

    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.

    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.

    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.

    Note

    Freight clusters support only service accounts for Kafka authentication.

  3. Click Continue.

  4. Add the Salesforce connection and authentication details:

    • Salesforce grant type: Sets the authentication grant type to PASSWORD, JWT_BEARER (Salesforce JSON Web Token (JWT)), or CLIENT_CREDENTIALS. Defaults to PASSWORD.

    • Salesforce instance: The URL of the Salesforce endpoint to use. The default is https://login.salesforce.com. This directs the connector to use the endpoint specified in the authentication response.

      Note

      The following properties are used based on the Salesforce grant type you choose.

      • JWT_BEARER: Requires username, consumer key, JWT keystore file, and JWT keystore password.

      • PASSWORD: Requires username, password, password token, consumer key, and consumer secret.

      • CLIENT_CREDENTIALS: Requires consumer key, consumer secret (the client ID and client secret of a Salesforce connected application), and a Salesforce domain URL in the Salesforce instance option. The default value https://login.salesforce.com does not work for this option. To use CLIENT_CREDENTIALS, you must enable the Client Credentials flow in your connected Salesforce application and assign an integration user.

    • Salesforce username: The Salesforce username for the connector to use.

    • Salesforce password: The Salesforce password for the connector to use.

    • Salesforce password token: The Salesforce security token associated with the username.

    • Salesforce consumer key: The consumer key for the OAuth application.

    • Salesforce consumer secret: The consumer secret for the OAuth application.

    • Salesforce JWT keystore file: If using the grant type JWT_BEARER, upload the JWT keystore file.

    • Salesforce JWT keystore password: Enter the password used to access the JWT keystore file.

  5. Click Continue.

  6. Add the following details:

    • Select the Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

    • Salesforce Object Name: The Salesforce SObject to write to.

    Show advanced configurations
    • Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.

    • Behavior on API errors: How the connector behaves when a Salesforce API error occurs. Valid options are fail and ignore (the default). If set to fail, the connector stops.

    • Max timeout milliseconds: The maximum time in milliseconds (ms) that the connector waits for all batch operations to complete. Defaults to 200000 ms.

    Auto-restart policy

    • Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.

    Consumer configuration

    • Max poll interval(ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).

    • Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.

    • Max Retry Time in milliseconds: In case of error when making a request to Salesforce, the connector will retry until this amount of time elapses. Defaults to 30000 ms.

    • Use Custom ID field: Whether or not to use a custom external ID field for insert or upsert operations. Defaults to false.

    • Custom ID field name: The name of a custom external ID field in the Salesforce object (SObject) used to structure REST API calls for insert and upsert operations. Used when Use Custom ID field is set to true. For additional information, see Considerations.

    • Salesforce ignore fields: A comma-separated list of fields from the source Kafka record to ignore when pushing a record into Salesforce.

    • Salesforce ignore reference fields: Whether or not to prevent reference type fields from being updated or inserted in SObjects. Defaults to false.

    • Override event type: Whether or not to override the Kafka SObject source record EventType (create, update, delete). If set to true, the connector uses the operation specified in the Salesforce sink operation configuration property. Defaults to false.

    • Salesforce sink operation: The Salesforce sink operation to perform on the SObject. Options are insert, update, upsert, or delete. Used when Override event type is set to true. For additional information, see Considerations.

    • Salesforce version: The version of the Salesforce API to use. Defaults to latest.

    Transforms

    For all property values and definitions, see Configuration Properties.

  7. Click Continue.

  8. Enter the maximum number of tasks for the connector to use. The connector supports running one or more tasks. More tasks may improve performance (that is, consumer lag is reduced with multiple tasks running).

  9. Click Continue.

  10. Verify the connection details by previewing the running configuration.

  11. After you’ve validated that the properties are configured to your satisfaction, click Launch.

    The status for the connector should go from Provisioning to Running.

Step 5: Check for records

Verify that records are being written to Salesforce. For additional information, see Considerations.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Important

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.
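
For example, assuming the plugin name used in the sample configuration below:

confluent connect plugin describe SalesforceBulkApiV2Sink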

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "SalesforceBulkApiV2Sink,
  "name": "SalesforceBulkApiV2Sink_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "topics": "TestBulkAPI",
  "input.data.format": "AVRO",
  "salesforce.grant.type": "PASSWORD",
  "salesforce.username": "<my-username>",
  "salesforce.password": "**************",
  "salesforce.password.token": "************************",
  "salesforce.consumer.key": "**************",
  "salesforce.consumer.secret": "************************",
  "salesforce.object": "<SObject-name>","
  "tasks.max": "1"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.

  • "name": Sets a name for your new connector.

  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • ""topics": Enter a Kafka topic name or a comma-separated list of topics. A topic must exist before launching the connector.

  • "input.data.format": Sets the input data format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

  • "salesforce.grant.type": Sets the authentication grant type to PASSWORD (username+password) , JWT_BEARER (Salesforce JSON Web Token (JWT)) or CLIENT_CREDENTIALS. Defaults to PASSWORD.

    Note

    The following properties are used based on the Salesforce grant type you choose.

    • JWT_BEARER: Requires username, consumer key, JWT keystore file, and JWT keystore password.

    • PASSWORD: Requires username, password, password token, consumer key, and consumer secret.

    • CLIENT_CREDENTIALS: Requires consumer key, consumer secret (the client ID and client secret of a Salesforce connected application), and a Salesforce domain URL in the Salesforce instance option. The default value https://login.salesforce.com does not work for this option. To use CLIENT_CREDENTIALS, you must enable the Client Credentials flow in your connected Salesforce application and assign an integration user.

  • "salesforce.username": The Salesforce username for the connector to use.

  • "salesforce.password": The Salesforce username password.

  • "salesforce.password.token": The Salesforce security token associated with the username.

  • "salesforce.consumer.key": The consumer key for the OAuth application.

  • "salesforce.consumer.secret": The consumer secret for the OAuth application.

  • "salesforce.jwt.keystore.file": Salesforce JWT keystore file. The JWT keystore file is a binary file and you supply the contents of the file in the property encoded in Base64. To use the salesforce.jwt.keystore.file property, encode the keystore contents in Base64, take the encoded string, add the data:text/plain:base64 prefix, and then use the entire string as the property entry. For example:

    "salesforce.jwt.keystore.file" : "data:text/plain;base64,/u3+7QAAAAIAAAACAAAAGY2xpZ...==",
    "salesforce.jwt.keystore.password" : "<password>",
    
  • "salesforce.jwt.keystore.password": Enter the password used to access the JWT keystore file.

  • ""salesforce.object"": The SObject that the connector polls for new and changed records.

  • "tasks.max": Enter the number of tasks in use by the connector. Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

For all property values and description, see Configuration Properties. For additional information, see Considerations.

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file salesforce-bulk-api-v2-sink.json

Example output:

Created connector SalesforceBulkApiV2Sink_0 lcc-aj3qr

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |            Name              | Status  |  Type
+-----------+------------------------------+---------+-------+
lcc-aj3qr   | SalesforceBulkApiV2Sink_0    | RUNNING | sink
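
To inspect a single connector in more detail, you can also describe it by ID. For example, assuming the connector ID shown in the example output above:

confluent connect cluster describe lcc-aj3qr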

Step 6: Check for records

Verify that records are being written to Salesforce. For additional information, see Considerations.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics.regex

A regular expression that matches the names of the topics to consume from. This is useful when you want to consume from multiple topics that match a certain pattern without having to list them all individually.

  • Type: string

  • Importance: low

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list

  • Importance: high

errors.deadletterqueue.topic.name

The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. Defaults to ‘dlq-${connector}’ if not set. The DLQ topic will be created automatically if it does not exist. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: dlq-${connector}

  • Importance: low

reporter.result.topic.name

The name of the topic to produce records to after successfully processing a sink record. Defaults to ‘success-${connector}’ if not set. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: success-${connector}

  • Importance: low

reporter.error.topic.name

The name of the topic to produce records to after each unsuccessful record sink attempt. Defaults to ‘error-${connector}’ if not set. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: error-${connector}

  • Importance: low

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string

  • Default: default

  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, and PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured.

  • Type: string

  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string

  • Valid Values: A string at most 64 characters long

  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string

  • Default: KAFKA_API_KEY

  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT

  • Importance: high

kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string

  • Importance: high

kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

How should we connect to Salesforce?

salesforce.grant.type

Salesforce grant type. Valid options are ‘PASSWORD’, ‘CLIENT_CREDENTIALS’ and ‘JWT_BEARER’.

  • Type: string

  • Default: PASSWORD

  • Importance: high

salesforce.instance

The URL of the Salesforce endpoint to use. When using ‘CLIENT_CREDENTIALS’ grant type, provide your Salesforce domain URL. The default is https://login.salesforce.com, which directs the connector to use the endpoint specified in the authentication response.

salesforce.username

The Salesforce username the connector should use.

  • Type: string

  • Importance: high

salesforce.password

The Salesforce password the connector should use.

  • Type: password

  • Importance: high

salesforce.password.token

The Salesforce security token associated with the username.

  • Type: password

  • Importance: high

salesforce.consumer.key

The client ID (consumer key) for the Salesforce Connected app.

  • Type: password

  • Importance: high

salesforce.consumer.secret

The client secret (consumer secret) for the Salesforce Connected app.

  • Type: password

  • Importance: medium

salesforce.jwt.keystore.file

Salesforce JWT keystore file which contains the private key.

  • Type: password

  • Default: [hidden]

  • Importance: medium

salesforce.jwt.keystore.password

Password used to access JWT keystore file.

  • Type: password

  • Importance: medium

salesforce.object

The Salesforce SObject to write to.

  • Type: string

  • Importance: high

salesforce.use.custom.id.field

Flag to indicate whether to use the salesforce.custom.id.field.name for INSERT/UPSERT sink connector operations

  • Type: boolean

  • Default: false

  • Importance: medium

salesforce.custom.id.field.name

Name of a custom external ID field in the SObject used to structure REST API calls for INSERT and UPSERT operations when salesforce.use.custom.id.field=true.

  • Type: string

  • Default: “”

  • Importance: medium

salesforce.ignore.fields

Comma-separated list of fields from the source Kafka record to ignore when pushing a record into Salesforce.

  • Type: string

  • Default: “”

  • Importance: medium

salesforce.ignore.reference.fields

Flag to prevent reference type fields from being updated or inserted in Salesforce SObjects.

  • Type: boolean

  • Default: false

  • Importance: medium

override.event.type

A flag to indicate that the Kafka SObject source record EventType (create, update, delete) is overridden to use the operation specified in the salesforce.sink.object.operation configuration setting.

  • Type: boolean

  • Default: false

  • Importance: medium

salesforce.sink.object.operation

The Salesforce sink operation to perform on the SObject. This feature works if override.event.type is true.

  • Type: string

  • Default: insert

  • Importance: medium

salesforce.version

The version of Salesforce API to use.

  • Type: string

  • Default: 65.0

  • Importance: low

Connection details

behavior.on.api.errors

Error handling behavior for Salesforce API errors. Valid options are ignore (the default) and fail.

  • Type: string

  • Default: ignore

  • Importance: low

request.max.retries.time.ms

In case of an error when making a request to Salesforce, the connector retries until this amount of time (in ms) elapses. The default value is 30000 (30 seconds). The minimum value is 1000 (1 second).

  • Type: long

  • Default: 30000 (30 seconds)

  • Valid Values: [1000,…,250000]

  • Importance: low

max.timeout.ms

The maximum time in milliseconds that the connector waits for all batch operations to complete.

  • Type: long

  • Default: 200000 (200 seconds)

  • Importance: low

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long

  • Default: 300000 (5 minutes)

  • Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters

  • Importance: low

max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long

  • Default: 500

  • Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters

  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int

  • Valid Values: [1,…]

  • Importance: high

Additional Configs

consumer.override.auto.offset.reset

Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset (the default) or the “latest” offset. You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset

  • Type: string

  • Importance: low

consumer.override.isolation.level

Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level

  • Type: string

  • Importance: low

header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string

  • Importance: low

value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean

  • Importance: low

value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean

  • Importance: low

value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.scrub.invalid.names

Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean

  • Importance: low

value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

errors.tolerance

Use this property if you would like to configure the connector’s error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to ‘all’, the connector will not fail on errant records, but will instead log them (and send to DLQ for Sink Connectors) and continue processing. If you set this property to ‘none’, the connector task will fail on errant records.

  • Type: string

  • Default: all

  • Importance: low

key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

  • BASE64 to serialize DECIMAL logical types as base64 encoded binary data

  • NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value

  • Type: string

  • Default: BASE64

  • Importance: low

value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.ignore.default.for.nullables

When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string

  • Default: DefaultReferenceSubjectNameStrategy

  • Importance: low

value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean

  • Default: true

  • Importance: medium

Egress allowlist

connector.egress.whitelist
  • Type: string

  • Default: “”

  • Importance: high

Considerations

Note the following when using this connector.

Unexpected errors

When the connector performs operations on Salesforce SObjects, unexpected errors can occur and are reported. The following lists several reasons why errors may occur:

  • Attempting to insert a duplicate record. Rules for determining duplicates are configurable in Salesforce.

  • Attempting to delete, update, or upsert a record that does not exist because the Id field does not match.

  • Attempting an operation on a record where the Id field value matches a previously deleted Id field value.

ID field semantics

When the Salesforce Bulk API Sink connector consumes records on Kafka topics that originated from the Salesforce PushTopic Source connector, an Id field is included as a sibling of the other fields in the body of the SObject. Note that the Id is only valid within the Salesforce organization from which the record was streamed. For upsert, delete, and update operations, relying on this Id field causes failures when used on different Salesforce organizations. Inserts always ignore the Id field because Id fields are managed internally by Salesforce. Upsert operations must be used with the external ID configuration properties salesforce.use.custom.id.field=true and salesforce.custom.id.field.name=<externalIdField> (see the property sketch below).

Caution

For update and delete operations across Salesforce organizations, an external ID must be configured in Salesforce. Also, a custom ID must always be marked as an external ID across both organizations.
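
For example, a minimal sketch of the external ID properties used for upsert operations, where External_Account_Id__c is a hypothetical custom external ID field on the target SObject:

"salesforce.use.custom.id.field": "true",
"salesforce.custom.id.field.name": "External_Account_Id__c",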

Input topic record format

The input topic record format is expected to be the same as the record format written to output topics by the Salesforce PushTopic Source connector. The Kafka key value is not required.

Read-Only fields

Salesforce SObject fields may not be writable by insert, update, or upsert operations because the fields are set with creatable=false or updatable=false attributes within Salesforce. If a write is attempted on a field with these attributes set, the sink connector excludes the field from the operation rather than failing the entire operation. This behavior is not configurable.

Event Type

The Salesforce Bulk API sink connector Kafka record format contains an _EventType field. This field describes the type of PushTopic event that generated the record, if the record was created by the Salesforce PushTopic Source connector. Types are created, updated, and deleted. When processing records, the sink connector (by default) maps the _EventType to an insert, update, or delete operation on the configured SObject. This behavior can be overridden using the override.event.type=true and salesforce.sink.object.operation=<sink operation> properties. Overriding the event type ignores the _EventType field in the record and applies the salesforce.sink.object.operation to every record.
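
For example, a minimal sketch of the override properties that force every record to be processed as an update, regardless of its _EventType value:

"override.event.type": "true",
"salesforce.sink.object.operation": "update",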

API Limits

  • The Salesforce Bulk API sink connector is limited by number of batches to execute, records per batch, and length of the batch. For detailed limitations, see Bulk API Limits.

  • The Salesforce Bulk API supports upsert operations only when used with the external ID configuration properties salesforce.use.custom.id.field=true and salesforce.custom.id.field.name=<externalIdField>.

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
