MQTT Sink Connector for Confluent Cloud
The fully-managed MQTT Sink connector for Confluent Cloud streams data from Apache Kafka® to an MQTT broker.
Note
This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see MQTT Sink Connector for Confluent Platform.
If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features
The MQTT Sink connector provides the following features:
At least once delivery: The connector guarantees that records are delivered at least once to the MQTT topic.
Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
Schemas: The connector supports Avro, JSON Schema, and Protobuf input data formats. Schema Registry must be enabled to use a Schema Registry-based format. Note that the connector only supports bytes and string schemas; it does not support structs. To send struct data, store the struct as bytes and select BYTES as the input format in the connector.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations
Be sure to review the following information.
For connector limitations, see MQTT Sink Connector limitations.
If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Quick Start
Use this quick start to get up and running with the Confluent Cloud MQTT sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to an MQTT broker.
Prerequisites
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud.
Access to an MQTT broker.
The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
For networking considerations, see Networking and DNS. To use a set of public egress IP addresses, see Public Egress IP Addresses for Confluent Cloud Connectors.
Kafka cluster credentials. The following lists the different ways you can provide credentials.
Enter an existing service account resource ID.
Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
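For example, a minimal sketch of creating an API key scoped to a cluster with the Confluent CLI (the cluster ID lkc-123456 is a placeholder):
confluent api-key create --resource lkc-123456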
Using the Confluent Cloud Console
Step 1: Launch your Confluent Cloud cluster
To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.
Step 2: Add a connector
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector
Click the MQTT Sink connector card.

Step 4: Enter the connector details
Note
Ensure you have all your prerequisites completed.
An asterisk ( * ) designates a required entry.
At the Add MQTT Sink Connector screen, complete the following:
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
Note
Freight clusters support only service accounts for Kafka authentication.
Click Continue.
Enter the following MQTT broker connection details:
List of Server URIs: The MQTT broker URI. Must be in the format <PROTOCOL>://URI. The supported protocols are TCP, SSL, WS, and WSS. For TLS connections, you must additionally provide credentials and upload Keystore and Truststore files. (Example URIs are shown after this list.)
Username: Username to connect with, or blank to connect without a username.
Password: Password to connect with, or blank to connect without a password.
SSL Keystore: The location of the Java KeyStore file containing the private key to use for authenticating with the server.
Keystore Password: Password used to open the Java KeyStore file.
Key Password: Password for the client certificate contained in the Java KeyStore.
SSL Truststore: The location of the Java TrustStore file containing the certificates required to validate the SSL connection to the server.
Truststore Password: The password used to open the Java TrustStore file.
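For reference, server URI entries typically look like the following sketch (the host is a placeholder; 1883 and 8883 are the conventional MQTT ports for plain TCP and TLS):
tcp://mqtt.example.com:1883
ssl://mqtt.example.com:8883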
Click Continue.
Note
Configuration properties that are not shown in the Cloud Console use the default values. See Configuration Properties for all property values and definitions.
Select the Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format (for example, AVRO, JSON_SR (JSON Schema), or PROTOBUF). See Schema Registry Enabled Environments for additional information.
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Retain Messages: Set whether messages should be retained for future clients.
Auto-restart policy
Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its tasks in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.
Clean Session?: Sets whether the client and server should remember their state after restarts and reconnects. For unreceived messages to be received when the client and server reconnect, the MQTT Quality of Service (QOS) property must be set to 1 or 2. For more information, see Quality of Service.
Connection Timeout: The amount of time to wait in seconds when connecting to the MQTT broker. The default is 30 seconds.
MQTT QOS: The default value is 0, which means the message gets delivered once, with no confirmation. The QOS property must be set to 1 or 2 for unreceived messages to be received when the client and server reconnect. For more information, see Quality of Service.
Connection Keepalive: Defines the maximum time interval between messages sent or received (in seconds). In the absence of a data-related message during the time period entered, the client sends a very small ping message for the broker to acknowledge. The default value is 60 seconds.
Max Retry Time: The maximum time in milliseconds (ms) the connector spends backing off and retrying a connection to the MQTT broker. The default value is 30000 ms (30 seconds).
Consumer configuration
Max poll interval (ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).
Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.
Transforms
Single Message Transforms: To add a new SMT, see Add transforms. For more information about unsupported SMTs, see Unsupported transformations.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
Click Continue.
Verify the connection details.
Click Launch.
The status for the connector should go from Provisioning to Running.
Step 5: Check the results on the broker
Verify that new records are being added to the MQTT broker.
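As a quick sanity check, if your broker can be reached with the Eclipse Mosquitto client tools, you can subscribe to the topic and watch records arrive (the host, port, and topic name below are placeholders; add the -u and -P options if your broker requires credentials):
mosquitto_sub -h mqtt.example.com -p 1883 -t kafka_topic_0 -v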
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Using the Confluent CLI
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
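For this connector, the plugin name matches the connector.class value used in the configuration below (MqttSink). For example:
confluent connect plugin describe MqttSink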
Step 3: Create the connector configuration file
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
  "connector.class": "MqttSink",
  "name": "MqttSink_0",
  "input.data.format": "AVRO",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "mqtt.server.uri": "tcp://192.0.0.1:1881",
  "topics": "kafka_topic_0",
  "tasks.max": "1"
}
Note the following property definitions:
"name": Sets a name for your new connector."connector.class": Identifies the connector plugin name."input.data.format": Supports AVRO, BYTES, JSON, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.
"kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNTorKAFKA_API_KEY(the default). To use an API key and secret, specify the configuration propertieskafka.api.keyandkafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list

     Id     | Resource ID | Name |    Description
  +---------+-------------+------+--------------------+
     123456 | sa-l1r23m   | sa-1 | Service account 1
     789101 | sa-l4d56p   | sa-2 | Service account 2
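For reference, a sketch of the same connector configuration using a service account instead of an API key (the resource ID is taken from the example output above):
{
  "connector.class": "MqttSink",
  "name": "MqttSink_0",
  "input.data.format": "AVRO",
  "kafka.auth.mode": "SERVICE_ACCOUNT",
  "kafka.service.account.id": "sa-l1r23m",
  "mqtt.server.uri": "tcp://192.0.0.1:1881",
  "topics": "kafka_topic_0",
  "tasks.max": "1"
}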
"mqtt.server.uri": The MQTT broker URI. Must be in the format<PROTOCOL>//:URI. The supported protocols are TCP, SSL, WS, and WSS. For TLS connections you must additionally provide credentials and upload Keystore and Truststore files. See the MQTT Sink configuration properties for these property values and definitions.Note
If the MQTT broker does not support anonymous mode, you must add the following two additional properties:
"mqtt.username":"<mqtt_broker_username>""mqtt.password":"<user_password>"
"topics": The Kafka topic name (or comma-separated topic names) where the data for the MQTT broker is located."tasks.max": Enter the number of tasks in use by the connector. The connector supports multiple tasks. More tasks may improve performance.
Note
The MQTT topic name where data lands is the same as the Kafka topic name.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
See Configuration Properties for all property values and definitions.
Step 4: Load the configuration file and create the connector
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file mqtt-server-sink-config.json
Example output:
Created connector MqttSink_0 lcc-ix4dl
Step 5: Check the connector status
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type
+-----------+--------------+---------+------+
lcc-ix4dl | MqttSink_0 | RUNNING | sink
Step 6: Check the results on the broker
Verify that new records are being added to the MQTT broker.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Configuration Properties
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
How should we connect to your data?
name: Sets a name for your connector.
Type: string
Valid Values: A string at most 64 characters long
Importance: high
Which topics do you want to get data from?
topics.regex: A regular expression that matches the names of the topics to consume from. This is useful when you want to consume from multiple topics that match a certain pattern without having to list them all individually.
Type: string
Importance: low
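For example, a hypothetical pattern that matches every topic whose name starts with orders. (the double backslash escapes the dot inside a JSON configuration file):
"topics.regex": "orders\\..*"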
topics: Identifies the topic name or a comma-separated list of topic names.
Type: list
Importance: high
Schema Config
schema.context.name: Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
Type: string
Default: default
Importance: medium
Input messages
input.data.format: Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.
Type: string
Importance: high
Kafka Cluster credentials
kafka.auth.mode: Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
Type: string
Default: KAFKA_API_KEY
Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
Importance: high
kafka.api.key: Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
Type: password
Importance: high
kafka.service.account.id: The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
Type: string
Importance: high
kafka.api.secret: Secret associated with the Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
Type: password
Importance: high
How should we connect to MQTT Broker?
mqtt.server.uri: The URI of the MQTT broker. This must be given in the format <PROTOCOL>://URI. The supported protocols are tcp, ssl, ws, and wss. Note that for a connection that uses TLS, you must provide the required key stores and trust stores.
Type: list
Importance: high
mqtt.username: Username to connect with, or blank if a username is not required. Note: the username field is masked as it may contain sensitive information.
Type: password
Importance: high
mqtt.password: Password to connect with, or blank if a password is not required.
Type: password
Default: [hidden]
Importance: high
MQTT secure connection
mqtt.ssl.key.store.file: The location of the Java KeyStore file containing the private key to use for authenticating with the server.
Type: password
Default: [hidden]
Importance: low
mqtt.ssl.key.store.password: Password used to open the Java KeyStore file.
Type: password
Default: [hidden]
Importance: medium
mqtt.ssl.key.password: Password for the client certificate contained in the Java KeyStore.
Type: password
Default: [hidden]
Importance: high
mqtt.ssl.trust.store.file: The location of the Java TrustStore file containing the certificates required to validate the SSL connection to the server.
Type: password
Default: [hidden]
Importance: medium
mqtt.ssl.trust.store.password: Password used to open the Java TrustStore file.
Type: password
Default: [hidden]
Importance: medium
Connection Details
mqtt.clean.session.enabled: Sets whether the client and server should remember state across restarts and reconnects. Note that for unreceived messages to be received after reconnect, you should set the QOS to 1 or above.
Type: boolean
Default: false
Importance: medium
mqtt.connect.timeout.seconds: Sets the connection timeout value in seconds.
Type: int
Default: 30
Importance: medium
mqtt.keepalive.interval.seconds: This value, measured in seconds, defines the maximum time interval between messages sent or received. In the absence of a data-related message during the time period, the client sends a very small "ping" message, which the server will acknowledge.
Type: int
Default: 60
Importance: medium
max.retry.time.ms: The maximum time in milliseconds (ms) the connector will spend backing off and retrying failed operations (connecting to the MQTT broker and publishing records).
Type: int
Default: 30000 (30 seconds)
Importance: medium
mqtt.retained.enabled: Set it to true for messages to be retained for future clients.
Type: boolean
Default: true
Importance: medium
mqtt.qos: The QOS level to write messages to the MQTT broker with.
Type: int
Default: 0
Importance: medium
Consumer configuration
max.poll.interval.ms: The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
Type: long
Default: 300000 (5 minutes)
Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
Importance: low
max.poll.records: The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
Type: long
Default: 500
Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
Importance: low
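For example, a sketch of setting both consumer properties explicitly in the connector configuration JSON (the values shown are simply the defaults):
"max.poll.interval.ms": "300000",
"max.poll.records": "500"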
Number of tasks for this connector
tasks.max: Maximum number of tasks for the connector.
Type: int
Valid Values: [1,…]
Importance: high
Auto-restart policy
auto.restart.on.user.error: Enable connector to automatically restart on user-actionable errors.
Type: boolean
Default: true
Importance: medium
Additional Configs
consumer.override.auto.offset.reset: Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the "earliest" offset (the default) or the "latest" offset. You can also select "none" if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset
Type: string
Importance: low
consumer.override.isolation.level: Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level
Type: string
Importance: low
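For example, to start from the latest offset when no committed position exists and to read only committed transactional messages, a configuration sketch could include:
"consumer.override.auto.offset.reset": "latest",
"consumer.override.isolation.level": "read_committed"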
header.converter: The converter class for the headers. This is used to serialize and deserialize the headers of the messages.
Type: string
Importance: low
value.converter.allow.optional.map.keys: Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
Type: boolean
Importance: low
value.converter.auto.register.schemas: Specify if the Serializer should attempt to register the Schema.
Type: boolean
Importance: low
value.converter.connect.meta.data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
Type: boolean
Importance: low
value.converter.enhanced.avro.schema.support: Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
Type: boolean
Importance: low
value.converter.enhanced.protobuf.schema.support: Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.flatten.unions: Whether to flatten unions (oneofs). Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.generate.index.for.unions: Whether to generate an index suffix for unions. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.generate.struct.for.nulls: Whether to generate a struct variable for null values. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.int.for.enums: Whether to represent enums as integers. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.latest.compatibility.strict: Verify latest subject version is backward compatible when use.latest.version is true.
Type: boolean
Importance: low
value.converter.object.additional.properties: Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
Type: boolean
Importance: low
value.converter.optional.for.nullables: Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.optional.for.proto2: Whether proto2 optionals are supported. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.use.latest.version: Use latest version of schema in subject for serialization when auto.register.schemas is false.
Type: boolean
Importance: low
value.converter.use.optional.for.nonrequired: Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.
Type: boolean
Importance: low
value.converter.wrapper.for.nullables: Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
Type: boolean
Importance: low
value.converter.wrapper.for.raw.primitives: Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
Type: boolean
Importance: low
key.converter.key.subject.name.strategy: How to construct the subject name for key schema registration.
Type: string
Default: TopicNameStrategy
Importance: low
value.converter.decimal.format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
BASE64 to serialize DECIMAL logical types as base64 encoded binary data and
NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
Type: string
Default: BASE64
Importance: low
value.converter.flatten.singleton.unions: Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
Type: boolean
Default: false
Importance: low
value.converter.reference.subject.name.strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
Type: string
Default: DefaultReferenceSubjectNameStrategy
Importance: low
value.converter.value.subject.name.strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.
Type: string
Default: TopicNameStrategy
Importance: low
Egress allowlist
connector.egress.whitelist
Type: string
Default: “”
Importance: high
Next Steps
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
