
DTLE Usage Guide

Deployment overview
Server: 192.168.110.201

Installed software                                              Used for
dtle 3.21.11.0, Docker 20.10.12                                 Docker host, with dtle installed
MySQL 5.7 container x 2                                         MySQL 5.7 -> MySQL 5.7 data replication
MySQL 8 container, Kafka 3.0.0 container, ZooKeeper container   MySQL 8 -> Kafka 3.0.0 data replication

Install Docker
Step 1: Install the required system utilities

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Commonly used tools (optional): # yum install -y wget net-tools lsof telnet tree nmap sysstat lrzsz tar jq bind-utils

Step 2: Add the Docker CE repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: Refresh the cache and install Docker CE
sudo yum makecache
sudo yum -y install docker-ce

Step 4: Change the Docker storage path

systemctl stop docker

mv /var/lib/docker /data

vi /usr/lib/systemd/system/docker.service

# Add the --graph=/data/docker option to point Docker at the new storage path
ExecStart=/usr/bin/dockerd --graph=/data/docker -H fd:// --containerd=/run/containerd/containerd.sock

systemctl daemon-reload
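
Note: --graph still works on Docker 20.10 but is deprecated on newer releases. As a hedged alternative sketch (not part of the original steps), the storage path can be set in /etc/docker/daemon.json instead; merge the key with any existing keys there (e.g. the registry mirrors added in Step 7):

{
  "data-root": "/data/docker"
}

sudo systemctl daemon-reload && sudo systemctl restart docker
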
Step 5: Start the Docker service
systemctl start docker

Step 6: Verify the installation
Run: docker version

Step 7: Configure a registry mirror

Use a mirror accelerator by editing the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker


sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://36y72vr6.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
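
To confirm that the mirror is in effect (a quick check; the output layout may differ slightly between Docker versions):

docker info | grep -A1 "Registry Mirrors"
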

Installing DTLE from the RPM package
Download the dtle RPM package and install it with the following command:

rpm -ivh --prefix /opt/dtle dtle-<version>.rpm

The configuration files are located in:

/opt/dtle/etc/dtle/

Start the services:

systemctl start dtle-consul dtle-nomad

systemctl enable dtle-consul dtle-nomad # start automatically at boot
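
To verify that both services came up (a quick check, not part of the original steps):

systemctl status dtle-consul dtle-nomad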

The log files are located in /opt/dtle/var/log/nomad/

Note: recent DTLE releases bundle two components, Consul and Nomad. Nomad handles job scheduling, while Consul acts as the registry and stores replication metadata (such as the replication position, GTID, NatsAddr, and so on).
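
Because Consul holds the replication progress, it can be inspected directly once jobs are running. A sketch using Consul's standard KV API (job_name is a placeholder for an actual job name):

curl -s "192.168.110.201:8500/v1/kv/dtle/job_name?recurse" | jq
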

Configure DTLE's Consul. Configuration file path: /opt/dtle/etc/dtle
# Rename for each node
node_name = "consul0"
data_dir = "/opt/dtle/var/lib/consul"
ui = true

disable_update_check = true
# Address that should be bound to for internal cluster communications
bind_addr = "192.168.110.201"
# Address to which Consul will bind client interfaces, including the HTTP and DNS servers
client_addr = "192.168.110.201"
advertise_addr = "192.168.110.201"
ports = {
  # Customize if necessary. -1 means disable.
  #dns = -1
  #server = 8300
  #http = 8500
  #serf_wan = -1
  #serf_lan = 8301
}

limits = {
  http_max_conns_per_client = 4096
}

server = true
# For single node
bootstrap_expect = 1

# For 3-node cluster
#bootstrap_expect = 3
#retry_join = ["127.0.0.1", "127.0.0.2", "127.0.0.3"] # will use default serf port

log_level = "INFO"
log_file = "/opt/dtle/var/log/consul/"
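
After starting dtle-consul, a quick way to confirm that Consul is up and has elected a leader (using the standard Consul status API; adjust the address for your node):

curl -s "192.168.110.201:8500/v1/status/leader"
# should return something like "192.168.110.201:8300"
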

Configure DTLE's Nomad. Configuration file path: /opt/dtle/etc/dtle
name = "nomad0" # Rename for each node.
datacenter = "dc1" # Rename for each data center.
data_dir = "/opt/dtle/var/lib/nomad"
plugin_dir = "/opt/dtle/usr/share/dtle/nomad-plugin"

log_level = "Info"
log_file = "/opt/dtle/var/log/nomad/"

disable_update_check = true

bind_addr = "192.168.110.201"
# Change ports if multiple nodes run on the same machine.
ports {
  http = 4646
  rpc = 4647
  serf = 4648
}
addresses {
  # Default to `bind_addr`. Or set individually here.
  #http = "127.0.0.1"
  #rpc = "127.0.0.1"
  #serf = "127.0.0.1"
}
advertise {
  http = "192.168.110.201:4646"
  rpc = "192.168.110.201:4647"
  serf = "192.168.110.201:4648"
}

server {
  enabled = true

  bootstrap_expect = 1
  # Set bootstrap_expect to 3 for multiple (high-availability) nodes.
  # Multiple nomad nodes will join via consul.
}

client {
  enabled = true
  options = {
    "driver.blacklist" = "docker,exec,java,mock,qemu,rawexec,rkt"
  }

  # Will auto-join other servers via consul.
}

consul {
  # The dtle plugin and nomad itself use consul separately.
  # Nomad uses consul for server_auto_join and client_auto_join.
  # Only one consul address can be set here. Use the nearest one,
  # e.g. the one running on the same machine as the nomad server.
  address = "192.168.110.201:8500"
}

# nomad metrics
telemetry {
  prometheus_metrics = true
  use_node_name = true
}
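
With prometheus_metrics enabled, Nomad exposes metrics that Prometheus can scrape. A quick manual check against the standard Nomad metrics endpoint (adjust the address for your node):

curl -s "192.168.110.201:4646/v1/metrics?format=prometheus" | head
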

plugin "dtle" {
config {
log_level = "Info" # Repeat nomad log level here.
data_dir = "/opt/dtle/var/lib/nomad"
nats_bind = "127.0.0.1:8193"
nats_advertise = "127.0.0.1:8193"
# Repeat the consul address above.
consul = "192.168.110.201:8500"

# By default, API compatibility layer is disabled.


api_addr = "127.0.0.1:8190" # for compatibility API
nomad_addr = "127.0.0.1:4646" # compatibility API need to access a nomad
server
# rsa_private_key_path indicate the file containing the private key for
decrypting mysql password that got through http api
# rsa_private_key_path = "xxx"
# cert_file_path = "PATH_TO_CERT_FILE"
# key_file_path = "PATH_TO_KEY_FILE"

publish_metrics = false
stats_collection_interval = 15
#ui_dir = "PATH_TO_UI_DIR"
}
}

Unidirectional MySQL replication
The following steps use Docker containers to quickly demonstrate how to set up a unidirectional MySQL replication environment.

Create a network

docker network create dtle-net

Create the source and target MySQL instances

docker run --name mysql-src -e MYSQL_ROOT_PASSWORD=pass -p 33061:3306 \
  --network=dtle-net -d mysql:5.7 \
  --gtid-mode=ON --enforce-gtid-consistency=1 --log-bin=bin --server-id=1

docker run --name mysql-dst -e MYSQL_ROOT_PASSWORD=pass -p 33062:3306 \
  --network=dtle-net -d mysql:5.7 \
  --gtid-mode=ON --enforce-gtid-consistency=1 --log-bin=bin --server-id=2

Check connectivity:

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "select @@version\G"


< *************************** 1. row ***************************
@@version: 5.7.23-log

> mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "select @@version\G"


< *************************** 1. row ***************************
@@version: 5.7.23-log

Check that the DTLE node is healthy:

> curl -XGET "192.168.110.201:4646/v1/nodes" -s | jq


< [
    {
      "Address": "192.168.110.201",
      "Datacenter": "dc1",
      "Drivers": {
        "dtle": {
          "Attributes": {
            "driver.dtle": "true",
            "driver.dtle.version": "..."
          },
          "Detected": true,
          "Healthy": true,
        }
      },
      "ID": "65ff2f9a-a9fa-997c-cce0-9bc0b4f3396c",
      "Name": "nomad0",
      "Status": "ready",
    }
  ]
# (some fields omitted)

Prepare the job definition file
Create a file job.json with the following content:

{
  "Job": {
    "ID": "dtle-demo1",
    "Datacenters": ["dc1"],
    "TaskGroups": [{
      "Name": "src",
      "Tasks": [{
        "Name": "src",
        "Driver": "dtle",
        "Config": {
          "Gtid": "",
          "ReplicateDoDb": [{
            "TableSchema": "demo",
            "Tables": [{
              "TableName": "demo_tbl",
              "ColumnMapFrom": ["id", "mobile_type", "app_type", "version_name",
                "versions", "url", "operator", "operating_time", "version_num",
                "version_describe", "version_push_type", "enable"]
            }]
          }],
          "ConnectionConfig": {
            "Host": "172.18.0.2",
            "Port": 3306,
            "User": "root",
            "Password": "pass"
          }
        }
      }]
    }, {
      "Name": "dest",
      "Tasks": [{
        "Name": "dest",
        "Driver": "dtle",
        "Config": {
          "ConnectionConfig": {
            "Host": "172.18.0.3",
            "Port": 3306,
            "User": "root",
            "Password": "pass"
          }
        }
      }]
    }]
  }
}

This defines:

The connection settings for the source and the target

The table to replicate: demo.demo_tbl

An empty GTID, meaning the job performs a full copy followed by incremental replication. To test incremental-only replication, specify a valid GTID set (see the sketch after this list).
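
A minimal incremental-only sketch (the GTID set below is a placeholder; read the real value from the source first):

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "SELECT @@gtid_executed\G"

Then set it in job.json, for example:

"Gtid": "<source-server-uuid>:1-1000",
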

准备测试数据
可在源端准备提前建表 demo.demo_tbl , 并插入数据, 以体验全量复制过程. 也可不提前建表.
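
A minimal sketch for preparing test data that matches the ColumnMapFrom list above (the column types here are assumptions; adjust them to your actual schema):

mysql -h 127.0.0.1 -P 33061 -uroot -ppass <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE demo.demo_tbl (
  id INT PRIMARY KEY,
  mobile_type VARCHAR(32),
  app_type VARCHAR(32),
  version_name VARCHAR(64),
  versions VARCHAR(64),
  url VARCHAR(255),
  operator VARCHAR(64),
  operating_time DATETIME,
  version_num INT,
  version_describe VARCHAR(255),
  version_push_type TINYINT,
  enable TINYINT
);
INSERT INTO demo.demo_tbl (id, mobile_type, app_type, version_name, versions, url, operator,
  operating_time, version_num, version_describe, version_push_type, enable)
VALUES (1, 'android', 'app', 'v1.0', '1.0.0', 'http://example.com', 'admin', NOW(),
  1, 'first release', 0, 1);
SQL
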

Create the replication job

> curl -XPOST "http://192.168.110.201:4646/v1/jobs" -d @job.json -s | jq


< {
"EvalCreateIndex": 50,
"EvalID": "a5e9c353-5eb9-243e-983d-bc096a93ddca",
"Index": 50,
"JobModifyIndex": 49,
"KnownLeader": false,
"LastContact": 0,
"Warnings": ""
}

Check the job status

> curl -XGET "http://192.168.110.201:4646/v1/job/dtle-demo1" -s | jq '.Status'


< "running"

Test
You can now run DDL/DML statements of all kinds against demo.demo_tbl on the source and check whether the data on the target stays consistent.
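
For example, a quick consistency check (assuming the sketch schema above, where only id is NOT NULL):

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "INSERT INTO demo.demo_tbl (id) VALUES (2)"
> mysql -h 127.0.0.1 -P 33062 -uroot -ppass -e "SELECT * FROM demo.demo_tbl WHERE id = 2"
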

MySQL to Kafka data change notification
Create a network

docker network create dtle-net

Create the source MySQL instance (if the containers from the previous demo are still running on the same host, remove them first or use different names and ports)

docker run --name mysql-src -e MYSQL_ROOT_PASSWORD=pass -p 33061:3306 -e TZ="Asia/Shanghai" \
  --network=dtle-net \
  -v /data/mysql-docker/conf/mysql8-src/my.cnf:/etc/mysql/my.cnf \
  -v /data/mysql-docker/data/mysql8-src:/var/lib/mysql \
  -d mysql:latest --gtid-mode=ON --enforce-gtid-consistency=1 --log-bin=bin --server-id=1

Check connectivity:

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "select @@version\G"


< *************************** 1. row ***************************
@@version: 8.0.27-log

Create the source table structure

> mysql -h 127.0.0.1 -P 33061 -uroot -ppass -e "CREATE DATABASE demo; CREATE
TABLE demo.demo_tbl(a int primary key)"

Create the target Kafka

docker run --name kafka-zookeeper -p 2181:2181 -e ALLOW_ANONYMOUS_LOGIN=yes \
  --network=dtle-net -d bitnami/zookeeper

docker run --name kafka-dst -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=kafka-zookeeper:2181 \
  -e ALLOW_PLAINTEXT_LISTENER=yes --network=dtle-net -d bitnami/kafka

Check connectivity:

> docker run -it --rm \
    --network dtle-net \
    -e KAFKA_ZOOKEEPER_CONNECT=kafka-zookeeper:2181 \
    bitnami/kafka:latest kafka-topics.sh --list --zookeeper kafka-zookeeper:2181
< Welcome to the Bitnami kafka container
  Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
  Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues

Create dtle

docker run --name dtle-consul -p 8500:8500 --network=dtle-net -d consul:latest

docker run --name dtle -p 4646:4646 --network=dtle-net -d actiontech/dtle

Check that it is healthy:

> curl -XGET "127.0.0.1:4646/v1/nodes" -s | jq


< [{...}]

Prepare the job definition file
Create a file job.json with the following content:

{
  "Job": {
    "ID": "m2kdemo",
    "Datacenters": ["dc1"],
    "TaskGroups": [{
      "Name": "src",
      "Tasks": [{
        "Name": "src",
        "Driver": "dtle",
        "Config": {
          "Gtid": "",
          "ReplicateDoDb": [{
            "TableSchema": "demo",
            "Tables": [{
              "TableName": "demo_tbl"
            }]
          }],
          "ConnectionConfig": {
            "Host": "localhost",
            "Port": 33061,
            "User": "root",
            "Password": "pass"
          }
        }
      }]
    }, {
      "Name": "dest",
      "Tasks": [{
        "Name": "dest",
        "Driver": "dtle",
        "Config": {
          "KafkaConfig": {
            "Topic": "demo-topic",
            "Brokers": ["localhost:9092"],
            "Converter": "json"
          }
        }
      }]
    }]
  }
}

This defines:

The connection settings for the source MySQL
The broker address of the target Kafka
The table to replicate: demo.demo_tbl
An empty GTID, meaning the job performs a full copy followed by incremental replication. To test incremental-only replication, specify a valid GTID set.

Note that Host and Brokers must be reachable from the node where the dtle task actually runs; if dtle itself runs as a container on dtle-net, the container addresses (e.g. mysql-src:3306 and kafka-dst:9092) may be needed instead of localhost.

Create the replication job

> curl -XPOST "http://192.168.110.201:4646/v1/jobs" -d @job.json -s | jq


< {...}

Check the job status:

> curl -XGET "192.168.110.201:4646/v1/job/m2kdemo" -s | jq '.Status'


< "running"

Test
Write data on the source:

> mysql -h 192.168.110.201 -P 33061 -uroot -ppass -e "INSERT INTO demo.demo_tbl values(1)"
...
Verify that the corresponding topic exists:

> docker run -it --rm \
    --network dtle-net \
    -e KAFKA_ZOOKEEPER_CONNECT=kafka-zookeeper:2181 \
    bitnami/kafka:latest kafka-topics.sh --list --zookeeper kafka-zookeeper:2181
< Welcome to the Bitnami kafka container
  Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
  Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues

  demo-topic.demo.demo_tbl

Verify the data:
> docker run -it --rm \
    --network dtle-net \
    -e KAFKA_ZOOKEEPER_CONNECT=kafka-zookeeper:2181 \
    bitnami/kafka:latest kafka-console-consumer.sh --bootstrap-server kafka-dst:9092 \
    --topic demo-topic.demo.demo_tbl --from-beginning
< ...
{"schema":{"type":"struct","optional":false,"fields":[
  {"type":"struct","optional":true,"field":"before","fields":[
    {"type":"int32","optional":false,"field":"a"}],"name":"demo-topic.demo.demo_tbl.Value"},
  {"type":"struct","optional":true,"field":"after","fields":[
    {"type":"int32","optional":false,"field":"a"}],"name":"demo-topic.demo.demo_tbl.Value"},
  {"type":"struct","optional":false,"field":"source","fields":[
    {"type":"string","optional":true,"field":"version"},
    {"type":"string","optional":false,"field":"name"},
    {"type":"int64","optional":false,"field":"server_id"},
    {"type":"int64","optional":false,"field":"ts_sec"},
    {"type":"string","optional":true,"field":"gtid"},
    {"type":"string","optional":false,"field":"file"},
    {"type":"int64","optional":false,"field":"pos"},
    {"type":"int32","optional":false,"field":"row"},
    {"type":"boolean","optional":true,"field":"snapshot"},
    {"type":"int64","optional":true,"field":"thread"},
    {"type":"string","optional":true,"field":"db"},
    {"type":"string","optional":true,"field":"table"}],"name":"io.debezium.connector.mysql.Source"},
  {"type":"string","optional":false,"field":"op"},
  {"type":"int64","optional":true,"field":"ts_ms"}],
  "name":"demo-topic.demo.demo_tbl.Envelope","version":1},
 "payload":{"before":null,"after":{"a":11},
  "source":{"version":"0.0.1","name":"demo-topic","server_id":0,"ts_sec":0,"gtid":null,
   "file":"","pos":0,"row":1,"snapshot":true,"thread":null,"db":"demo","table":"demo_tbl"},
  "op":"c","ts_ms":1539760682507}}

You can now run DDL/DML statements of all kinds against demo.demo_tbl on the source and check whether the data on the target stays consistent.

Managing replication jobs
List jobs with nomad

nomad job status -address=http://192.168.110.201:4646

Delete (purge) a job

nomad job stop -purge -address=http://192.168.110.201:4646 dtle-demo

Add a job

nomad job run -address="http://192.168.110.201:4646" job.hcl

Check a specific job
nomad job status -address=http://192.168.110.201:4646 dtle-demo

On Consul, use recurse to delete all keys under a job name (this is mainly where replication progress is stored, i.e. the GTID or the binlog file & position):

$ curl -XDELETE "192.168.110.201:8500/v1/kv/dtle/job_name?recurse"

Alternatively, open 192.168.110.201:8500 in a browser and manage it through the Web UI.

Troubleshooting
The job execution status can be viewed at http://192.168.110.201:4646

For detailed error logs, check /opt/dtle/var/log/nomad
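
For example, a hedged sketch for digging into a failing job (the exact log file names under the directory may differ between DTLE versions, and dtle-demo1 stands in for whichever job is failing):

tail -f /opt/dtle/var/log/nomad/*.log

curl -s "192.168.110.201:4646/v1/job/dtle-demo1/allocations" | jq '.[].TaskStates'
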

https://github.com/actiontech/dtle

https://actiontech.github.io/dtle-docs-cn
