Dynamically Removing Master and Slave Nodes from a Redis Cluster


Remove the slave node first.

1. Find the IP, port, and node ID of the Redis nodes to remove

Here we will remove 7008 (a master) and 7009 (a slave).

Node 7009 details:

IP: 192.168.1.34

Port: 7009

Node ID: eab2aa8fe15448bc9b9c012a36d102e740172eca

As shown in the figure:
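The node ID can also be read programmatically from the output of the `cluster nodes` command (shown later in this article): the first field of each line is the 40-character node ID and the second is the address. A minimal parsing sketch (the helper function is our own, not part of the original procedure):

```python
def find_node_id(cluster_nodes_output: str, address: str) -> str:
    """Return the node ID for a given host:port from raw CLUSTER NODES output."""
    for line in cluster_nodes_output.splitlines():
        fields = line.split()
        # Redis >= 4 appends "@cport" to the address field; strip it either way
        if len(fields) >= 2 and fields[1].split("@")[0] == address:
            return fields[0]
    raise KeyError(address)

# Sample line taken from the cluster used in this article:
sample = ("4f257a0d79ae59ef55b8dfe81e6f89f945469b78 127.0.0.1:7008 "
          "myself,master - 0 0 7 connected")
print(find_node_id(sample, "127.0.0.1:7008"))
```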

 

2. Removing the slave node

2.1 Delete the slave node

Command:

     ~/redis-3.2.8/src/redis-trib.rb del-node <target-node-ip>:<target-node-port> <target-node-id>

For example:

    ~/redis-3.2.8/src/redis-trib.rb del-node 192.168.1.34:7009  eab2aa8fe15448bc9b9c012a36d102e740172eca

[root@mini34 ~]# ~/redis-3.2.8/src/redis-trib.rb del-node 192.168.1.34:7009  eab2aa8fe15448bc9b9c012a36d102e740172eca

>>> Removing node eab2aa8fe15448bc9b9c012a36d102e740172eca from cluster 192.168.1.34:7009

>>> Sending CLUSTER FORGET messages to the cluster...

>>> SHUTDOWN the node.

[root@mini34 ~]#

As shown in the figure, node 7009 is gone; the deletion succeeded.
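The log above reveals the two-step protocol behind `del-node`: every remaining node is sent CLUSTER FORGET for the target, and the target itself is then shut down. A toy sketch that lays out that plan (the helper function is our own illustration; it only builds the command list, it does not talk to Redis):

```python
def forget_plan(nodes, target, target_id):
    """Commands del-node issues, per the log above: CLUSTER FORGET on
    every other node, then SHUTDOWN on the node being removed."""
    cmds = [(node, f"CLUSTER FORGET {target_id}")
            for node in nodes if node != target]
    cmds.append((target, "SHUTDOWN"))
    return cmds

nodes = ["192.168.1.34:7008", "192.168.1.34:7009", "127.0.0.1:7001"]
plan = forget_plan(nodes, "192.168.1.34:7009",
                   "eab2aa8fe15448bc9b9c012a36d102e740172eca")
for addr, cmd in plan:
    print(addr, "->", cmd)
```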

 

3. Removing the master node

Before deleting a master you must first migrate its slots (and the data in them) to other nodes; otherwise that data will be lost.
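Slots matter because they are what ties keys to masters: each key hashes to one of 16384 slots via CRC16, and the master owning that slot stores the key. A minimal sketch of the slot computation (our own illustration, including the `{...}` hash-tag rule from the cluster specification):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its cluster slot, honoring {hash tags}."""
    s = key.find("{")
    if s != -1:
        e = key.find("}", s + 1)
        if e != -1 and e != s + 1:  # non-empty tag: hash only its contents
            key = key[s + 1:e]
    return crc16(key.encode()) % 16384

print(key_slot("foo"))  # CLUSTER KEYSLOT foo reports 12182
```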

3.1 Migrate the data off the master to be deleted

Command:

~/redis-3.2.8/src/redis-trib.rb reshard <target-master-ip>:<target-master-port>

For example:

~/redis-3.2.8/src/redis-trib.rb reshard mini34:7008

[root@mini34 ~]# ~/redis-3.2.8/src/redis-trib.rb reshard mini34:7008

>>> Performing Cluster Check (using node mini34:7008)

M: 4f257a0d79ae59ef55b8dfe81e6f89f945469b78 mini34:7008

   slots:0-5,5461-5467,10923-10928 (19 slots) master

   0 additional replica(s)

S: 5bce6e6e8db64dfb9f4dc704739ce5ba55a4e956 127.0.0.1:7004

   slots: (0 slots) slave

   replicates b37b29006c1b7c205cac9ccec729f020224370fa

S: 00f224f0da87d31321630dfbaa9ef0170b745706 127.0.0.1:7006

   slots: (0 slots) slave

   replicates c24f0c5d00233b81a79b7cf3b3d28dbcef123328

S: 8d440f94d4fe20cad6a6711e829461187d2141b1 127.0.0.1:7005

   slots: (0 slots) slave

   replicates e75adb7b8c4bb8a9f2256cd7291195a5664f5d54

M: e75adb7b8c4bb8a9f2256cd7291195a5664f5d54 127.0.0.1:7002

   slots:5468-10922 (5455 slots) master

   1 additional replica(s)

M: b37b29006c1b7c205cac9ccec729f020224370fa 127.0.0.1:7001

   slots:6-5460 (5455 slots) master

   1 additional replica(s)

M: c24f0c5d00233b81a79b7cf3b3d28dbcef123328 127.0.0.1:7003

   slots:10929-16383 (5455 slots) master

   1 additional replica(s)

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

How many slots to move (our node 7008 holds only 19 slots):

How many slots do you want to move (from 1 to 16384)? 19

Which node receives them:

What is the receiving node ID? b37b29006c1b7c205cac9ccec729f020224370fa

Please enter all the source node IDs.

  Type 'all' to use all the nodes as source nodes for the hash slots.

  Type 'done' once you entered all the source nodes IDs.

Which node to move slots from (here, from 7008):

Source node #1:4f257a0d79ae59ef55b8dfe81e6f89f945469b78

Type done to finish entering source nodes and start:

Source node #2:done

 

Ready to move 19 slots.

  Source nodes:

    M: 4f257a0d79ae59ef55b8dfe81e6f89f945469b78 mini34:7008

   slots:0-5,5461-5467,10923-10928 (19 slots) master

   0 additional replica(s)

  Destination node:

    M: b37b29006c1b7c205cac9ccec729f020224370fa 127.0.0.1:7001

   slots:6-5460 (5455 slots) master

   1 additional replica(s)

  Resharding plan:

    Moving slot 0 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 1 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 2 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 3 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 4 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5461 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5462 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5463 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5464 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5465 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5466 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 5467 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 10923 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 10924 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 10925 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 10926 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 10927 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

    Moving slot 10928 from 4f257a0d79ae59ef55b8dfe81e6f89f945469b78

Confirm the proposed plan by typing yes:

Do you want to proceed with the proposed reshard plan (yes/no)? yes

Moving slot 0 from mini34:7008 to 127.0.0.1:7001:

Moving slot 1 from mini34:7008 to 127.0.0.1:7001:

Moving slot 2 from mini34:7008 to 127.0.0.1:7001:

Moving slot 3 from mini34:7008 to 127.0.0.1:7001:

Moving slot 4 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5461 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5462 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5463 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5464 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5465 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5466 from mini34:7008 to 127.0.0.1:7001:

Moving slot 5467 from mini34:7008 to 127.0.0.1:7001:

Moving slot 10923 from mini34:7008 to 127.0.0.1:7001:

Moving slot 10924 from mini34:7008 to 127.0.0.1:7001:

Moving slot 10925 from mini34:7008 to 127.0.0.1:7001:

Moving slot 10926 from mini34:7008 to 127.0.0.1:7001:

Moving slot 10927 from mini34:7008 to 127.0.0.1:7001:

Moving slot 10928 from mini34:7008 to 127.0.0.1:7001:

Check node 7008's state:

mini34:7008> cluster nodes

5bce6e6e8db64dfb9f4dc704739ce5ba55a4e956 127.0.0.1:7004 slave b37b29006c1b7c205cac9ccec729f020224370fa 0 1495136054047 8 connected

00f224f0da87d31321630dfbaa9ef0170b745706 127.0.0.1:7006 slave c24f0c5d00233b81a79b7cf3b3d28dbcef123328 0 1495136056060 3 connected

8d440f94d4fe20cad6a6711e829461187d2141b1 127.0.0.1:7005 slave e75adb7b8c4bb8a9f2256cd7291195a5664f5d54 0 1495136058077 2 connected

Node 7008 no longer holds any slots:

4f257a0d79ae59ef55b8dfe81e6f89f945469b78 127.0.0.1:7008 myself,master - 0 0 7 connected

e75adb7b8c4bb8a9f2256cd7291195a5664f5d54 127.0.0.1:7002 master - 0 1495136056565 2 connected 5468-10922

b37b29006c1b7c205cac9ccec729f020224370fa 127.0.0.1:7001 master - 0 1495136057069 8 connected 0-5467 10923-10928

c24f0c5d00233b81a79b7cf3b3d28dbcef123328 127.0.0.1:7003 master - 0 1495136052032 3 connected 10929-16383

mini34:7008>

 

The data migration succeeded.
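You can sanity-check the post-reshard layout by confirming that the remaining masters still cover all 16384 slots, as `redis-trib.rb` itself verifies. A small sketch using the slot ranges from the `cluster nodes` output above (the helper function is our own):

```python
def covered_slots(range_strings):
    """Expand 'lo-hi' slot range strings into the set of covered slots."""
    slots = set()
    for r in range_strings:
        lo, _, hi = r.partition("-")
        slots.update(range(int(lo), int(hi or lo) + 1))
    return slots

# Slot ranges held by the three remaining masters after the reshard:
ranges = ["0-5467", "10923-10928",   # 7001
          "5468-10922",              # 7002
          "10929-16383"]             # 7003
print(len(covered_slots(ranges)))  # 16384: every slot is owned
```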

3.2 Delete the master node

Command:

     ~/redis-3.2.8/src/redis-trib.rb del-node <target-node-ip>:<target-node-port> <target-node-id>

For example:

    ~/redis-3.2.8/src/redis-trib.rb del-node 192.168.1.34:7008  4f257a0d79ae59ef55b8dfe81e6f89f945469b78

 

[root@mini34 ~]# ~/redis-3.2.8/src/redis-trib.rb del-node 192.168.1.34:7008  4f257a0d79ae59ef55b8dfe81e6f89f945469b78

>>> Removing node 4f257a0d79ae59ef55b8dfe81e6f89f945469b78 from cluster 192.168.1.34:7008

>>> Sending CLUSTER FORGET messages to the cluster...

>>> SHUTDOWN the node.

# Deletion succeeded

[root@mini34 ~]#

mini34:7006> cluster nodes

b37b29006c1b7c205cac9ccec729f020224370fa 127.0.0.1:7001 master - 0 1495136336818 8 connected 0-5467 10923-10928

00f224f0da87d31321630dfbaa9ef0170b745706 127.0.0.1:7006 myself,slave c24f0c5d00233b81a79b7cf3b3d28dbcef123328 0 0 6 connected

8d440f94d4fe20cad6a6711e829461187d2141b1 127.0.0.1:7005 slave e75adb7b8c4bb8a9f2256cd7291195a5664f5d54 0 1495136332788 5 connected

c24f0c5d00233b81a79b7cf3b3d28dbcef123328 127.0.0.1:7003 master - 0 1495136333796 3 connected 10929-16383

e75adb7b8c4bb8a9f2256cd7291195a5664f5d54 127.0.0.1:7002 master - 0 1495136335812 2 connected 5468-10922

5bce6e6e8db64dfb9f4dc704739ce5ba55a4e956 127.0.0.1:7004 slave b37b29006c1b7c205cac9ccec729f020224370fa 0 1495136334806 8 connected

mini34:7006>
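Before running `del-node` on a master it is worth confirming, as we did above, that the node no longer owns any slots; `redis-trib.rb` refuses to delete a non-empty master. In `cluster nodes` output, any slot ranges appear after the `connected` link-state field, so an emptiness check can be sketched like this (the helper is our own and assumes a connected node's line):

```python
def owns_slots(cluster_nodes_line: str) -> bool:
    """True if this CLUSTER NODES line lists slot ranges after 'connected'."""
    fields = cluster_nodes_line.split()
    return len(fields) > fields.index("connected") + 1

# Lines taken from the output earlier in this article:
empty = ("4f257a0d79ae59ef55b8dfe81e6f89f945469b78 127.0.0.1:7008 "
         "myself,master - 0 0 7 connected")
full = ("b37b29006c1b7c205cac9ccec729f020224370fa 127.0.0.1:7001 "
        "master - 0 1495136057069 8 connected 0-5467 10923-10928")
print(owns_slots(empty), owns_slots(full))  # False True
```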

 

4. Flushing the data on all cluster nodes

To wipe the data on every master and slave in a Redis cluster, broadcast `FLUSHALL` (all databases) or `FLUSHDB` (only the current database) to the cluster. Before doing this, back up any important data; in production, it is safest to stop writes first and announce the maintenance window to the affected applications.

Command:

```bash
# Connect through any cluster node and broadcast FLUSHALL to every member
# (the --cluster call subcommand requires redis-cli from Redis 5.0 or later)
redis-cli --cluster call <host>:<port> FLUSHALL
```

The command above walks the whole cluster and runs the given instruction on every node. If you only want to delete a certain range of keys, write a script with finer-grained logic instead of wiping everything. While `--cluster call` conveniently fans one command out to the whole cluster, connecting to each instance individually with a client also works; it is just slower and easier to miss a node.

Using Python to flush in bulk

Besides the official CLI, a language such as Python makes flow control and error handling easier to manage. The snippet below is a sketch (the listed addresses are placeholders): it connects to each node, reads its replication role from `INFO`, and flushes only the masters, because replicas reject externally issued writes (including a flush) and inherit the result from their master anyway:

```python
import redis  # pip install redis

def flush_redis_cluster(hosts_ports):
    for host_port in hosts_ports:
        host, _, port = host_port.partition(":")
        try:
            client = redis.Redis(host=host, port=int(port))
            role = client.info(section="replication")["role"]
            if role != "master":
                continue  # replicas refuse writes; the flush replicates down
            print(f"Flushing master at {host_port}...")
            client.flushall()
        except Exception as e:
            print(f"{host_port}: {e}")

if __name__ == "__main__":
    hosts_ports = ["192.168.1.52:8000", "other_node_ip:port"]
    flush_redis_cluster(hosts_ports)
```