MySQL-MMM Installation Guide (Multi-Master Replication Manager for MySQL)

               


A minimal MMM installation requires at least two database servers and one monitoring server. The MySQL cluster configured below consists of four database servers and one monitoring server:

function         ip            hostname  server id
monitoring host  192.168.0.10  mon       -
master 1         192.168.0.11  db1       1
master 2         192.168.0.12  db2       2
slave 1          192.168.0.13  db3       3
slave 2          192.168.0.14  db4       4

If you are installing this for personal study, getting hold of five machines at once is not easy; virtual machines are perfectly sufficient.

After the configuration is complete, applications access the MySQL cluster through the following virtual IPs, which MMM assigns to the different servers.

ip             role    description
192.168.0.100  writer  applications should connect to this IP for write queries
192.168.0.101  reader  applications should connect to one of these IPs for read queries
192.168.0.102  reader
192.168.0.103  reader
192.168.0.104  reader
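For example, once MMM is running, an application opens write connections against the writer VIP and read connections against one of the reader VIPs. A minimal illustration with the mysql client (the user app_user and database app_db are hypothetical placeholders):

mysql -h 192.168.0.100 -u app_user -p app_db    # writes go through the writer VIP
mysql -h 192.168.0.101 -u app_user -p app_db    # reads can use any of the reader VIPs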

The architecture diagram is as follows:

2. Basic configuration of master 1

First we install MySQL on all hosts:

aptitude install mysql-server

Then we edit the configuration file /etc/mysql/my.cnf and add the following lines - be sure to use different server ids for all hosts:

server_id           = 1
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
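On the other hosts only the server id differs; according to the table at the top of this guide, db2 would use:

server_id           = 2

and db3 and db4 would use 3 and 4 respectively.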

Then remove the following entry:

bind-address = 127.0.0.1

Set auto_increment_increment to the number of masters:

auto_increment_increment = 2

Set auto_increment_offset to a unique number on each server, no greater than auto_increment_increment (1 on db1, 2 on db2):

auto_increment_offset = 1
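With auto_increment_increment = 2 and offsets 1 and 2, the two masters generate interleaved, non-colliding AUTO_INCREMENT values: db1 produces 1, 3, 5, … and db2 produces 2, 4, 6, …, so rows inserted on either master never collide on the primary key once replication is running. A quick way to see this (the table t is purely illustrative):

(db1) mysql> CREATE TABLE test.t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10));
(db1) mysql> INSERT INTO test.t (v) VALUES ('a'), ('b'), ('c');
(db1) mysql> SELECT id FROM test.t;    -- 1, 3, 5 on db1; the same inserts on db2 would yield 2, 4, 6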

Do not bind to any specific IP; use 0.0.0.0 instead:

bind-address = 0.0.0.0

Afterwards we need to restart MySQL for our changes to take effect:

/etc/init.d/mysql restart


3. Create users

Now we can create the required users. We'll need 3 different users:

function          description                                                                privileges
monitor user      used by the mmm monitor to check the health of the MySQL servers          REPLICATION CLIENT
agent user        used by the mmm agent to change read-only mode, replication master, etc.  SUPER, REPLICATION CLIENT, PROCESS
replication user  used for replication                                                       REPLICATION SLAVE
GRANT REPLICATION CLIENT                 ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%'   IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE                  ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';

Note: We could be more restrictive here regarding the hosts from which the users are allowed to connect: mmm_monitor is used from 192.168.0.10. mmm_agent and replication are used from 192.168.0.11 - 192.168.0.14.

Note: Don't use a replication_password longer than 32 characters
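To double-check that the accounts ended up as intended, you can inspect the grants (the host pattern matches the GRANT statements above):

mysql> SHOW GRANTS FOR 'mmm_monitor'@'192.168.0.%';
mysql> SHOW GRANTS FOR 'mmm_agent'@'192.168.0.%';
mysql> SHOW GRANTS FOR 'replication'@'192.168.0.%';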

4. Synchronisation of data between both databases

I'll assume that db1 contains the correct data. If you have an empty database, you still have to synchronize the accounts we have just created.

First make sure that no one is altering the data while we create a backup.

(db1) mysql> FLUSH TABLES WITH READ LOCK;

Then get the current position in the binary log. We will need these values when we set up the replication on db2, db3 and db4.

(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

DON'T CLOSE this mysql-shell. If you close it, the database lock will be removed. Open a second console and type:

db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql

Now we can remove the database-lock. Go to the first shell:

(db1) mysql> UNLOCK TABLES;

Copy the database backup to db2, db3 and db4.

db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp

Then import this into db2, db3 and db4:

db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql

Then flush the privileges on db2, db3 and db4. We have altered the user-table and mysql has to reread this table.

(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;

On Debian and Ubuntu, copy the password in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.
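A simple way to do that, assuming root SSH access to the other hosts (adjust the user to your environment):

db1$ scp /etc/mysql/debian.cnf root@192.168.0.12:/etc/mysql/
db1$ scp /etc/mysql/debian.cnf root@192.168.0.13:/etc/mysql/
db1$ scp /etc/mysql/debian.cnf root@192.168.0.14:/etc/mysql/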

All four databases now contain the same data. We can now set up replication to keep it that way.

Note: the import only adds the records from the dump file; it does not remove existing data. If the target servers already contain other data, drop those databases before importing the dump file.

5. Setup replication

Configure replication on db2, db3 and db4 with the following commands:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Please insert the values returned by “SHOW MASTER STATUS” on db1 at the <file> and <position> tags.
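With the example output of SHOW MASTER STATUS shown above (mysql-bin.000002, position 374), the command on db2 would read:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='mysql-bin.000002', master_log_pos=374;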

Start the slave-process on all 3 hosts:

(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;

Now check if the replication is running correctly on all hosts:

(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…
(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…
(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…
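The listings above are truncated; further down in the same output, make sure both replication threads are running and there is no lag:

           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
      Seconds_Behind_Master: 0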

Now we have to make db1 replicate from db2. First we have to determine the values for master_log_file and master_log_pos:

(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Now we configure replication on db1 with the following command:

(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Now insert the values returned by “SHOW MASTER STATUS” on db2 at the <file> and <position> tags.

Start the slave-process:

(db1) mysql> START SLAVE;

Now check if the replication is running correctly on db1:

(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.12
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
…

Replication between the nodes should now be complete. Try it by inserting some data on both db1 and db2 and checking that the data appears on all other nodes.
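For example (the database repl_test and table t1 are purely illustrative):

(db1) mysql> CREATE DATABASE repl_test;
(db1) mysql> CREATE TABLE repl_test.t1 (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(20));
(db1) mysql> INSERT INTO repl_test.t1 (note) VALUES ('from db1');
(db2) mysql> INSERT INTO repl_test.t1 (note) VALUES ('from db2');
(db3) mysql> SELECT * FROM repl_test.t1;    -- both rows should appear here and on db4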


6. Install MMM

Create user

Optional: Create a user that will be the owner of the MMM scripts and configuration files. This provides an easier way to securely manage the monitor scripts.

useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

Monitoring host

First install dependencies:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb

Database hosts

On Ubuntu, first install the dependencies:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb

On Red Hat:

yum install -y mysql-mmm-agent

This will take care of all the dependencies, which may include:

Installed:

mysql-mmm-agent.noarch 0:2.2.1-1.el5                                          

Dependency Installed:

libart_lgpl.x86_64 0:2.3.17-4
mysql-mmm.noarch 0:2.2.1-1.el5
perl-Algorithm-Diff.noarch 0:1.1902-2.el5
perl-DBD-mysql.x86_64 0:4.008-1.rf
perl-DateManip.noarch 0:5.44-1.2.1
perl-IPC-Shareable.noarch 0:0.60-3.el5
perl-Log-Dispatch.noarch 0:2.20-1.el5
perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5
perl-Log-Log4perl.noarch 0:1.13-2.el5
perl-MIME-Lite.noarch 0:3.01-5.el5
perl-Mail-Sender.noarch 0:0.8.13-2.el5.1
perl-Mail-Sendmail.noarch 0:0.79-9.el5.1
perl-MailTools.noarch 0:1.77-1.el5
perl-Net-ARP.x86_64 0:1.0.6-2.1.el5
perl-Params-Validate.x86_64 0:0.88-3.el5
perl-Proc-Daemon.noarch 0:0.03-1.el5
perl-TimeDate.noarch 1:1.16-5.el5
perl-XML-DOM.noarch 0:1.44-2.el5
perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1
perl-XML-RegExp.noarch 0:0.03-2.el5
rrdtool.x86_64 0:1.2.27-3.el5
rrdtool-perl.x86_64 0:1.2.27-3.el5

Configure MMM

All generic configuration-options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. This file will be the same on all hosts in the system:

active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmmd_agent.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        replication
    replication_password    replication_password
    agent_user              mmm_agent
    agent_password          agent_password
</host>

<host db1>
    ip                      192.168.0.11
    mode                    master
    peer                    db2
</host>

<host db2>
    ip                      192.168.0.12
    mode                    master
    peer                    db1
</host>

<host db3>
    ip                      192.168.0.13
    mode                    slave
</host>

<host db4>
    ip                      192.168.0.14
    mode                    slave
</host>

<role writer>
    hosts                   db1, db2
    ips                     192.168.0.100
    mode                    exclusive
</role>

<role reader>
    hosts                   db1, db2, db3, db4
    ips                     192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    mode                    balanced
</role>

Don't forget to copy this file to all other hosts (including the monitoring host).
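For example, from the host where you edited the file (assuming root SSH access; adjust the user and destinations as needed):

for host in 192.168.0.10 192.168.0.11 192.168.0.12 192.168.0.13 192.168.0.14; do
    scp /etc/mysql-mmm/mmm_common.conf root@$host:/etc/mysql-mmm/
done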

On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change “db1” accordingly on the other hosts:

include mmm_common.conf
this db1

On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:

include mmm_common.conf

<monitor>
    ip                      127.0.0.1
    pid_path                /var/run/mmmd_mon.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path             /var/lib/misc/mmmd_mon.status
    ping_ips                192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>

<host default>
    monitor_user            mmm_monitor
    monitor_password        monitor_password
</host>

debug 0

ping_ips is a list of IPs that are pinged to determine whether the monitor's network connection is OK. I used my switch (192.168.0.1) and the four database servers.

7. Start MMM


Start the agents

(On the database hosts)

Debian/Ubuntu

Edit /etc/default/mysql-mmm-agent to enable the agent:

ENABLED=1
Red Hat

RHEL/Fedora does not enable packages to start at boot time per its default policy, so you may have to turn this on manually to make the agent start automatically when the server is rebooted:

chkconfig mysql-mmm-agent on

Then start it:

/etc/init.d/mysql-mmm-agent start

Start the monitor

(On the monitoring host) Edit /etc/default/mysql-mmm-monitor to enable the monitor:

ENABLED=1

Then start it:

/etc/init.d/mysql-mmm-monitor start

Wait a few seconds for mmmd_mon to start up, then use mmm_control to check the status of the cluster:

mon$ mmm_control show
  db1(192.168.0.11) master/AWAITING_RECOVERY. Roles:
  db2(192.168.0.12) master/AWAITING_RECOVERY. Roles:
  db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles:
  db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles:

Because it's the first startup, the monitor does not know our hosts, so it sets all hosts to state AWAITING_RECOVERY and logs a warning message:

mon$ tail /var/log/mysql-mmm/mmm_mon.warn
…
2009/10/28 23:15:28  WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.

Now we set our hosts online (db1 first, because the slaves replicate from this host):

mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
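After a short wait the roles should be distributed and mmm_control show should report all hosts as ONLINE. The exact assignment of reader IPs may differ, but the output will look roughly like this:

mon$ mmm_control show
  db1(192.168.0.11) master/ONLINE. Roles: writer(192.168.0.100), reader(192.168.0.101)
  db2(192.168.0.12) master/ONLINE. Roles: reader(192.168.0.102)
  db3(192.168.0.13) slave/ONLINE. Roles: reader(192.168.0.103)
  db4(192.168.0.14) slave/ONLINE. Roles: reader(192.168.0.104)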
Reference: http://mysql-mmm.org/mmm2:guide
           
