Postgres-XL Installation

 Cluster deployment based on pgxc_ctl

Host layout

IP               Hostname      Component      Port(s)
192.168.43.100   postgresql    gtm            20001
192.168.43.154   postgresql2   gtm-proxy1     20001
                               coordinator1   20004 (pooler 20010)
                               datanode1      20008 (pooler 20012)
192.168.43.155   postgresql3   gtm-proxy2     20001
                               coordinator2   20005 (pooler 20011)
                               datanode2      20009 (pooler 20013)

Modify /etc/hosts on all machines
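
For example, based on the layout above, append the following mappings on every node (adjust the hostnames if yours differ):

192.168.43.100  postgresql
192.168.43.154  postgresql2
192.168.43.155  postgresql3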

Disable the firewall and SELinux

[root@Postgresql ~]# systemctl stop firewalld.service   # stop firewalld

[root@Postgresql ~]# systemctl disable firewalld.service  # disable start on boot
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
 
[root@Postgresql ~]# vim /etc/selinux/config
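
In that file, set SELinux to disabled; the file change takes effect after a reboot, while setenforce turns enforcement off for the current session:

SELINUX=disabled

[root@Postgresql ~]# setenforce 0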

Install base dependency packages

# yum install -y flex bison readline-devel zlib-devel openjade docbook-style-dsssl git gcc 

Create the postgres group and user on every node

[root@Postgresql ~]# groupadd postgres

[root@Postgresql ~]# useradd -m -d /home/postgres postgres -g postgres

[root@Postgresql ~]# passwd postgres

Configure passwordless SSH login

[postgres@Postgresql ~]$ ssh-keygen -t rsa

[postgres@Postgresql ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[postgres@Postgresql ~]$ scp ~/.ssh/authorized_keys postgres@postgresql2:~/.ssh/

[postgres@Postgresql ~]$ scp ~/.ssh/authorized_keys postgres@postgresql3:~/.ssh/

[root@Postgresql ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@postgresql

[root@Postgresql ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@postgresql2

[root@Postgresql ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@postgresql3

Permissions must be adjusted on all three machines

[postgres@Postgresql ~]$ chmod 755 /home/postgres

[postgres@Postgresql ~]$ chmod 700 /home/postgres/.ssh

[postgres@Postgresql ~]$ chmod 600 /home/postgres/.ssh/authorized_keys
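
pgxc_ctl drives every node over SSH, so it is worth confirming that passwordless login works before continuing, for example:

[postgres@Postgresql ~]$ ssh postgres@postgresql2 hostname
[postgres@Postgresql ~]$ ssh postgres@postgresql3 hostname

Each command should print the remote hostname without asking for a password.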

Upload the Postgres-XL tarball and extract it (XL must be installed on all three machines)

[root@Postgresql ~]# mkdir /data

[root@Postgresql ~]# cd /data

[root@Postgresql data]# ls
postgres-xl-10r1.1.tar.gz

# after extraction
[root@Postgresql data]# ls
postgres-xl-10r1.1  postgres-xl-10r1.1.tar.gz
 
[root@Postgresql data]# chown -R postgres:postgres postgres-xl-10r1.1

[root@Postgresql data]# su - postgres
Last login: Tue Jun 25 16:00:53 CST 2019 from localhost on pts/3

[postgres@Postgresql ~]$ cd /data/postgres-xl-10r1.1/

[postgres@Postgresql postgres-xl-10r1.1]$ mkdir -p /home/postgres/pgxl10

[postgres@Postgresql postgres-xl-10r1.1]$ ./configure --prefix=/home/postgres/pgxl10

[postgres@Postgresql postgres-xl-10r1.1]$ make

[postgres@Postgresql postgres-xl-10r1.1]$ make install

[postgres@Postgresql postgres-xl-10r1.1]$ cd contrib/

[postgres@Postgresql contrib]$ make

[postgres@Postgresql contrib]$ make install

Configure basic environment variables

[postgres@Postgresql ~]$ vim .bashrc
export PGHOME=/home/postgres/pgxl10
export LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH
export PATH=$PGHOME/bin:$PATH

[postgres@Postgresql ~]$ source .bashrc

[postgres@Postgresql ~]$ echo $PGHOME
/home/postgres/pgxl10
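
To confirm that the freshly built binaries are the ones picked up from PATH, check where pgxc_ctl resolves and what version the server binary reports (it should mention Postgres-XL 10r1.1):

[postgres@Postgresql ~]$ which pgxc_ctl
/home/postgres/pgxl10/bin/pgxc_ctl
[postgres@Postgresql ~]$ postgres --version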

The base environment is now installed. The remaining steps configure the cluster and only need to be run from the server hosting the GTM.

 

Generate the pgxc_ctl configuration file under the postgres user's home directory, and create the cluster file and backup directories

$ pgxc_ctl              --- on first run it reports an Error about a missing configuration file; this can be ignored
PGXC prepare            --- generates a configuration file template
PGXC exit               --- leaves the pgxc_ctl interactive shell
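
The template is written to $HOME/pgxc_ctl/pgxc_ctl.conf. The backup directory referenced later by configBackupDir is not created automatically, so create it now (assuming the default $HOME/pgxc_bak path from the configuration below is kept):

[postgres@Postgresql ~]$ ls ~/pgxc_ctl/pgxc_ctl.conf
/home/postgres/pgxc_ctl/pgxc_ctl.conf
[postgres@Postgresql ~]$ mkdir -p ~/pgxc_bak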

Edit the pgxc_ctl.conf configuration

[postgres@Postgresql ~]$ cd pgxc_ctl/

[postgres@Postgresql pgxc_ctl]$ vim pgxc_ctl.conf
#---- OVERALL -----------------------------------------------------------------------------
#
pgxcOwner=postgres			# OS account and database account used to operate the XC cluster.
							# The database account must have superuser privileges, and the user must also be
							# the superuser of the coordinator and datanode instances.
							# The postgres user is used directly here.
pgxcUser=$pgxcOwner		# Superuser of the XC cluster; simply reuse the account defined above.

tmpDir=/tmp					# Temporary directory used by the XC cluster
localTmpDir=$tmpDir			# Temporary directory used locally

configBackup=y					# Whether to back up the configuration file
configBackupHost=pgxc-linker	# Host to back up the config file to
configBackupDir=$HOME/pgxc_bak		# Backup directory
configBackupFile=pgxc_ctl.bak	# Backup file name --> needs to be synchronized when the original changes.

#---- GTM ------------------------------------------------------------------------------------

# GTM is mandatory.  You must have at least (and only) one GTM master in your Postgres-XC cluster.
# If GTM crashes and you need to reconfigure it, you can do it by pgxc_update_gtm command to update
# GTM master with others.   Of course, we provide pgxc_remove_gtm command to remove it.  This command
# will not stop the current GTM.  It is up to the operator.
# The GTM configuration is mandatory: an XC cluster has at least one GTM, typically one GTM master plus one standby.
# If the GTM crashes it can be replaced with pgxc_update_gtm, and removed with pgxc_remove_gtm; the remove command
# does not stop the running GTM, that is left to the operator.

#---- GTM Master -----------------------------------------------

#---- Overall ----
gtmName=gtm                        # Name of the GTM
gtmMasterServer=192.168.43.100     # Server hosting the GTM; since /etc/hosts already maps hostnames to IPs, either a hostname or an IP works here
gtmMasterPort=20001                # Port the GTM listens on
gtmMasterDir=$HOME/pgxc/nodes/gtm  # Directory of the GTM

#---- Configuration ---
gtmExtraConfig=none			# gtm.conf fragment applied to both the GTM master and slave, used only at initialization; none by default
gtmMasterSpecificExtraConfig=none
							# Initialization config file for the GTM master only; none by default. The basic parameters are already
							# set above; to customize the GTM further, create a gtm.conf and reference it here.

#---- GTM Slave -----------------------------------------------

# Because GTM is a key component to maintain database consistency, you may want to configure GTM slave
# for backup.

#---- Overall ------
gtmSlave=n					# Whether to configure a GTM slave (disabled here); if this is not y,
							# all the following variables will be reset.
gtmSlaveName=gtmSlave       # GTM slave name
gtmSlaveServer=node12		# Server hosting the GTM slave; a hostname can be used if it is mapped in /etc/hosts, otherwise use an IP address
gtmSlavePort=20001			# GTM slave port
gtmSlaveDir=$HOME/pgxc/nodes/gtm	# GTM slave deployment directory
# Without a GTM slave there is no effective failover when the GTM master fails.
# An administrator can add one later with pgxc_add_gtm_slave.

#---- Configuration ----
gtmSlaveSpecificExtraConfig=none # gtm.conf fragment used when initializing the GTM slave

#---- GTM Proxy -------------------------------------------------------------------------------------------------------
# GTM proxy will be selected based upon which server each component runs on.
# When fails over to the slave, the slave inherits its master's gtm proxy.  It should be
# reconfigured based upon the new location.
#
# To do so, slave should be restarted.   So pg_ctl promote -> (edit postgresql.conf and recovery.conf) -> pg_ctl restart
#
# You don't have to configure GTM Proxy if you don't configure GTM slave or you are happy if every component connects
# to GTM Master directly.  If you configure GTM slave, you must configure GTM proxy too.

#---- Shortcuts ------
gtmProxyDir=$HOME/pgxc/nodes/gtm_pxy # Directory for the GTM proxies

#---- Overall -------
gtmProxy=y				# Whether to configure GTM proxies; if this is not set to y, the following parameters
						# are reset to default empty values.
						# If no valid proxy server name is found (i.e. every entry is none), gtmProxy is forced
						# back to "n" and all the entries are emptied.
gtmProxyNames=(gtm_pxy1 gtm_pxy2)	# Proxy node names
gtmProxyServers=(192.168.43.154 192.168.43.155)			# Servers hosting the proxies; several can be listed, here one on each datanode server
gtmProxyPorts=(20001 20001)
						# Proxy listen ports; supply one port per server listed above, in the same order
gtmProxyDirs=($gtmProxyDir $gtmProxyDir)	# Proxy directory on each server; the same path is reused everywhere for easy lookup

#---- Configuration ----
gtmPxyExtraConfig=none		# gtm_proxy.conf fragment applied to all proxies
gtmPxySpecificExtraConfig=(none none) # Per-proxy configuration files

#---- Coordinators ----------------------------------------------------------------------------------------------------

#---- shortcuts ----------
coordMasterDir=$HOME/pgxc/nodes/coord
coordSlaveDir=$HOME/pgxc/nodes/coord_slave
coordArchLogDir=$HOME/pgxc/nodes/coord_archlog

#---- Overall ------------
coordNames=(coord1 coord2)		# Master and slave use the same names
coordPorts=(20004 20005)			# Coordinator master ports
poolerPorts=(20010 20011)			# Master pooler ports
coordPgHbaEntries=(192.168.43.0/24)				# Address range accepted in pg_hba.conf by every coordinator (master/slave); only $pgxcOwner connections are accepted.
												# If you prefer not to open a whole range, use the coordExtraPgHba and coordSpecificExtraPgHba
												# parameters instead and list the specific IPs inside the parentheses.
#coordPgHbaEntries=(::1/128)	# Same as above but for IPv6 addresses

#---- Master -------------
coordMasterServers=(192.168.43.154 192.168.43.155)		# Servers hosting the coordinator masters
coordMasterDirs=($coordMasterDir $coordMasterDir)    # Data directories of the coordinator masters
coordMaxWALsernder=5	# max_wal_senders value; if set to 0, the parameter is taken from the extra config files below.
						# It may be 0 if no slaves are configured.
coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder)
				# max_wal_senders for each coordinator; simply reuse the value defined above for every node

#---- Slave -------------
coordSlave=n			# Must be y if any coordinator has a slave, otherwise n; when n, the slave settings below are reset to none

coordUserDefinedBackupSettings=n	# Specify whether to update backup/recovery
									# settings during standby addition/removal.

coordSlaveSync=y		# Connect the coordinator slaves in synchronous mode
coordSlaveServers=(192.168.43.154 192.168.43.155)			# Servers hosting the coordinator slaves
coordSlavePorts=(20004 20005)			# Slave ports
coordSlavePoolerPorts=(20010 20011)			# Slave pooler ports
coordSlaveDirs=($coordSlaveDir $coordSlaveDir)
coordArchLogDirs=($coordArchLogDir $coordArchLogDir)

#---- Configuration files---
# Specific non-default settings can be supplied here, either through the bash here-document below or through an extra pg_hba.conf.
coordExtraConfig=coordExtraConfig
# Extra configuration file for the coordinators (the "external file" mentioned above); its contents are appended to the
# postgresql.conf of every coordinator.
# The parameters below are a minimal set; add your own postgresql.conf settings here as needed.
cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF

# Additional configuration files applied to individual coordinator master nodes
# You can define each setting by similar means as above.
coordSpecificExtraConfig=(none none)
coordExtraPgHba=none	# Extra pg_hba.conf entries, appended to the pg_hba.conf of every coordinator
coordSpecificExtraPgHba=(none none)

#----- Additional Slaves -----
# The following settings describe additional, cascaded replication; they are not used in the version installed here.
# Please note that this section is just a suggestion how we extend the configuration for
# multiple and cascaded replication.   They're not used in the current version.
#
coordAdditionalSlaves=n		# Additional slave can be specified as follows: where you
coordAdditionalSlaveSet=(cad1)		# Each specifies set of slaves.   This case, two set of slaves are
											# configured
cad1_Sync=n		  		# All the slaves at "cad1" are connected with asynchronous mode.
							# If not, specify "y"
							# The following lines specifies detailed configuration for each
							# slave tag, cad1.  You can define cad2 similarly.
cad1_Servers=(node08 node09 node06 node07)	# Hosts
cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
cad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)


#---- Datanodes -------------------------------------------------------------------------------------------------------

#---- Shortcuts --------------
datanodeMasterDir=$HOME/pgxc/nodes/dn_master
datanodeSlaveDir=$HOME/pgxc/nodes/dn_slave
datanodeArchLogDir=$HOME/pgxc/nodes/datanode_archlog

#---- Overall ---------------
#primaryDatanode=datanode1				# Primary Node.
# At present, XC has a problem issuing ALTER NODE against the primary node.  Until it is fixed, the test will be done
# without this feature.
primaryDatanode=datanode1				# Primary datanode
datanodeNames=(datanode1 datanode2)     # Names of all datanodes
datanodePorts=(20008 20009)	# Datanode master ports
datanodePoolerPorts=(20012 20013)	# Datanode master pooler ports
datanodePgHbaEntries=(192.168.43.0/24)	# Address range accepted in pg_hba.conf by every datanode; only $pgxcOwner connections are accepted.
										# If you prefer not to open a whole range, use the datanodeExtraPgHba and datanodeSpecificExtraPgHba
										# parameters instead and list the specific IPs inside the parentheses.

#---- Master ----------------
datanodeMasterServers=(192.168.43.154 192.168.43.155)	# Server of every datanode master; this must not be set to none,
													# otherwise the cluster cannot run: it would mean the master exists but is down,
													# and the cluster is not operational until the master is recovered and ready to run.
datanodeMasterDirs=($datanodeMasterDir $datanodeMasterDir)
datanodeMaxWalSender=5								# max_wal_senders value; if 0, it has to be supplied via the extra config files.
													# It may be 0 if no slaves are configured.
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender)
						# max_wal_senders for each datanode

#---- Slave -----------------
datanodeSlave=n			# Whether datanode slaves are enabled; set y to enable, in which case each datanode should have at least one slave configured

datanodeUserDefinedBackupSettings=n	# Specify whether to update backup/recovery
									# settings during standby addition/removal.

datanodeSlaveServers=(192.168.43.154 192.168.43.155)	# value none means this slave is not available
datanodeSlavePorts=(20008 20009)	# value none means this slave is not available
datanodeSlavePoolerPorts=(20012 20013)	# value none means this slave is not available
datanodeSlaveSync=y		# Whether the datanode slaves replicate in synchronous mode
datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir)
datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir)

# ---- Configuration files ---
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries here.
# These files will go to corresponding files for the master.
# Or you may supply these files manually.
datanodeExtraConfig=none	# Extra datanode config file; its contents are appended to the postgresql.conf of every datanode
datanodeSpecificExtraConfig=(none none)
datanodeExtraPgHba=none		# Extra pg_hba.conf entries, appended to the pg_hba.conf of every datanode
datanodeSpecificExtraPgHba=(none none)

#----- Additional Slaves -----
datanodeAdditionalSlaves=n	# Whether additional slave sets are configured
# datanodeAdditionalSlaveSet=(dad1 dad2)		# Each specifies set of slaves.   This case, two set of slaves are
											# configured
# dad1_Sync=n		  		# All the slaves at "cad1" are connected with asynchronous mode.
							# If not, specify "y"
							# The following lines specifies detailed configuration for each
							# slave tag, cad1.  You can define cad2 similarly.
# dad1_Servers=(node08 node09 node06 node07)	# Hosts
# dad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
# dad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
# dad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
# dad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)

#---- WAL archives -------------------------------------------------------------------------------------------------
walArchive=y	# WAL archiving; set y to enable
				# Pgxc_ctl assumes that if you configure WAL archive, you configure it
				# for all the coordinators and datanodes.
				# Default is "no".   Please specify "y" here to turn it on.
#
#		End of Configuration Section
#
#==========================================================================================================================

Initialize and start the cluster (run on the GTM host)

[postgres@Postgresql ~]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf init all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
... (output truncated) ...
 pgxc_pool_reload
------------------
 t
(1 row)

Done.
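
Initialization also starts every component. For later maintenance, the whole cluster can be stopped and started again from the same host with pgxc_ctl, for example:

[postgres@Postgresql ~]$ pgxc_ctl stop all
[postgres@Postgresql ~]$ pgxc_ctl start all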

Check the status of all nodes

[postgres@Postgresql ~]$ pgxc_ctl monitor all

/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /home/postgres/pgxc_ctl
Running: gtm master
Running: gtm proxy gtm_pxy1
Running: gtm proxy gtm_pxy2
Running: coordinator master coord1
Running: coordinator master coord2
Running: datanode master datanode1
Running: datanode master datanode2

Verification: list all the cluster nodes

[postgres@postgresql2 ~]$ psql -p 20004
psql (11.3, server 10.6 (Postgres-XL 10r1.1))
Type "help" for help.

postgres=# select * from pgxc_node;
 node_name | node_type | node_port |   node_host    | nodeis_primary | nodeis_preferred |   node_id   
-----------+-----------+-----------+----------------+----------------+------------------+-------------
 coord1    | C         |     20004 | 192.168.43.154 | f              | f                |  1885696643
 coord2    | C         |     20005 | 192.168.43.155 | f              | f                | -1197102633
 datanode1 | D         |     20008 | 192.168.43.154 | t              | t                |   888802358
 datanode2 | D         |     20009 | 192.168.43.155 | f              | f                |  -905831925
(4 rows)
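
As an optional smoke test (the table name t_dist is only an illustration, not part of the original setup), a hash-distributed table can be created through one coordinator and queried through the other; DDL is propagated to all nodes, so the count should come back through either coordinator:

[postgres@postgresql2 ~]$ psql -p 20004 -c "CREATE TABLE t_dist (id int, val text) DISTRIBUTE BY HASH (id);"
[postgres@postgresql2 ~]$ psql -p 20004 -c "INSERT INTO t_dist SELECT g, 'row ' || g FROM generate_series(1, 1000) g;"
[postgres@postgresql3 ~]$ psql -p 20005 -c "SELECT count(*) FROM t_dist;"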

 
