Introduction
Ansible roles provide a layered, structured way to organize playbooks.
A role works by placing variables, files, tasks, modules, and handlers into separate directories, so that this content can be managed and reused as one unit.
https://galaxy.ansible.com
Directory structure:
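The skeleton that ansible-galaxy init generates looks roughly like this (shown for the apache role created in step 1 below; the exact files vary slightly between Ansible versions):

apache/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml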
1. Initialize the roles with ansible-galaxy
[devops@server1 ansible]$ mkdir roles
[devops@server1 roles]$ ansible-galaxy init apache
[devops@server1 roles]$ ansible-galaxy init haproxy
[devops@server1 roles]$ ansible-galaxy init keepalived
2. Add the roles path to ansible.cfg
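The setting involved is roles_path in the [defaults] section. A minimal snippet, assuming ansible.cfg lives in the ansible/ project directory next to the roles directory created above:

[defaults]
roles_path = ./roles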
3. apache:
tasks:
---
# tasks file for apache
- name: install httpd
  yum:
    name: httpd
    state: present

- name: copy index.html
  copy:
    content: "{{ ansible_facts['hostname'] }}"
    dest: /var/www/html/index.html

- name: configure httpd
  template:
    src: httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
    owner: root
    group: root
    mode: '0644'
  notify: restart httpd

- name: start httpd and firewalld
  service:
    name: "{{ item }}"
    state: started
  loop:
    - httpd
    - firewalld

- name: configure firewalld
  firewalld:
    service: http
    permanent: yes
    immediate: yes
    state: enabled
handlers:
[devops@server1 apache]$ cat handlers/main.yml
---
# handlers file for apache
- name: restart httpd
  service:
    name: httpd
    state: restarted
templates:
Listen {{ http_host }}:{{ http_port }}
vars:
[devops@server1 apache]$ cat vars/main.yml
---
# vars file for apache
http_host: "{{ ansible_facts['default_ipv4']['address'] }}"
http_port: 80
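To confirm that the fact behind http_host actually resolves on the managed hosts, a quick ad-hoc call of the setup module can be used (the webserver group is defined in the inventory shown at the end):

[devops@server1 ansible]$ ansible webserver -m setup -a "filter=ansible_default_ipv4"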
haproxy:
tasks:
[devops@server1 haproxy]$ cat tasks/main.yml
---
# tasks file for haproxy
- name: install haproxy
  yum:
    name: haproxy
    state: present

- name: configure haproxy
  template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
  notify: restart haproxy

- name: start haproxy
  service:
    name: haproxy
    state: started
handlers:
[devops@server1 haproxy]$ cat handlers/main.yml
---
# handlers file for haproxy
- name: restart haproxy
  service:
    name: haproxy
    state: restarted
vars:
[devops@server1 haproxy]$ cat vars/main.yml
---
# vars file for haproxy
templates:
[devops@server1 haproxy]$ cat templates/haproxy.cfg.j2
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000
    stats uri /status
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:80
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance roundrobin
{% for host in groups['webserver'] %}
    server {{ hostvars[host]['ansible_facts']['hostname'] }} {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}:80 check
{% endfor %}
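With the inventory at the end of this article, the loop emits one server line per host in the webserver group. Rendered, the backend ends up looking roughly like the following (the addresses are illustrative; they come from each host's gathered eth0 facts):

backend app
    balance roundrobin
    server server2 172.25.14.2:80 check
    server server3 172.25.14.3:80 check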
keepalived:
tasks:
[devops@server1 keepalived]$ cat tasks/main.yml
---
# tasks file for keepalived
- name: install keepalived
  yum:
    name: keepalived
    state: present

- name: configure keepalived
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: restart keepalived

- name: start keepalived
  service:
    name: keepalived
    state: started
handlers:
[devops@server1 keepalived]$ cat handlers/main.yml
---
# handlers file for keepalived
- name: restart keepalived
  service:
    name: keepalived
    state: restarted
templates:
[devops@server1 keepalived]$ cat templates/keepalived.conf.j2
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.14.100
    }
}
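STATE, VRID, and PRIORITY are not set in the role; they come from the host variables in the inventory below, so the same template produces different vrrp_instance settings on each load balancer. On server1, for example, the relevant lines render as:

    state MASTER
    virtual_router_id 10
    priority 100

while server4 gets state BACKUP and priority 50, which is why server1 holds the VIP as long as it is alive.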
Main entry playbook
[devops@server1 ansible]$ cat apache.yml
---
- hosts: all
  tasks:
    - import_role:
        name: apache
      when: ansible_hostname in groups['webserver']

    - import_role:
        name: haproxy
      when: ansible_hostname in groups['lb']

    - import_role:
        name: keepalived
      when: ansible_hostname in groups['lb']
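Before pushing, the playbook can be checked first; --syntax-check only parses it, and --check does a dry run that reports what would change without touching the hosts:

[devops@server1 ansible]$ ansible-playbook apache.yml --syntax-check
[devops@server1 ansible]$ ansible-playbook apache.yml --check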
Host groups (inventory):
[lb]
server1 STATE=MASTER VRID=10 PRIORITY=100
server4 STATE=BACKUP VRID=10 PRIORITY=50
[test]
server2
[prod]
server3
[webserver:children]
test
prod
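A quick way to confirm that webserver really expands to server2 and server3 through the children groups:

[devops@server1 ansible]$ ansible webserver --list-hosts
[devops@server1 ansible]$ ansible lb --list-hosts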
Once everything is in place, push the configuration with ansible-playbook.
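A minimal invocation, assuming this inventory is the one configured in ansible.cfg (otherwise pass it explicitly with -i):

[devops@server1 ansible]$ ansible-playbook apache.yml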
Check the VIP: it sits on server1.
When server1 goes down, server4 takes over and the service keeps running.
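One way to verify the failover by hand (the commands are illustrative; the VIP 172.25.14.100 is the one defined in the keepalived template):

ip addr show eth0            # on server1: the VIP 172.25.14.100 should be listed
curl http://172.25.14.100    # answers with the hostname of one of the web servers
systemctl stop keepalived    # on server1: simulate the failure
ip addr show eth0            # on server4: the VIP has moved here
curl http://172.25.14.100    # the service still responds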