Environment setup (RHEL 7.3)
| Host | IP | Role |
| --- | --- | --- |
| base2 | 172.25.78.12 | salt-master |
| base3 | 172.25.78.13 | salt-minion, httpd server |
| base4 | 172.25.78.14 | salt-minion, nginx server |
| base5 | 172.25.78.15 | salt-minion, haproxy + keepalived |
1. Building the environment
These are all the required packages and their dependencies:
libsodium-1.0.16-1.el7.x86_64.rpm
PyYAML-3.11-1.el7.x86_64.rpm
openpgm-5.2.122-2.el7.x86_64.rpm
python2-libcloud-2.0.0-2.el7.noarch.rpm
salt-2018.3.3-1.el7.noarch.rpm
python-cherrypy-5.6.0-2.el7.noarch.rpm
salt-api-2018.3.3-1.el7.noarch.rpm
python-crypto-2.6.1-2.el7.x86_64.rpm
salt-cloud-2018.3.3-1.el7.noarch.rpm
python-futures-3.0.3-1.el7.noarch.rpm
salt-master-2018.3.3-1.el7.noarch.rpm
python-msgpack-0.4.6-1.el7.x86_64.rpm
salt-minion-2018.3.3-1.el7.noarch.rpm
python-psutil-2.2.1-1.el7.x86_64.rpm
salt-ssh-2018.3.3-1.el7.noarch.rpm
python-tornado-4.2.1-1.el7.x86_64.rpm
salt-syndic-2018.3.3-1.el7.noarch.rpm
python-zmq-15.3.0-3.el7.x86_64.rpm
zeromq-4.1.4-7.el7.x86_64.rpm
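These rpms were served over HTTP from 172.25.78.254 in this setup. If you have to build such a local repository yourself, a sketch along these lines works (the directory name and web root below are assumptions, not taken from this environment):
# on the host that will serve the packages (needs createrepo and a running web server)
mkdir /var/www/html/2018
cp *.rpm /var/www/html/2018
createrepo /var/www/html/2018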
- Perform the same steps on every minion
[root@base5 ~]# vim /etc/yum.repos.d/yum.repo # this repo points at my own server; if you do not have a third-party Salt yum repository of your own, download the packages listed above from the internet
[rhel7.3]
name=rhel7.3
baseurl=http://172.25.78.254/rhel7.3
gpgcheck=0
[salt]
name=salt2018
baseurl=http://172.25.78.254/2018
gpgcheck=0
[root@base5 ~]# yum install -y salt-minion
[root@base5 ~]# vim /etc/salt/minion
master: 172.25.78.12 # the master's IP
[root@base5 ~]# systemctl start salt-minion
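Before moving on to the master, it can be worth confirming that the minion service came up and checking the minion id it registered; by default the id falls back to the hostname, which the Jinja templates in section 3 rely on. This check is not part of the original flow:
[root@base5 ~]# systemctl status salt-minion
[root@base5 ~]# cat /etc/salt/minion_id # should read base5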
- On the master
[root@base2 ~]# systemctl start salt-master
[root@base2 ~]# salt-key -L # list all minion keys known to the master
[root@base2 ~]# salt-key -A # accept all pending keys
[root@base2 ~]# salt-key -L # confirm the keys are now accepted
[root@base2 ~]# salt base5 test.ping # verify master-minion communication
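If the key exchange worked and all three minions were started, the output should look roughly like this (abbreviated; the exact layout varies between Salt releases):
[root@base2 ~]# salt-key -L
Accepted Keys:
base3
base4
base5
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@base2 ~]# salt base5 test.ping
base5:
    True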
2. Load balancing with haproxy
[root@base2 ~]# cd /srv/salt/
[root@base2 salt]# mkdir haproxy
[root@base2 salt]# cd haproxy/
[root@base2 haproxy]# vim install.sls # haproxy is available from the system repos, so we can simply install it
haproxy-install:
  pkg.installed:
    - pkgs:
      - haproxy
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://haproxy/files/haproxy.cfg
  service.running:
    - name: haproxy
    - reload: True
    - watch:
      - file: haproxy-install
[root@base2 haproxy]# mkdir files
[root@base2 haproxy]# salt base5 state.sls haproxy.install # on this first run only the pkg state can succeed, because files/haproxy.cfg has not been copied into place yet
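To confirm that the package really landed on base5, it can be queried straight from the master (an extra check, not in the original flow):
[root@base2 haproxy]# salt base5 pkg.version haproxy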
- Check on the haproxy node: a config file has been generated, which means the installation succeeded
[root@base5 ~]# cd /etc/haproxy/
[root@base5 haproxy]# ls
haproxy.cfg
[root@base5 haproxy]# scp haproxy.cfg [email protected]:/srv/salt/haproxy/files # the master needs a copy of haproxy.cfg in order to manage it remotely; in a real production environment the config file would be written on the master in the first place and then pushed to the clients in one step
- Back on the master, continue the configuration
[root@base2 haproxy]# cd files/
[root@base2 files]# ls
haproxy.cfg
[root@base2 files]# vim haproxy.cfg
stats uri /status                      # URI of the haproxy status page
frontend main *:80                     # the balancer listens on port 80
    default_backend app                # hand requests to the custom backend group named app

backend app                            # define the backend group named app
    balance roundrobin                 # weighted round-robin scheduling
    server app1 172.25.78.13:80 check  # apache backend server, with health checking
    server app3 172.25.78.14:80 check  # nginx backend server, with health checking
[root@base2 files]# salt base5 state.sls haproxy.install
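haproxy ships with a built-in configuration check, which can be run remotely through cmd.run to validate the pushed file (an extra step, not in the original flow):
[root@base2 files]# salt base5 cmd.run 'haproxy -c -f /etc/haproxy/haproxy.cfg'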
- On the haproxy node
[root@base5 haproxy]# systemctl status haproxy # check the service status and make sure it is running
[root@base3 ~]# cd /var/www/html/
[root@base3 html]# vim index.html # write the apache test page
<h1>base3 --- apache </h1>
[root@base4 ~]# cd /usr/local/nginx/html/
[root@base4 html]# vim index.html # write the nginx test page
<h1>base4 --- nginx </h1>
Test the load balancing
haproxy has built-in health checking: when one backend server goes down, the other one keeps serving requests unaffected
[root@base3 html]# systemctl stop httpd
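With httpd stopped, every request to the balancer should now be answered by the nginx backend; a couple of curl calls make that visible (run from any host that can reach 172.25.78.15; once httpd is started again the two pages alternate):
[root@base2 ~]# curl -s 172.25.78.15
<h1>base4 --- nginx </h1>
[root@base2 ~]# curl -s 172.25.78.15
<h1>base4 --- nginx </h1>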
[root@base2 ~]# cd /srv/salt/
[root@base2 salt]# vim top.sls
base:
  'base2':
    - haproxy.install
  'base5':
    - haproxy.install
  'base3':
    - httpd.service
  'base4':
    - nginx.service
[root@base2 salt]# salt 'base[34]' state.highstate # apply the httpd and nginx states to base3 and base4
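top.sls refers to httpd.service and nginx.service, which are assumed to exist already and are not shown in this section. As a rough idea, httpd/service.sls could look like the sketch below (the real file may differ; nginx was built from source under /usr/local/nginx, so its state file is more involved and is not sketched here):
apache:
  pkg.installed:
    - pkgs:
      - httpd
  service.running:
    - name: httpd
    - enable: True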
3. High availability with keepalived + haproxy
[root@base2 keepalived]# vim /etc/salt/minion # base2 now also acts as a minion of itself, so the master can push haproxy and keepalived to it
master: 172.25.78.12
[root@base2 keepalived]# systemctl restart salt-minion
[root@base2 keepalived]# salt-key -L
[root@base2 keepalived]# salt-key -A
[root@base2 keepalived]# salt-key -L
[root@base2 salt]# mkdir keepalived
[root@base2 salt]# cd keepalived/
[root@base2 keepalived]# vim install.sls
kp-install:
  pkg.installed:
    - pkgs:
      - keepalived
[root@base2 keepalived]# salt base5 state.sls keepalived.install
- Grab the default config file from the backup server
[root@base5 ~]# cd /etc/keepalived/
[root@base5 keepalived]# ls
keepalived.conf
[root@base5 keepalived]# scp keepalived.conf [email protected]:/srv/salt/keepalived/files
- Back on the master
[root@base2 keepalived]# mkdir files
[root@base2 keepalived]# ls files/
keepalived.conf
[root@base2 keepalived]# vim files/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.78.100
    }
}
[root@base2 keepalived]# vim install.sls
kp-install:
  pkg.installed:
    - pkgs:
      - keepalived
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja
    {% if grains['fqdn'] == 'base2' %}
    - context:
        STATE: MASTER
        VRID: 78
        PRIORITY: 100
    {% elif grains['fqdn'] == 'base5' %}
    - context:
        STATE: BACKUP
        VRID: 78
        PRIORITY: 50
    {% endif %}
  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: kp-install
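The Jinja branches key off the fqdn grain, so it is worth confirming what each minion actually reports before relying on it (a quick check, not in the original flow):
[root@base2 keepalived]# salt '*' grains.item fqdn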
[root@base2 keepalived]# cd ..
[root@base2 salt]# vim top.sls
base:
  'base2':
    - haproxy.install
    - keepalived.install
  'base5':
    - haproxy.install
    - keepalived.install
  'base3':
    - httpd.service
  'base4':
    - nginx.service
[root@base2 keepalived]# salt '*' state.highstate
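For later re-runs, state.highstate also accepts test=True, which reports what would change without applying anything; a dry run like this is a cheap sanity check:
[root@base2 keepalived]# salt '*' state.highstate test=True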
- Check that keepalived was installed and started successfully on both nodes
[root@base5 keepalived]# systemctl status keepalived
[root@base2 keepalived]# systemctl status keepalived
- Check that the VIP has been added automatically on the master
[root@base2 httpd]# ip a
Test the high-availability failover
[root@base2 httpd]# systemctl stop keepalived # stop keepalived on the master
[root@base2 httpd]# ip a # the VIP is gone here and has floated to the backup server
- Check the VIP on the backup server
[root@base5 ~]# ip a
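Because the MASTER instance has the higher priority and keepalived preempts by default, starting keepalived again on base2 should pull the VIP back (an extra check, not in the original flow):
[root@base2 httpd]# systemctl start keepalived
[root@base2 httpd]# ip a # the VIP should be back on the master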