Case overview
This case uses a four-layer design: a front-end reverse proxy layer, a Web layer, a database cache layer, and a database layer. The reverse proxy layer runs in active-standby mode, the Web layer runs as a cluster, the cache layer runs in master-slave mode, and the database layer runs in master-slave mode.
In the topology, solid lines are the data-flow connections under normal operation, and dashed lines are the data-flow connections when a failure occurs.
Case Environment
| Host name | IP address | Role |
| --- | --- | --- |
| Master | 192.168.10.157 | Front-end reverse proxy (active), Redis cache (master), MySQL master |
| Backup | 192.168.10.161 | Front-end reverse proxy (standby), Redis cache (slave), MySQL slave |
| Tomcat1 | 192.168.10.163 | Web services |
| Tomcat2 | 192.168.10.164 | Web services |
| Test machine | 192.168.10.134 | Testing |
The front end uses Keepalived as the high-availability software, with the virtual IP set to 192.168.10.100.
Because this is a learning/test environment, the firewall and SELinux are turned off on all of the above servers; in production, keeping the firewall on is recommended. All services use default configurations with no tuning, so handling larger traffic volumes requires further optimization.
In addition, for production it is recommended to put the MySQL partition on an XFS filesystem because, in most scenarios, its overall IOPS performance is higher, more stable, and lower-latency than EXT4.
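As a sketch of that recommendation, a dedicated XFS data partition could be prepared as follows; the device name /dev/sdb1 and the mount point are assumptions for illustration, not part of the case environment.

```shell
# Illustrative only -- /dev/sdb1 is an assumed device name.
# mkfs.xfs /dev/sdb1
# mkdir -p /var/lib/mysql
# mount /dev/sdb1 /var/lib/mysql
# Persist the mount across reboots with an fstab entry:
fstab_entry='/dev/sdb1  /var/lib/mysql  xfs  defaults,noatime  0 0'
echo "$fstab_entry"
```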
Installation steps
Deploy the two front-end reverse proxy servers
1. Install Keepalived and Nginx on both front-end reverse proxy servers

# Install the epel and nginx repositories
rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install -y keepalived nginx
2. Modify the keepalived configuration file
vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id NGINX_HA              # change this so the two proxies differ
}
vrrp_script nginx {
    script "/opt/shell/nginx.sh"    # trigger script
    interval 2                      # check every two seconds
}
vrrp_instance VI_1 {
    state MASTER                    # MASTER on the primary, BACKUP on the standby
    interface ens33
    virtual_router_id 51            # must be identical on both proxies
    priority 100                    # lower on the standby than on the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        nginx                       # reference the nginx script defined above
    }
    virtual_ipaddress {
        192.168.10.100              # VIP
    }
}
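For the standby proxy, only a few values differ, per the comments above; a sketch of the changed lines (the router_id name NGINX_HB and priority 90 are assumed examples, any distinct name and any priority lower than the master's 100 will do):

```
global_defs {
    router_id NGINX_HB          # assumed example; just distinct from the master's NGINX_HA
}
vrrp_instance VI_1 {
    state BACKUP                # standby role
    priority 90                 # assumed example; lower than the master's 100
    ...                         # everything else, including virtual_router_id 51, stays the same
}
```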
3. Write the trigger script, so that starting the Keepalived service brings Nginx up with it

mkdir /opt/shell
vi /opt/shell/nginx.sh

#!/bin/bash
k=`ps -ef | grep keepalived | grep -v grep | wc -l`
if [ $k -gt 0 ]; then
    /bin/systemctl start nginx.service
else
    /bin/systemctl stop nginx.service
fi

chmod +x /opt/shell/nginx.sh
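The script's start/stop decision can be factored into a small function for desk-testing before it is wired into Keepalived; nginx_action is a hypothetical helper name, not part of the deployment.

```shell
# Decide the desired nginx action from the keepalived process count.
nginx_action() {
    if [ "$1" -gt 0 ]; then
        echo start
    else
        echo stop
    fi
}

nginx_action 1   # prints: start
nginx_action 0   # prints: stop
```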
4. Configure the Nginx front-end scheduling function and start the service

vi /etc/nginx/nginx.conf

# Add the following above the existing include line:
upstream tomcat_pool {
    server 192.168.10.163:8080;    # back-end Tomcat server IP and port
    server 192.168.10.164:8080;
    ip_hash;                       # session stickiness; without it, logging in through the VIP fails
}
server {
    listen 80;
    server_name 192.168.10.100;    # VIP
    location / {
        proxy_pass http://tomcat_pool;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

nginx -t -c /etc/nginx/nginx.conf    # test the configuration file syntax
systemctl start keepalived.service   # start keepalived; nginx will follow after a moment
netstat -ntap | grep nginx
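To confirm which proxy currently holds the VIP, the output of `ip -4 addr` can be checked for the address; has_vip below is a hypothetical helper, shown here with sample input rather than a live interface.

```shell
# Return success if the given VIP appears in `ip -4 addr` output read from stdin.
has_vip() {
    grep -q "inet $1/"
}

sample='    inet 192.168.10.100/32 scope global ens33'
echo "$sample" | has_vip 192.168.10.100 && echo "VIP held here"
```

On a live proxy you would pipe `ip -4 addr show ens33` into the helper instead of the sample line.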
Deploy the two Tomcat servers
1. Install the JDK

tar xf jdk-8u144-linux-x64.tar.gz -C /opt
cd /opt
cp -rv jdk1.8.0_144/ /usr/local/java
vi /etc/profile          # add the environment variables
export JAVA_HOME=/usr/local/java
export JRE_HOME=/usr/local/java/jre
export PATH=$PATH:/usr/local/java/bin
export CLASSPATH=./:/usr/local/java/lib:/usr/local/java/jre/lib
source /etc/profile      # take effect immediately
java -version            # check the version
java version "1.8.0_144" …………
2. Install Tomcat
tar xf apache-tomcat-8.5.23.tar.gz -C /opt
cd /opt
cp -r apache-tomcat-8.5.23 /usr/local/tomcat8
ln -s /usr/local/tomcat8/bin/startup.sh /usr/bin/tomcatup
ln -s /usr/local/tomcat8/bin/shutdown.sh /usr/bin/tomcatdown   # create symlinks for easier management
3. Start Tomcat
tomcatup
netstat -anpt | grep 8080
# Use a browser to check that the default test page displays correctly:
http://192.168.10.163:8080/
http://192.168.10.164:8080/
Test http://192.168.10.163:8080/
4. Test scheduling across the two Tomcat nodes

vi /usr/local/tomcat8/webapps/ROOT/index.jsp   # modify the default page content

Test again after modifying the home page file
Test using the VIP
5. Modify the server.xml file

cd /usr/local/tomcat8/conf/
vi server.xml
# Jump to the end of the file; under the Host name line (line 148) add:
<Context path="" docBase="SLSaleSystem" reloadable="true" debug="0"></Context>
# debug="0" means minimal debug logging; docBase specifies the directory to serve.
# SLSaleSystem is the membership-mall site deployed later.
Install and deploy MySQL
Operate on the two front-end proxy servers
1. Install MariaDB and start it

yum install -y mariadb-server mariadb
systemctl start mariadb.service
systemctl enable mariadb.service
netstat -anpt | grep 3306
mysql_secure_installation   # routine security setup
mysql -u root -p            # log in with the password set above

2. Import the database

mysql -u root -p < slsaledb-2014-4-10.sql   # import the database
mysql -u root -p
show databases;   # the slsaledb database should now be listed
Grant privileges

GRANT all ON slsaledb.* TO 'root'@'%' IDENTIFIED BY 'abc123';
flush privileges;
3. Deploy the Web site on the Tomcat servers

# Extract the site archive into the Tomcat webapps directory
tar xf SLSaleSystem.tar.gz -C /usr/local/tomcat8/webapps/
cd /usr/local/tomcat8/webapps/SLSaleSystem/WEB-INF/classes
vi jdbc.properties   # change the database IP to the VIP, using the granted user root and password abc123

4. Test the Web site

http://192.168.10.163:8080/   # default user admin, password 123456
http://192.168.10.164:8080/
# Once the nodes check out, test through the virtual IP:
http://192.168.10.100   # log in via the VIP, then shut down the master and test the login again

Use the VIP to access post-login pages
Install and configure Redis master-slave on the two proxy servers
1. Install Redis and start it

# Install using the CentOS 7.4 default repositories
yum install -y epel-release
yum install redis -y
vi /etc/redis.conf
bind 0.0.0.0        # change it to listen on all addresses
systemctl start redis.service
netstat -anpt | grep 6379

2. Test connecting to the master

redis-cli -h 192.168.10.157 -p 6379    # test the connection
192.168.10.157:6379> set name test     # set key name to value test
192.168.10.157:6379> get name          # read the value back

3. On the slave, configure the following line in the configuration file

slaveof 192.168.10.157 6379   # line 266; use the master's real IP, not the virtual IP
Test
redis-cli -h 192.168.10.161 -p 6379   # log in to the slave; retrieving the value confirms replication works
192.168.10.161:6379> get name
"test"
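Replication state can also be checked from a script by parsing `info Replication`; redis_role is a hypothetical helper, shown here with canned output rather than a live server.

```shell
# Extract the role field from `redis-cli info Replication` output on stdin.
redis_role() {
    awk -F: '/^role:/ { gsub(/\r/, ""); print $2 }'
}

printf 'role:slave\r\nmaster_host:192.168.10.157\r\n' | redis_role   # prints: slave
```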
4. Configure the Redis connection parameters in the mall project

vi /usr/local/tomcat8/webapps/SLSaleSystem/WEB-INF/classes/applicationContext-mybatis.xml
38 <!-- redis configuration start -->
47 <constructor-arg value="192.168.10.100"/>   # VIP
48 <constructor-arg value="6379"/>

5. Test the cache effect

redis-cli -h 192.168.10.100 -p 6379
192.168.10.100:6379> info
keyspace_hits:1  or  keyspace_misses:2   # watch these values: the hit count and miss count
# Log in to the mall, repeatedly open pages that require the database,
# then come back and check how keyspace_hits / keyspace_misses have changed.
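Those two counters can be turned into a cache hit ratio; hit_ratio is a hypothetical helper, fed here with sample `info` output (it assumes at least one lookup has occurred).

```shell
# Compute the keyspace hit ratio from `redis-cli info stats` output on stdin.
hit_ratio() {
    awk -F: '/^keyspace_hits/   { h = $2 }
             /^keyspace_misses/ { m = $2 }
             END { printf "%.2f\n", h / (h + m) }'
}

printf 'keyspace_hits:8\nkeyspace_misses:2\n' | hit_ratio   # prints: 0.80
```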
6. Configure Redis sentinel master-slave failover (operate on the master only)

redis-cli -h 192.168.10.157 info Replication   # check the current server's role
vi /etc/redis-sentinel.conf
17 protected-mode no    # remove the # to enable this line
68 sentinel monitor mymaster 192.168.10.157 6379 1   # change the IP; the final 1 is the quorum (number of sentinels that must agree)
98 sentinel down-after-milliseconds mymaster 3000    # failover detection time, in milliseconds
systemctl start redis-sentinel.service   # start the sentinel
netstat -anpt | grep 26379
redis-cli -h 192.168.10.157 -p 26379 info Sentinel   # view sentinel status

Verifying master-slave switching

systemctl stop redis.service   # stop redis on the master
redis-cli -h 192.168.10.157 -p 26379 info Sentinel   # the master has switched to 192.168.10.161
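After a failover, the new master's address can also be pulled out of the sentinel output programmatically; current_master is a hypothetical helper, shown here with a sample `info Sentinel` line.

```shell
# Extract the master IP from a sentinel `info Sentinel` master0 line on stdin.
current_master() {
    sed -n 's/.*address=\([0-9.]*\):[0-9]*.*/\1/p'
}

printf 'master0:name=mymaster,status=ok,address=192.168.10.161:6379,slaves=1\n' | current_master
# prints: 192.168.10.161
```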
Verifying data synchronization

# Create data on the new master (the former slave)
redis-cli -h 192.168.10.161 -p 6379
192.168.10.161:6379> set name2 test2
OK
192.168.10.161:6379> get name2
"test2"
service redis start   # bring the old master back up
redis-cli -h 192.168.10.157 -p 6379
192.168.10.157:6379> get name2
"test2"
Configure MySQL master-slave synchronization
1. MySQL master server configuration

vi /etc/my.cnf   # add the following under [mysqld]
binlog-ignore-db=mysql,information_schema
character_set_server=utf8
log_bin=mysql_bin
server_id=1          # must differ on the slave
log_slave_updates=true
sync_binlog=1

systemctl restart mariadb   # restart the database
netstat -anpt | grep 3306
mysql -u root -p
show master status;   # record the log file name and position
grant replication slave on *.* to 'rep'@'192.168.10.%' identified by '123456';   # grant replication rights
flush privileges;

2. MySQL slave server configuration

vi /etc/my.cnf   # modify the following under [mysqld]; the rest is the same as the master
server_id=2

systemctl restart mariadb   # restart the database
netstat -anpt | grep 3306
mysql -u root -p
change master to master_host='192.168.10.157',master_user='rep',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;   # use the file name and position recorded on the master
start slave;   # start replication
show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
To verify, you can create a table in MySQL on the master and confirm that it appears on the slave.
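A minimal verification could look like the following; the statements are only printed here, with slsaledb used as the target database since it already exists from the earlier import. Pipe them into `mysql -u root -p` on the master, then repeat the SELECT on the slave (192.168.10.161).

```shell
# Hypothetical replication check: create and populate a throwaway table.
verify_sql='USE slsaledb;
CREATE TABLE repl_test (id INT);
INSERT INTO repl_test VALUES (1);
SELECT * FROM repl_test;'

echo "$verify_sql"   # on the master: echo "$verify_sql" | mysql -u root -p
```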
Million-PV website architecture case