Nginx works in a multi-process model, with one master process and multiple worker processes.
The master process manages the worker processes:
(1) it sends signals to each worker process;
(2) it monitors the workers, and when a worker exits abnormally it starts a new worker process.
The master process first creates a listening socket, then forks the worker processes, which inherit that socket. Generally speaking, when a new connection arrives, every process blocked in accept() on that socket is notified, but only one process actually accepts the connection; the others fail. ---> shared lock
This is the "thundering herd" problem: when an event fires, all threads/processes waiting on it are woken up, but only one can respond.
To solve this, Nginx introduced the concept of a shared lock (accept_mutex): with this lock, only one process at a time is in accept() on the listening socket.
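As a sketch, this shared lock is exposed in nginx.conf through the accept_mutex directive (on by default in nginx 1.8; the delay value below is illustrative, not a recommendation):

```nginx
events {
    accept_mutex on;           # only the worker holding the lock calls accept()
    accept_mutex_delay 500ms;  # how long a worker without the lock waits before retrying
    worker_connections 1024;
}
```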
Nginx is more efficient than Apache because Nginx uses an asynchronous, non-blocking model, while Apache uses a synchronous, blocking one.
Blocking and non-blocking
Blocking and non-blocking describe how an operation is performed: whether the caller waits until the operation finishes, or returns immediately.
For example, ordering at a restaurant, where the waiter passes the order to the chef:
(1) Blocking: the waiter waits at the window until the chef finishes the dish, then serves it; during that time the waiter can do nothing else.
(2) Non-blocking: the waiter goes off to do other things first, comes back to the window from time to time to ask whether the dish is ready, and keeps checking until it is.
Synchronous and asynchronous
Synchronous and asynchronous are properties of the event itself:
(1) Synchronous: the waiter deals with the chef directly; the waiter knows the moment the dish is ready, because the chef hands it to the waiter.
(2) Asynchronous: there is a food runner between the chef and the waiter; when a dish is done, the chef passes it to the runner's window, and the waiter may or may not be notified.
A synchronous operation can only be done in a blocking way.
An asynchronous operation can be done in either a blocking or a non-blocking way. Non-blocking can be an active poll or a passive notification; passive notification is more efficient than active polling.
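The analogy above can be sketched in a few lines of shell, using sleep as the slow chef (purely illustrative):

```shell
#!/bin/sh
# Blocking: the shell waits here until the "dish" is done.
sleep 1

# Non-blocking + passive receive: start the job in the background,
# do other work, then let wait notify us when it finishes.
sleep 1 &
pid=$!
echo "doing other work while the chef cooks"
wait "$pid"   # collect the result once it is ready
echo "dish served"
```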
Lab Environment:
Three RedHat 6.5 virtual machines:
Server1 172.25.44.1
Server2 172.25.44.2
Server3 172.25.44.3
Firewall and SELinux are off
Edit the /etc/hosts file on each host and add name resolution entries
Server1:
Using nginx 1.8.1:
tar zxf nginx-1.8.1.tar.gz    ### unpack the source
cd nginx-1.8.1                ### enter the source directory
vim src/core/nginx.h          ### hide the nginx version string, for security
#define NGINX_VER "nginx/"
vim auto/cc/gcc               ### disable debug mode (the binary is bloated if it stays on)
178 # debug
179 #CFLAGS="$CFLAGS -g"      ### comment out this line
yum install pcre-devel openssl-devel gcc gcc-c++ -y    ### resolve the build dependencies
# --prefix: install location
# --with-http_stub_status_module: status monitoring module
# --with-http_ssl_module: SSL support
./configure \
--prefix=/usr/local/nginx \
--with-http_stub_status_module \
--with-http_ssl_module
make && make install
Add the path /usr/local/nginx/sbin to PATH in the ~/.bash_profile file
source ~/.bash_profile    ### make the change take effect
Here are some common nginx commands.
nginx              ### start nginx
nginx -t           ### check the configuration file
nginx -s reload    ### reload the configuration
nginx -s stop      ### stop nginx
vim /usr/local/nginx/conf/nginx.conf
#user nobody;
worker_processes 1;                          ### number of worker processes, usually one per CPU core
#worker_cpu_affinity 0001 0010 0100 1000;    ### bind each worker to a CPU (example for 4 CPUs)
events {
    use epoll;                  ### asynchronous, non-blocking event model
    worker_connections 1024;    ### maximum connections per worker
}
upstream linux {
    server 172.25.44.2:80;
    server 172.25.44.3:80;
}
location / {
    proxy_pass http://linux;    ### proxy requests to the upstream group
    # root html;
    # index index.html index.htm;
}
Reload the service: nginx -s reload
Server2:
yum install httpd -y
echo "Server2" > /var/www/html/index.html
/etc/init.d/httpd start
Server3:
yum install httpd -y
echo "Server3" > /var/www/html/index.html
/etc/init.d/httpd start
Test: browse to Server1's IP, 172.25.44.1
The test results alternate between Server2 and Server3 (round-robin)
vim /usr/local/nginx/conf/nginx.conf
upstream linux {
    ip_hash;    ### hash on the client IP; the same IP always reaches the same backend
    server 172.25.44.2:80;
    server 172.25.44.3:80;
}
nginx -t; nginx -s reload
Web test: because of the IP hash, the first request lands on (say) Server2, and as long as Server2 is up, every refresh keeps showing Server2; if Server2 goes down, requests switch to Server3.
If many clients sit behind the same LAN (and thus share one public IP), ip_hash distributes load unevenly across backends; the sticky module implements cookie-based load balancing instead.
The sticky module must be compiled into nginx statically.
nginx -s stop                                             ### stop nginx
tar zxf nginx-sticky-module-1.0.tar.gz -C nginx-1.8.1     ### unpack into the source tree
make clean
./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module \
--add-module=nginx-sticky-module-1.0/                     ### add the nginx sticky module
make && make install
sticky no_fallback;               ### no failover: if Server2 goes down, the page shows an error instead of moving to Server3
server 172.25.33.22:80 down;      ### marked down, does not participate in scheduling
server 172.25.33.22:80 weight=5;  ### weight 5; a backend is chosen in proportion to its weight
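Putting these directives together, a sticky upstream block might look like this (a sketch assuming the nginx-sticky-module compiled in above; the weight and down flags are optional):

```nginx
upstream linux {
    sticky;                          # cookie-based stickiness from the sticky module
    server 172.25.44.2:80 weight=5;  # chosen 5x as often as a weight-1 backend
    server 172.25.44.3:80;
    # server 172.25.44.2:80 down;    # a backend marked down is skipped
}
```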
On the physical machine: for i in $(seq 10); do curl http://172.25.44.1; done    ### verify scheduling from the command line
Common scheduling algorithms: fair, url_hash, and sticky are third-party modules and require recompiling nginx;
rr (round-robin), ip_hash, and weight are built in, no recompilation required.
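To illustrate the proportional idea behind weight (this is not nginx's actual smooth weighted round-robin algorithm, just a naive shell simulation):

```shell
#!/bin/sh
# Naive proportional scheduler: a weight=5 backend (A) appears 5x per cycle
# compared to a weight=1 backend (B).
backends="A A A A A B"
i=0
for slot in $backends $backends; do
    i=$((i + 1))
    echo "request $i -> server $slot"
done
```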
Tomcat is a lightweight application server; once a project is deployed to Tomcat, its pages can be accessed externally.
Server2 and Server3:
cd /usr/local
sh jdk-6u32-linux-x64.bin
tar zxf apache-tomcat-7.0.37.tar.gz
ln -s jdk1.6.0_32 java
ln -s apache-tomcat-7.0.37 tomcat
vim /etc/profile
export JAVA_HOME=/usr/local/java
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
cd /usr/local/tomcat/bin
./startup.sh
Visit: 172.25.44.2:8080 and 172.25.44.3:8080
Default publishing directory: /usr/local/tomcat/webapps/ROOT
vim test.jsp    ### edit a test page
The time is: <%= new java.util.Date() %>
Visit http://172.25.44.2:8080/test.jsp
and http://172.25.44.3:8080/test.jsp
Server1:
vim /usr/local/nginx/conf/nginx.conf
http {
    upstream linux {
        server 172.25.44.2:8080;    ### round-robin on port 8080
        server 172.25.44.3:8080;
    }
    ...
    location / {
        # proxy_pass http://linux;
        root html;
        index index.html index.htm;
    }
    location ~ \.jsp$ {
        proxy_pass http://linux;    ### only .jsp requests are proxied to Tomcat
    }
}
Restart the service: nginx -s reload
Server2 and Server3, test.jsp content:
<%@ page contentType="text/html; charset=GBK" %>
<%@ page import="java.util.*" %>
<body>
Server Info:
<%
    out.println(request.getLocalAddr() + " : " + request.getLocalPort() + "<br>");
%>
<%
    out.println("<br> ID " + session.getId() + "<br>");
    String dataName = request.getParameter("dataName");
    if (dataName != null && dataName.length() > 0) {
        String dataValue = request.getParameter("dataValue");
        session.setAttribute(dataName, dataValue);
    }
    out.print("<b>Session list</b>");
    Enumeration e = session.getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        String value = session.getAttribute(name).toString();
        out.println(name + " = " + value + "<br>");
        System.out.println(name + " = " + value);
    }
%>
<form action="test.jsp" method="POST">
name: <input type=text size=20 name="dataName">
<br>
key: <input type=text size=20 name="dataValue">
<br>
<input type=submit>
</form>
</body>
Test: 172.25.44.1/test.jsp
In sticky mode, as long as the Tomcat the user first landed on stays up, the user keeps hitting the same Tomcat.
server2: bin/shutdown.sh
The stored data was saved on 44.2; when 44.2 goes down, requests move to 44.3, but the data previously stored on 44.2 is gone.
Hence the concept of cross storage.
Memcached cache cross storage solves this session problem.
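The idea of cross storage is that each Tomcat writes its sessions to the memcached on the *other* node first, using the local memcached only as a fallback. Sketched with the n1/n2 node names used in the context.xml configuration later in this article:

```
# memcachedNodes is the same on both Tomcats:
#   n1 = 172.25.44.2:11211 (memcached on Server2)
#   n2 = 172.25.44.3:11211 (memcached on Server3)
# failoverNodes lists the LOCAL node, so it is used only as a fallback:
#   Tomcat on Server2: failoverNodes="n1"  -> sessions go to n2 first
#   Tomcat on Server3: failoverNodes="n2"  -> sessions go to n1 first
```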
Server2 and Server3:
yum install memcached -y
vim /usr/local/tomcat/conf/context.xml
<Context>

    <!-- Default set of monitored resources -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>

    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
    <!--
    <Manager pathname="" />
    -->

    <!-- Uncomment this to enable Comet connection tracking (provides events
         on session expiration as well as webapp lifecycle) -->
    <!--
    <Valve className="org.apache.catalina.valves.CometConnectionManagerValve" />
    -->

    <!-- set failoverNodes to the local node: "n1" on Server2, "n2" on Server3 -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
        memcachedNodes="n1:172.25.44.2:11211,n2:172.25.44.3:11211"
        failoverNodes="n1"
        requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
        transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
        />
</Context>
Copy the memcached-session-manager jars into the directory /usr/local/tomcat/lib,
rm -fr memcached-session-manager-tc6-1.6.3.jar
(Tomcat 7.0 is used here, so delete the tc6 version of the jar)
/etc/init.d/memcached start
bin/startup.sh
Test Access http://172.25.44.1/test.jsp
This approach mainly solves the problem through cross storage: Tomcat t1 stores its sessions in t2's memcached, so even when t1 and its local memcached m1 go down at the same time, the session data is not lost.
Nginx + Tomcat load balancing, with memcached cross storage to share sessions.