CentOS + nginx: configuring a server load balancer from scratch (PHP tutorial)

Nginx load balancing

Nginx is a lightweight, high-performance web server. It can do two things:

  • act as an HTTP server (like Apache)
  • act as a reverse proxy server for load balancing

Nginx can be seen everywhere now; its name often shows up on error pages after a site crashes, which shows that more and more sites use it because it is high-performance, simple to configure, and free and open source.

The first role is as an HTTP server, working together with the php-fpm process to handle incoming requests. nginx itself does not parse PHP; it acts as the server that accepts requests from clients, hands PHP requests over to the PHP process, and returns the result to the client once PHP has finished. This part is simple: after installing nginx and php-fpm, edit their respective configuration files and start them. The operating principle is explained below.

Nginx cannot call or parse external programs directly; all external programs (including PHP) must be invoked through the FastCGI interface. On Linux the FastCGI interface is a socket (either a file socket or an IP socket). Calling a CGI program also requires a FastCGI wrapper (a wrapper here means a program used to start another program), bound to a fixed socket such as a port or a file socket. When nginx sends a CGI request to this socket through the FastCGI interface, the wrapper receives the request and spawns a new thread, which calls the interpreter or external program to process the script and reads the returned data. The wrapper then passes the returned data back to nginx over the same fixed socket through the FastCGI interface, and finally nginx sends it to the client. That is the whole workflow of nginx + FastCGI.
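As a concrete illustration, a php-fpm backend is typically wired up with a location block roughly like the sketch below; the socket address 127.0.0.1:9000 and the document root are assumptions for illustration, not taken from the author's setup:

location ~ \.php$ {
    root           /usr/local/nginx/html;    # assumed document root
    fastcgi_pass   127.0.0.1:9000;           # php-fpm wrapper listening on an ip socket (a unix: file socket also works)
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;           # standard FastCGI parameter set shipped with nginx
}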

A reverse proxy is the opposite of a forward proxy. With a forward proxy, to reach resource B more conveniently you access it indirectly through a proxy; the key point is that the user knows which site is ultimately being accessed. With a reverse proxy, the user does not know what processing happens behind the proxy server: the real work is done by servers on the intranet, and the outside world can only reach the reverse proxy server, which greatly improves security.

Install software

Installing nginx is simple.

1. Install the libraries nginx depends on: pcre (for rewrite), zlib (for compression), and openssl. You can also download, compile, and install them yourself.

yum -y install zlib;
yum -y install pcre;
yum -y install openssl;
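On a stock CentOS install, compiling nginx from source also needs the corresponding development headers; if ./configure complains about missing pcre/zlib/openssl, the -devel packages are the usual fix (this is an addition, not part of the original steps):

yum -y install pcre-devel zlib-devel openssl-devel;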

2. Download the nginx source package (nginx-*.tar.gz), then compile and install it.

tar -zxvf nginx-1.2.8.tar.gz -C ./;
cd nginx-1.2.8;
./configure --prefix=/usr/local/nginx;
make && make install;
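Assuming the --prefix above, you can then check the configuration and start nginx with the bundled binary:

/usr/local/nginx/sbin/nginx -t      # test the configuration for syntax errors
/usr/local/nginx/sbin/nginx         # start nginx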

3. Configuration

Only the content inside the http {} block needs to be modified. The first change is to define the upstream server group; add the following inside the http node:

upstream myServer {
    server www.myapp1.com:80;      # address of server 1 participating in load balancing
    server www.myapp2.com:8080;    # address of server 2 participating in load balancing
}

The upstream directive in nginx supports the following balancing methods:

  • Round robin (the default): requests are handed to the backend servers one by one in order; if a server goes down it is automatically removed.
  • weight: the probability that a server is chosen is proportional to its weight, useful when the backends have uneven capacity.
  • ip_hash: the client IP of each request is hashed and mapped to a fixed server, so the same visitor always reaches the same backend.
  • fair: requests are distributed according to each server's response time, with faster servers given priority.
  • url_hash: requests are distributed according to the hash of the requested URL.

Here I use the default round-robin mode.
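For reference, here is a sketch of how the weight strategy would be written; the weights are made-up values for illustration, and note that fair and url_hash require third-party modules rather than stock nginx:

upstream myServer {
    # myapp1 receives roughly twice as many requests as myapp2
    server www.myapp1.com:80   weight=2;
    server www.myapp2.com:8080 weight=1;
}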

Then direct requests to the myServer group:

location / {
    proxy_pass http://myServer;
}

The complete file (comments removed) is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    upstream myServer {
        server www.myapp1.com:80;
        server www.myapp2.com:8080;
    }

    server {
        listen       80;
        server_name  my22;
        location / {
            proxy_pass   http://myServer;
        }
    }
}
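Whenever nginx.conf is edited like this, the running instance can pick up the change without a full restart:

/usr/local/nginx/sbin/nginx -s reload      # re-read the configuration gracefully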
Next, set up the two backend servers that the reverse proxy balances across.

As you can see, the previous step references two server addresses: www.myapp1.com:80 and www.myapp2.com:8080. The nginx above is installed in a virtual machine, while the two backend servers run on the win8 system of the local machine, where Apache virtual hosts are used to set up the two domain names. The code under the two domains is independent, and the setup is simple:

1. Edit the Apache configuration file

I am using the XAMPP integrated environment. There are two places to modify. First, add the listening port in httpd.conf:

Listen 8080

That is, this section now listens on two ports:

Listen 80
Listen 8080

Also check whether the following line is enabled; if it is commented out, uncomment it, as shown below.

# Virtual hosts
Include conf/extra/httpd-vhosts.conf

Then add the following content to httpd-vhosts.conf:

<VirtualHost *:80>
    # ServerName is the corresponding domain name, i.e. the address of load-balanced server 1
    ServerName www.myapp1.com
    # DocumentRoot is the code folder
    DocumentRoot "E:\soft\xampp\htdocs\www.myapp1.com"
</VirtualHost>

<VirtualHost *:8080>
    ServerName www.myapp2.com
    DocumentRoot "E:\soft\xampp\htdocs\www.myapp2.com"
</VirtualHost>

Modify the Windows hosts file and append the following content:

127.0.0.1        www.myapp1.com
127.0.0.1        www.myapp2.com

Modify the Linux /etc/hosts file and append the following content:

192.168.1.12    www.myapp1.com    # 192.168.1.12 is the local IP of my win8 machine
192.168.1.12    www.myapp2.com

I put a file index.php [E:\soft\xampp\htdocs\www.myapp1.com\index.php] under www.myapp1.com:80.

www.myapp2.com:8080 also contains an index.php [E:\soft\xampp\htdocs\www.myapp2.com\index.php].

The content of the two files is basically the same; the only difference is that one prints "I'm the myapp1" and the other prints "I'm the myapp2".
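For reference, each index.php is presumably something along the lines of the sketch below; this is reconstructed from the output shown later (the printed text and the session view counter), not the author's exact code:

<?php
// index.php under www.myapp1.com; the copy under www.myapp2.com prints "myapp2" instead
session_start();
if (!isset($_SESSION['view'])) {
    $_SESSION['view'] = 0;           // per-session view counter
}
$_SESSION['view']++;
echo "I'm the myapp1\n";
echo "【view】{$_SESSION['view']}";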

If you open www.myapp1.com:80 and www.myapp2.com:8080 in the win8 browser, the two different results are displayed.

In addition, the following results are displayed under CentOS (output tidied up by me), which indicates that the configuration is successful:

[root@bogon nginx]# curl www.myapp1.com:80
I'm the myapp1
【view】1
[root@bogon nginx]# curl www.myapp2.com:8080
I'm the myapp2
【view】1
  ";echo "【view】{$_SESSION['view']}";


After everything is OK, you can access the proxy through a browser to see the effect.

I forgot to mention that the nginx proxy server's address is http://192.168.1.113.

After entering http://192.168.1.113/index.php in the browser, you will find that the page alternates between

I'm the myapp1 and I'm the myapp2

on successive requests, and the view counter does not increase twice per refresh. This confirms the default round-robin mode described above. It also exposes a common problem: without any extra handling, a user's session data is saved on whichever server happened to handle the request (here I simulate two servers with two different folders), so one visitor can end up with several sets of session data. How to solve this will be discussed in the next article; it is actually quite simple.
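(As a preview, one simple mitigation using nothing beyond the ip_hash strategy already listed above is to pin each client IP to a single backend so its session stays in one place; the author's own solution follows in the next article.)

upstream myServer {
    ip_hash;                         # the same client ip always reaches the same backend
    server www.myapp1.com:80;
    server www.myapp2.com:8080;
}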

The copyright of this article belongs to the author iforever (luluyrt@163.com). Reprinting in any form without the author's consent is prohibited; any repost must credit the author and link to the original in a prominent position on the page, otherwise the author reserves the right to pursue legal liability.

