Install Nginx on CentOS 7.2 for Load Balancing

Download Nginx

Download the source tarball from the official website at http://nginx.org/en/download.html and select the latest stable release.
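For example, the tarball can be fetched directly with wget (the version number below is only an illustration; substitute the release you selected):

wget http://nginx.org/download/nginx-1.20.2.tar.gz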

Uninstall httpd

If the httpd service is installed by default, uninstall it. This step is optional; removing httpd simply keeps the default port 80 free for Nginx.

yum -y remove httpd
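If you are unsure whether httpd is present at all, you can check first (if the query reports that the package is not installed, the removal step can be skipped):

rpm -q httpd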
Extract
tar -xzvf nginx-xxxxxx.tar.gz
Install the Compiler and Dependent Libraries
yum install gcc gcc-c++ zlib-devel pcre-devel openssl-devel openssl-libs openssl -y

Once the packages are installed, continue with the pre-installation configuration.

Pre-installation Configuration

Use the cd command to enter the extracted source directory, then run the configure script:

./configure --prefix=/usr/local/nginx

This installs Nginx into the /usr/local/nginx directory.
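If you intend to use the commented HTTPS server block shown in the configuration example later, the SSL module can be enabled at this stage; this is optional and just a sketch:

./configure --prefix=/usr/local/nginx --with-http_ssl_module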

Compile
make
Install
make install
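After make install completes, the prefix directory should contain the usual layout: conf (configuration files), html (default pages), logs (log files), and sbin (the nginx binary).

ls /usr/local/nginx
# conf  html  logs  sbin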

After the installation is complete, configure an environment variable so that you no longer need the absolute path to run Nginx:

vim /etc/profile.d/http.sh

Add the following content:

export PATH=/usr/local/nginx/sbin:$PATH

Apply the configuration (!$ expands to the last argument of the previous command, i.e. /etc/profile.d/http.sh):

source !$
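To confirm that the binary is now found through PATH, print the version:

nginx -v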
Start Nginx
nginx

nginx -s followed by stop or reload stops or reloads Nginx; running nginx with no arguments starts the service. If startup reports that the port is already in use, find the process occupying it or change the listening port in the /usr/local/nginx/conf/nginx.conf file.
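For example (ss ships with the iproute package on CentOS 7):

nginx -s stop           # stop the running service
nginx -s reload         # reload the configuration without stopping
ss -lntp | grep ':80'   # find the process currently listening on port 80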

Access Nginx

Enter http://ip:port in a browser. If the "Welcome to nginx!" page appears, the installation is successful. If the page cannot be reached, check whether the firewall is blocking the corresponding port.
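On CentOS 7 the default firewall is firewalld; a sketch for opening port 80 (adjust the port if you changed it in nginx.conf):

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload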

Load Balancing Configuration Example
# user  nobody;
worker_processes  2;

# error_log  logs/error.log;
# error_log  logs/error.log  notice;
# error_log  logs/error.log  info;

# pid  logs/nginx.pid;

events {
    accept_mutex on;          # serialize accept() to avoid the thundering herd problem; the default is on
    multi_accept on;          # whether a worker may accept multiple new connections at once; the default is off
    worker_connections 1024;  # maximum number of connections per worker
}

http {
    include       mime.types;               # mapping of file extensions to MIME types, mainly used for static resources deployed on nginx
    default_type  application/octet-stream;

    # log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    # tcp_nopush    on;

    # keepalive_timeout  0;
    keepalive_timeout  65;    # connection timeout

    gzip  on;

    # Reverse proxy / load balancing

    # [Configuration 1] A combination of [Configuration 4] and [Configuration 5]:
    # requests are forwarded to two web servers, the target host is chosen by client IP,
    # and traffic is also distributed by weight.
    upstream app1 {
        ip_hash;
        server 192.168.14.132:8080 weight=5;
        server 192.168.14.133:80   weight=3;
    }

    # [Configuration 2] Default load balancing: nginx distributes requests across the servers in turn.
    # upstream app1 {
    #     server 192.168.14.132:8080;
    #     server 192.168.14.133:80;
    # }

    # [Configuration 3] Least-connection load balancing: nginx tries not to use busy servers
    # and instead sends new requests to less busy ones.
    # upstream app1 {
    #     least_conn;
    #     server 192.168.14.132:8080;
    #     server 192.168.14.133:80;
    # }

    # [Configuration 4] Session persistence with ip_hash: the client IP address is used as the
    # hash key to decide which server in the group handles the request, so requests from the
    # same client always go to the same server unless that server is unavailable.
    # upstream app1 {
    #     ip_hash;
    #     server 192.168.14.132:8080;
    #     server 192.168.14.133:80;
    # }

    # [Configuration 5] Weighted load balancing: server weights further influence the
    # balancing algorithm. Servers without a weight are considered equally qualified
    # for the given load balancing method.
    # upstream app1 {
    #     ip_hash;
    #     server 192.168.14.132:8080 weight=3;
    #     server 192.168.14.133:80   weight=2;
    #     server 192.168.14.134:80;
    #     server 192.168.14.135:80;
    # }

    server {
        # Multiple server blocks can be configured to listen on different IP addresses and ports.
        listen       80;           # listening port
        server_name  localhost;    # server name

        # charset koi8-r;

        # access_log  logs/host.access.log  main;

        # "/" matches all requests; this assigns every request to the upstream group app1
        # for load balancing.
        location / {
            proxy_pass http://app1;
        }

        # Image file path. Static files are usually served from the local machine to speed up
        # the response; multiple such locations can be configured as needed.
        location ~ \.(gif|jpg|png)$ {
            root /home/root/images;
        }

        location ~ \.(iso|zip|txt|doc|docx)$ {
            root /home/root/files;
        }

        # error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # FastCGI (Common Gateway Interface) support; Tomcat is used here instead, so this
        # configuration is ignored.
        # location ~ \.php$ {
        #     root           html;
        #     fastcgi_pass   127.0.0.1:9000;
        #     fastcgi_index  index.php;
        #     fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #     include        fastcgi_params;
        # }

        # Blacklist: deny access to .ht files, if Apache's document root concurs with nginx's one.
        # location ~ /\.ht {
        #     deny  all;
        # }
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    # server {
    #     listen       8000;
    #     listen       somename:8080;
    #     server_name  somename  alias  another.alias;
    #     location / {
    #         root   html;
    #         index  index.html index.htm;
    #     }
    # }

    # HTTPS server
    # server {
    #     listen       443 ssl;
    #     server_name  localhost;
    #     ssl_certificate      cert.pem;
    #     ssl_certificate_key  cert.key;
    #     ssl_session_cache    shared:SSL:1m;
    #     ssl_session_timeout  5m;
    #     ssl_ciphers  HIGH:!aNULL:!MD5;
    #     ssl_prefer_server_ciphers  on;
    #     location / {
    #         root   html;
    #         index  index.html index.htm;
    #     }
    # }
}

After editing the configuration, remember to execute the following command so that the changes take effect.

nginx -s reload
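It is also worth validating the file before reloading and then confirming that requests actually reach the upstream servers. A quick sketch, assuming Nginx listens on port 80 of the local machine:

nginx -t                # test the configuration file for syntax errors
nginx -s reload
# send a few requests and print only the HTTP status codes
for i in $(seq 1 5); do curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/; done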
