This article explains the steps and basic configuration for building a high-availability load balancing environment with keepalived + Nginx + Tomcat on Ubuntu Server; performance tuning is not covered. First, the roles the components play:
Tomcat – the application server
Nginx – the reverse proxy server, acting as the load balancer
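Keepalived's usual job in this kind of setup is to watch the Nginx instances and move a virtual IP between them. The following is only a minimal sketch, not the article's exact configuration; the interface name, virtual_router_id, password, and the VIP 192.168.1.100 are all assumptions for illustration:

    # /etc/keepalived/keepalived.conf on the MASTER load balancer (sketch)
    vrrp_script chk_nginx {
        script "pidof nginx"      # node is considered healthy only while nginx runs
        interval 2                # run the check every 2 seconds
        weight -20                # drop priority when the check fails
    }

    vrrp_instance VI_1 {
        state MASTER              # use BACKUP and a lower priority on the second node
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.100         # the VIP that clients actually connect to
        }
        track_script {
            chk_nginx
        }
    }

The backup node uses the same file with state BACKUP and a lower priority, so the VIP (and with it the traffic) moves to it if the master or its Nginx process fails.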
With a single server, an outage is very serious; in other words, such a solution still has a problem, and its fault tolerance does not stand up. Of course, there is always a solution to the problem. We introduce the concept of a cluster, which I will call a group: for each node we introduce multiple machines, and each machine holds the same data. Under normal conditions the load is spread across these machines, and when one of them goes down, the load balancer redistributes its share of the load to the machines that are still up. This solves the fault tolerance problem.
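In Nginx terms such a group is simply an upstream block. A minimal sketch of the failover behaviour described above, with backend addresses assumed for illustration:

    upstream app_group {
        server 192.168.1.51:8080 max_fails=2 fail_timeout=10s;  # Tomcat node 1
        server 192.168.1.52:8080 max_fails=2 fail_timeout=10s;  # Tomcat node 2
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_group;   # requests are spread over both nodes;
                                           # a node marked as failed is skipped
        }
    }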
Recently, while working with Microsoft's cloud, we noticed that Azure has launched a Standard SKU of its load balancer, which should be good news for users with higher security requirements, since it lets you configure SNAT. With the Azure load balancer you can:
Load balance incoming Internet traffic to your virtual machines
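As a rough illustration only (the resource group and load balancer names are placeholders, not from the article), a Standard SKU load balancer can be created with the Azure CLI; outbound SNAT rules are then configured on it separately:

    # create a Standard SKU load balancer (names are placeholders)
    az network lb create \
        --resource-group myResourceGroup \
        --name myStandardLB \
        --sku Standard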
Heartbeat + LVS: building a high-availability load balancing cluster
1. Introduction to Heartbeat:
The Heartbeat project is a core part of the Linux-HA project and implements a highly available cluster system. Heartbeat messaging and cluster communication are two key functions of a highly available cluster, and in the Linux-HA project both are provided by the Heartbeat module.
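A minimal Heartbeat v1-style configuration sketch, assuming two nodes named node1 and node2, interface eth0, and a shared virtual IP of 192.168.1.100 (all names and addresses are illustrative, and the nginx resource assumes a matching init script exists):

    # /etc/ha.d/ha.cf (sketch)
    logfile /var/log/ha-log
    keepalive 2          # heartbeat interval in seconds
    deadtime 30          # declare a node dead after 30s of silence
    bcast eth0           # send heartbeats on eth0
    auto_failback on
    node node1
    node node2

    # /etc/ha.d/haresources (sketch): node1 normally owns the VIP and the nginx resource
    node1 IPaddr::192.168.1.100/24/eth0 nginx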
Configuring Apache 2.2 load balancing for a Tomcat 6 cluster. References: http://man.chinaunix.net/newsoft/ApacheMenual_CN_2.2new/mod/mod_proxy.html and http://man.chinaunix.net/newsoft/ApacheMenual_CN_2.2new/mod/mod_proxy_balancer.html. So-called load balancing means that a single server cannot respond to a large number of requests within a short period of time, so the server side needs a mechanism to distribute the requests across several servers.
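A minimal mod_proxy_balancer sketch along those lines; the Tomcat addresses and route names are assumptions, and sticky sessions also require a matching jvmRoute in each Tomcat's server.xml:

    # httpd.conf (sketch): requires mod_proxy, mod_proxy_ajp and mod_proxy_balancer
    <Proxy balancer://tomcatcluster>
        BalancerMember ajp://192.168.1.51:8009 route=tomcat1
        BalancerMember ajp://192.168.1.52:8009 route=tomcat2
    </Proxy>
    # keep each session on the Tomcat instance that created it
    ProxyPass / balancer://tomcatcluster/ stickysession=JSESSIONID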
Currently, dedicated hardware load balancers such as F5 are the most widely used load balancing devices in data centers. You may first get to know load balancing through DNS: a DNS server can return several different IP addresses for the same domain name, which by itself spreads clients across servers.
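The simplest DNS-based approach is round robin with multiple A records for one name; a zone-file sketch with illustrative addresses:

    ; several A records for the same name: the DNS server rotates the order in
    ; which it returns them, spreading clients across the three hosts
    www   IN   A   192.0.2.10
    www   IN   A   192.0.2.11
    www   IN   A   192.0.2.12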
In Ribbon's BaseLoadBalancer, a few collaborators do the work:
- IPing: the object that checks whether a service instance is functioning properly; it defaults to null and is injected at construction time.
- IPingStrategy: the execution policy object for running those instance checks; BaseLoadBalancer uses SerialPingStrategy by default, which traverses the instances and checks them one by one.
- IRule: the object that defines the load balancing rule; BaseLoadBalancer.chooseServer(Object key) actually delegates the selection task to the IRule.
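Both collaborators can be swapped per client through configuration. A sketch of an application.yml, where the client name "myservice" is an assumption:

    # application.yml (sketch): override Ribbon's rule and ping for one client
    myservice:
      ribbon:
        NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule
        NFLoadBalancerPingClassName: com.netflix.loadbalancer.PingUrl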
Add Nginx to the PATH (export PATH=/usr/local/nginx/sbin:$PATH), source the profile so the change takes effect, and start Nginx. Enter the server's IP in a browser (making sure nothing else conflicts with the HTTP port); if "Welcome to nginx!" appears, the installation was successful. When you are ready, change the configuration file:

    upstream app1 {
        ip_hash;
        server 192.168.1.51:80;
        server 192.168.1.52:80;
        server 192.168.1.53:80;
    }
    server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location / {
            proxy_set_header X-Forwarded-
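The snippet above breaks off at the proxy_set_header line. A typical way to finish that location block looks roughly like this (a sketch, not the article's exact text):

    location / {
        proxy_pass http://app1;                          # forward to the upstream defined above
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }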
Spring Cloud client-side load balancer Ribbon (cloudribbon)
I. Load balancing
Load balancing is built on top of the existing network structure and provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, and enhance the network's data processing capability.
CDN Load Balance [1]
As business volume grows, the traffic and data handled by the core components of the existing network grow rapidly, and the required processing power and computing capacity grow with them, until a single server can no longer bear the load. In this case, discarding the existing equipment and doing a large-scale hardware upgrade would waste the existing resources, and the next increase in business volume would force yet another costly upgrade.
IIS load balancing with Application Request Routing: ARR brings load balancing to IIS, an idea I believe you are already familiar with. This series mainly introduces the load balancing software that can be used with IIS: Microsoft's Application Request Routing.
In network applications, "load balancing" is no longer a new topic. From hardware to software, there are many ways to implement it. The load balancing discussed here does not refer to dedicated hardware load balancers.
Nginx load balancing configuration example in detail
First, let's take a brief look at what load balancing is. Taken literally, it means that N servers share the load equally, so that no server goes down because its load is too high while other servers sit almost idle.
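Besides the ip_hash example shown earlier, Nginx can also weight the share each backend receives; a small sketch with assumed addresses:

    # weighted round robin (sketch): the first backend receives roughly
    # three requests for every one sent to the second
    upstream backend {
        server 192.168.1.51:80 weight=3;
        server 192.168.1.52:80 weight=1;
    }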
LVS's working mechanism and scheduling algorithms were recorded in my previous articles; see there. LVS works much like iptables: part of it runs in user space (ipvsadmin) and part in kernel space. The user space part is used to define the load balancing objects and policies, for example a persistent-connection service on TCP port 80.
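For example, in user space ipvsadm might define a persistent virtual service on TCP port 80 with two NAT real servers; the VIP and real-server addresses below are illustrative:

    # virtual service on the VIP, round-robin scheduling, 300s persistence
    ipvsadm -A -t 192.168.1.100:80 -s rr -p 300
    # two real servers behind it, forwarded with NAT (masquerading)
    ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.51:80 -m
    ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.52:80 -m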
The Nginx download address is as follows: http://nginx.net/. The version used in this test is nginx/Windows-0.8.22.
Download and decompress the package to C:\, and rename the directory to nginx.
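On Windows, Nginx is started and managed from that directory; a short sketch of the usual commands (start nginx launches it, -s reload re-reads conf\nginx.conf, -s stop shuts it down):

    cd C:\nginx
    start nginx
    nginx -s reload
    nginx -s stop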
Practice steps:
First:
On the local server (10.60.44.126), create a website bound to port 808, for example:
IIS website binding settings
Second:
On the remote IIS server (10.60.44.127), create a website bound to port 808, for example:
Remote IIS binding settings
Note: The first
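To tie the two steps together, the Windows nginx.conf would point an upstream at both IIS sites; a sketch using the addresses and port from the steps above (the block structure itself is an illustration, not the article's exact file):

    upstream iis_sites {
        server 10.60.44.126:808;   # local IIS site from the first step
        server 10.60.44.127:808;   # remote IIS site from the second step
    }
    server {
        listen 80;
        location / {
            proxy_pass http://iis_sites;
        }
    }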