Configure an HTTP Load balancer using HAProxy


As web-based applications and services multiply, IT system administrators shoulder more and more responsibility. In the face of unexpected events such as traffic spikes, or internal challenges such as hardware failure or urgent maintenance, your web applications must stay available no matter what. Even popular practices such as DevOps and continuous delivery can threaten the reliability and performance consistency of your web services.

Unpredictable, inconsistent performance is unacceptable. But how can we eliminate these risks? In most cases, a suitable load balancing solution solves the problem. Today, I will show you how to configure an HTTP load balancer using HAProxy.

What is HTTP load balancing?

HTTP load balancing is a networking solution that distributes incoming HTTP or HTTPS requests across a group of servers that serve the same web application content. By balancing requests among multiple servers, a load balancer prevents any single server from becoming a single point of failure and improves overall availability and responsiveness. It also lets you scale horizontally, adjusting the workload by adding or removing servers.
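To make the idea concrete, here is a toy sketch (a few lines of Python, not HAProxy itself) of what an HTTP load balancer does at its core: rotate requests across a pool of backends, skipping any backend that has failed a health check. The server names and addresses mirror the examples used later in this tutorial.

```python
# Toy sketch of the core idea: rotate requests across a backend pool,
# skipping any backend that failed its health check. This is an
# illustration only, not how HAProxy is implemented.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = backends                      # list of (name, address)
        self.healthy = {name: True for name, _ in backends}
        self._ring = cycle(backends)

    def pick(self):
        """Return the next healthy backend, round-robin style."""
        for _ in range(len(self.backends)):
            name, addr = next(self._ring)
            if self.healthy[name]:
                return name, addr
        raise RuntimeError("no healthy backends available")

balancer = RoundRobinBalancer([("web01", "192.168.100.2:80"),
                               ("web02", "192.168.100.3:80")])
print(balancer.pick()[0])            # web01
print(balancer.pick()[0])            # web02
balancer.healthy["web02"] = False    # simulate a failed health check
print(balancer.pick()[0])            # web01 again: web02 is skipped
```
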

When and under what circumstances should I use a load balancer?

A load balancer improves server performance and maximizes availability. You can deploy one when your servers start to experience high load, and placing a load balancer at the front of the architecture is a good habit when designing a large project. It is also useful when your environment needs to scale out.

What is HAProxy?

HAProxy is a popular open-source load balancing and proxying solution for TCP/HTTP servers on the GNU/Linux platform. With its single-threaded, event-driven architecture it can easily handle 10 Gbps of traffic, and it is widely used in production environments. Its features include automatic health checks, customizable load balancing algorithms, HTTPS/SSL support, and session rate limiting.

How to achieve load balancing in this tutorial

In this tutorial, we will configure an HAProxy-based load balancer for HTTP web servers.

Prerequisites

You need at least one, and preferably two, backend web servers to verify that load balancing works. We assume that the backend HTTP web servers are already configured and running.


Install HAProxy in Linux

On most distributions, HAProxy can be installed with the distribution's package manager.

Install HAProxy in Debian

On Debian Wheezy, we need to add the backports source: create a file named backports.list under /etc/apt/sources.list.d with the following line:

  deb http://cdn.debian.net/debian wheezy-backports main

Refresh the repository data and install HAProxy:

  # apt-get update
  # apt-get install haproxy

Install HAProxy in Ubuntu

  # apt-get install haproxy

Install HAProxy in CentOS and RHEL

  # yum install haproxy
Configure HAProxy

This tutorial assumes two running HTTP web servers with the IP addresses 192.168.100.2 and 192.168.100.3. We will configure the load balancer at 192.168.100.4.

For HAProxy to work properly, you need to modify a number of options in /etc/haproxy/haproxy.cfg. We explain these changes in this section. Some settings vary between GNU/Linux distributions; where they do, this is noted.

1. Configure logging

The first thing to do is to add logging to HAProxy, because it will be useful for future debugging. The log configuration lives in the global section of /etc/haproxy/haproxy.cfg. Below are the log directives for different Linux distributions.

CentOS or RHEL:

To enable logging on CentOS/RHEL, replace:

  log 127.0.0.1 local2

with:

  log 127.0.0.1 local0

Then configure separate HAProxy log files under /var/log by modifying the current rsyslog configuration. To keep things clean, create a file named haproxy.conf under /etc/rsyslog.d and add the following content:

  $ModLoad imudp
  $UDPServerRun 514
  $template Haproxy,"%msg%\n"
  local0.=info -/var/log/haproxy.log;Haproxy
  local0.notice -/var/log/haproxy-status.log;Haproxy
  local0.* ~

This configuration separates the HAProxy logs under /var/log according to the $template defined above. Restart rsyslog to apply the changes:

  # service rsyslog restart
Debian or Ubuntu:

To enable logging on Debian or Ubuntu, replace:

  log 127.0.0.1 local0

with:

  log /dev/log local0
  log /dev/log local1 notice

Then configure log separation for HAProxy: edit haproxy.conf under /etc/rsyslog.d/ (it may be named 49-haproxy.conf on Debian) and add the following content:

  $ModLoad imudp
  $UDPServerRun 514
  $template Haproxy,"%msg%\n"
  local0.=info -/var/log/haproxy.log;Haproxy
  local0.notice -/var/log/haproxy-status.log;Haproxy
  local0.* ~

This configuration separates the HAProxy logs under /var/log according to the $template defined above. Restart rsyslog to apply the changes:

  # service rsyslog restart
2. Set the default options

The next step is to set the default options for HAProxy. In the defaults section of /etc/haproxy/haproxy.cfg, replace the existing content with the following configuration:

  defaults
      log     global
      mode    http
      option  httplog
      option  dontlognull
      retries 3
      option  redispatch
      maxconn 20000
      contimeout 5000
      clitimeout 50000
      srvtimeout 50000

The configuration above is recommended for HTTP load balancing, but it may not be the optimal solution for your environment. If not, consult the HAProxy manual and tune it yourself.
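Note that contimeout, clitimeout, and srvtimeout are legacy directive names; HAProxy 1.5 and later deprecates them in favor of the timeout family of directives and warns about them at startup. If your installed version is recent, the equivalent modern form (values in milliseconds) would be:

```
timeout connect 5000
timeout client  50000
timeout server  50000
```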

3. Web cluster configuration

The web cluster configuration defines the pool of available HTTP servers; most of our load balancing settings live here. Now we create some basic configuration that defines our nodes. Replace everything from the frontend section to the end of the file with the following:

  listen webfarm *:80
      mode http
      stats enable
      stats uri /haproxy?stats
      stats realm Haproxy\ Statistics
      stats auth haproxy:stats
      balance roundrobin
      cookie LBN insert indirect nocache
      option httpclose
      option forwardfor
      server web01 192.168.100.2:80 cookie node1 check
      server web02 192.168.100.3:80 cookie node2 check

"listen webfarm *:80" defines the address and port the load balancer listens on. For the purposes of this tutorial I set it to "*", meaning it listens on all interfaces. In a real deployment this may be inappropriate; replace it with the interface address that is reachable from the Internet.
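For example, if the load balancer's public-facing address were 203.0.113.10 (a documentation-range address used here purely as an illustration, not part of this setup), the line would read:

```
listen webfarm 203.0.113.10:80
```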

  stats enable
  stats uri /haproxy?stats
  stats realm Haproxy\ Statistics
  stats auth haproxy:stats

As defined above, the load balancer's statistics can be viewed at http://<load-balancer-IP>/haproxy?stats. Access requires simple HTTP authentication with username "haproxy" and password "stats"; replace these with your own credentials. If you do not need the statistics page, you can disable it entirely.
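If you later want to script access to the statistics page, for instance from a monitoring job, the request needs HTTP Basic authentication with the username and password configured above. A small Python sketch that builds (but does not send) such a request:

```python
# Build (but do not send) an authenticated request for the HAProxy
# statistics page. The IP address and the haproxy/stats credentials
# are the ones used in this tutorial; substitute your own.
import base64
from urllib.request import Request

def stats_request(host, user="haproxy", password="stats"):
    """Return a urllib Request for http://<host>/haproxy?stats with Basic auth."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = Request(f"http://{host}/haproxy?stats")
    req.add_header("Authorization", f"Basic {creds}")
    return req

req = stats_request("192.168.100.4")
print(req.get_header("Authorization"))   # Basic aGFwcm94eTpzdGF0cw==
```

Passing the returned object to urllib.request.urlopen() would fetch the page; HAProxy also serves a machine-readable CSV variant of the statistics if you append ";csv" to the URI.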


The "balance roundrobin" line specifies which load balancing algorithm we use. In this tutorial we use simple round robin, which fully meets the needs of HTTP load balancing. HAProxy also offers other algorithms:

  • leastconn: sends a request to the server with the fewest active connections.
  • source: hashes the client IP address of the request, and dispatches it according to the hash value and the servers' weights.
  • uri: hashes the left part of the URI (the part before the question mark), and dispatches according to the hash result and the servers' weights.
  • url_param: dispatches based on a URL query parameter of each HTTP GET request, so requests carrying the same parameter value go to the same server.
  • hdr(<name>): dispatches based on the <name> field of the HTTP header.
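To make the hash-based modes concrete, here is a toy Python sketch of the idea behind "source" and "uri" balancing; the hash function is only illustrative, not the one HAProxy actually uses:

```python
# Toy illustration of the "source" and "uri" balance modes: a stable
# hash of the client IP, or of the URI up to the "?", picks a backend,
# so the same client (or the same resource) keeps reaching the same
# server. The hash below is illustrative, not HAProxy's actual one.
import hashlib

SERVERS = ["web01", "web02"]

def _bucket(key):
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % len(SERVERS)

def balance_source(client_ip):
    return SERVERS[_bucket(client_ip)]

def balance_uri(uri):
    path = uri.split("?", 1)[0]       # hash only the part before "?"
    return SERVERS[_bucket(path)]

# The same client always maps to the same backend:
print(balance_source("203.0.113.7") == balance_source("203.0.113.7"))   # True
# In uri mode, query parameters do not change the chosen backend:
print(balance_uri("/index.html?page=2") == balance_uri("/index.html"))  # True
```
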

"cookie LBN insert indirect nocache" makes our load balancer cookie-aware, so it can bind a session to a specific node in the backend pool. The cookie that identifies the node is stored under a custom name; we use "LBN" here, but you can specify any other name. The backend node is then kept for the lifetime of the cookie's session.

  server web01 192.168.100.2:80 cookie node1 check
  server web02 192.168.100.3:80 cookie node2 check

These lines define our web server nodes. Each server is described by an internal name (such as web01 and web02), an IP address and port, and a unique cookie string. The cookie strings can be anything you like; here I simply use node1, node2, ... node(n).
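The cookie mechanism can be sketched in a few lines of Python. This is a simulation of the idea behind "cookie LBN insert" (a first request gets a round-robin assignment plus a Set-Cookie value; later requests carrying the cookie stay pinned), not HAProxy's actual implementation:

```python
# Simulation of cookie-based stickiness: a request without the LBN
# cookie gets a round-robin assignment and a Set-Cookie value; a
# request that already carries the cookie is pinned to its node.
# Node and server names mirror the configuration above.
from itertools import cycle

NODES = {"node1": "web01", "node2": "web02"}
_ring = cycle(NODES)                  # cycles over the node names

def route(cookies):
    """Return (server, cookies_to_set_on_the_response)."""
    node = cookies.get("LBN")
    if node in NODES:                 # sticky: honor the existing cookie
        return NODES[node], {}
    node = next(_ring)                # new session: round-robin assignment
    return NODES[node], {"LBN": node}

server, set_cookie = route({})        # first request carries no cookie
print(server, set_cookie)             # web01 {'LBN': 'node1'}
print(route(set_cookie))              # ('web01', {}) -- pinned, nothing to set
```
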

