Using ARR for Load Balancing on the Windows Platform (Part 2)


Overview

In the previous article, Distributed Architecture Practice on the Windows Platform: Load Balancing, we load-balanced a web site with NLB (Network Load Balancing) on Windows and demonstrated its effectiveness through stress testing, with fairly ideal results. Along the way we also collected a number of open questions: how to use Session in such a distributed architecture; the fact that with NLB, a downed server can make the externally exposed address unreachable; how to synchronize content between servers; and how to do hot fixes gracefully. We also noted that NLB only provides fairly basic functionality. To answer those questions, and to offer a more comprehensive and complete load balancing scheme, this article looks at another load balancing implementation on the Windows platform: ARR (Application Request Router), together with Web Farm Framework and URL Rewrite. Hopefully it resolves some of those problems along the way.

Directory
    • Installing and Configuring Load Balancing
      • Installing the Required Components
      • Configuring Load Balancing
    • Site Deployment and Synchronization
      • Synchronizing Installed Programs and the Runtime Environment
      • Synchronizing Site Content and Configuration
      • Configuring the Portal Server
      • Verifying Load Balancing
    • Using Session in an ARR Distributed Environment
      • Server Affinity
    • Building Multiple ARR Servers to Improve Availability
    • Summary
Installing and Configuring Load Balancing

Installing the Required Components

Without further ado: to achieve more complete load balancing, we need to introduce the following five components.

    • Web Deploy V3.0
    • Web Platform Installer V5.0
    • Web Farm Framework 2 for IIS 7
    • Application Request Router 3 for IIS
    • URL Rewrite 2 for IIS 7

Web Farm Framework requires Web Deploy and Web Platform Installer to be installed first, which is why those two are listed at the top; you can simply follow the order above. You will of course also need IIS and ASP.NET. We won't go into the principles behind these five components in detail this time; knowing how to operate them is enough, and interested readers can dig deeper on their own. Installation is very simple, essentially just clicking through each installer. Once everything is installed, you will see a Server Farms node under IIS on the current machine.

Configuring Load Balancing

If you still remember, when configuring load balancing with NLB we did not need a separate machine as the main entry point: we installed NLB on all the web servers, picked any one of them to add the others, and set up a dedicated IP as the entry address. With ARR, things are a little different. We need a dedicated portal server that receives all requests and forwards them to the real web servers behind it according to the configured rules.

We keep the same three web servers from last time and add one more virtual machine with the same configuration. After installing the components listed above on all four servers, we can start configuring. First we create a Web farm on the portal server: in IIS, right-click Server Farms and choose Create Server Farm....

We check "Server farm is available for load balancing", and below it also check "Provision server farm" and enter an account for it; this account needs permission to access every server in the Web farm. Provisioning is mainly used to keep the secondary servers synchronized with the primary one; we will set it aside for now and come back to it later.

We set 192.168.1.130 as our primary web server. Combined with the provisioning feature, deployments and configuration changes made on the primary server are automatically synchronized to the other servers. After the Web farm is created, we can continue configuring it in IIS: clicking Servers under the farm shows the status of each web server, such as whether it is connected properly.

We can also click each Web farm to manage the features below; here we click our farm, mono.

Configuring the Load Balancing Algorithm

First we go into "Load Balance", where we can choose the load balancing algorithm: round robin scheduling, random allocation, URL parameters, request priority, and so on. If you have forgotten what these algorithms do, go back and review the previous article. ARR provides the following seven algorithms:

    1. Weighted round robin: distributes requests according to configured weights.
    2. Weighted total traffic: distributes by the total size of request and response bytes, weighted per server.
    3. Least current request: forwards to the server currently handling the fewest requests.
    4. Least response time: forwards to the server currently responding fastest.
    5. Server variable hash: distributes requests by the hash of a server variable, which can include cookies, the URL, header values, and more.
    6. Query string hash: distributes requests by the hash of the URL query string; if the query string contains multiple parameters (?name=jesse&location=sh), the hash of the entire query string is used.
    7. Request hash: distributes requests by the hash of a server variable or the URL; for example, if the server variable is QUERY_STRING, the hash is computed over the query string's value.

As you may have guessed, the last three algorithms can be used to make Session work in this distributed environment, but that requires additional configuration, so we will come back to it later. For now we focus on the load balancing configuration itself and choose the relatively simple Least response time: whichever server is currently responding fastest gets the next request, proving the old saying that the capable do more work.
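As a quick illustration, here is a minimal Python sketch of the "least response time" selection idea; the server names and timing values are made up, and ARR of course measures response times live rather than reading them from a fixed table.

```python
# Hypothetical recent average response times (ms) per farm server.
# In ARR these numbers come from live measurements; they are
# hard-coded here purely to illustrate the selection rule.
servers = {
    "web-01": 42.0,
    "web-02": 118.5,
    "web-03": 75.3,
}

def least_response_time(times):
    """Pick the server with the lowest recorded response time."""
    return min(times, key=times.get)

print(least_response_time(servers))  # web-01
```

With these sample numbers, web-01 wins every time until its measured response time rises above the others, at which point the next request shifts elsewhere.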

Configuring Forwarding Rules

ARR works as a proxy server: it receives requests but does not process them itself, instead distributing them directly to the actual web servers. We can also configure rules so that some requests are forwarded and others are not, which is thanks to the URL Rewrite component. These settings live under "Routing Rules".

Site Deployment and Synchronization

Synchronizing Installed Programs and the Runtime Environment

In the real world, a first deployment with NLB meant deploying to each server one by one, and repeating the various IIS configurations was tedious. The provisioning functionality provided by ARR helps automate exactly this kind of synchronization.

In the server farm's Features view, we find two types of provisioning:

    • Application Provisioning: mainly used to synchronize web site content, configuration, and so on; this is where Web Deploy comes in.
    • Platform Provisioning: mainly used to synchronize installed programs. The "platform" here refers to the Web Platform Installer we installed: if platform provisioning is enabled, any program or component installed on the primary server through the Web Platform Installer is automatically installed on all other servers.

As a platform provisioning example: click our server farm, double-click Platform Provisioning in the Features view on the right, and tick both options.

Then click Servers on the left, select our primary server (Primary), and choose "Install Product" from the Actions pane; any program installed through the resulting dialog is automatically installed on all the other servers in the current server farm.

Synchronizing Site Content and Configuration

Following the same idea, we do not need to deploy each application repeatedly: we deploy only to the primary server, and all content and IIS settings are automatically synchronized to the other servers. This is what application provisioning does for us. Click our server farm, double-click "Application Provisioning" in the Features view on the right, and check both options.

From then on, we only need to create and deploy our site on the primary server; site-level settings such as application pools also only need to be configured there, with no need to repeat the work on every server.

Configuring the Portal Server

Since the portal server does no processing and only forwards requests, do we still need to put our site's content into the portal server's IIS? That depends on the scenario. You can create an empty site that serves nothing, or you can use the portal server as a simple file server by not forwarding static files in the routing rules, so that the servers in the Web farm only handle dynamic requests and carry less load. Of course, if you have a dedicated file server, even better. For testing purposes we do not create any site on the portal server and simply use the default web site that comes with IIS.

Some readers may wonder, as I did when configuring the server farm: "All requests are received by the portal server and then distributed to specific servers in the farm, but how is the portal server itself configured? Does it listen on port 80 or 8080? If I have several web sites on it, which site's requests are picked up by the farm and forwarded?"

There is no custom site configured on my portal server, which means only the local http://localhost site is accessible; its external address is http://192.168.1.129/. So why, when 192.168.1.129 is accessed from outside, is the request handled by a server in the farm? This is thanks to our URL Rewrite module. Click our farm mono, go to the Features view, click Routing Rules, then click "URL Rewrite..." in the Actions list on the right to manage the routing rules in more detail.

In the URL Rewrite window, we can see that an inbound rule has already been created for us by default.

Double-clicking that rule shows the details and lets us edit it. The rule simply matches all inbound requests with a wildcard and forwards them to our server farm, mono. It is URL Rewrite doing the work here; we could change the * to a different wildcard pattern, or use a regular expression to match instead. These are all standard URL Rewrite features, reused here as-is.
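For reference, the auto-generated rule lives in the server-level configuration and looks roughly like the sketch below; the rule name follows ARR's usual naming convention and the farm name is from this article's setup, so both may differ in yours.

```xml
<!-- Approximate applicationHost.config fragment for the generated
     farm routing rule (farm "mono" from this article). -->
<rewrite>
  <globalRules>
    <rule name="ARR_mono_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="http://mono/{R:0}" />
    </rule>
  </globalRules>
</rewrite>
```

The `{R:0}` back-reference carries the entire matched URL through to the farm, which is why every inbound request ends up load-balanced.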

URL Rewrite matches the inbound request and forwards it to the farm; at the farm level, ARR then forwards the request to a specific server according to the configured load balancing algorithm. Now let's look back at what each of the five components we started with actually does:

    • Web Deploy: participates in application provisioning (site content and configuration synchronization)
    • Web Platform Installer: participates in platform provisioning (application environment synchronization)
    • Web Farm Framework: the main organizer and container
    • Application Request Router: the load balancing itself
    • URL Rewrite: inbound request matching, and more
Verifying Load Balancing

So far, we have built a load balancer with ARR + Web Farm, and the end result is that when we visit http://192.168.1.129 from outside, the request is actually handled by one of the three web servers in our farm. Let's verify it. The method is simple: we put a different identifying file on each server so we can tell which server handled the response (remember to disable application provisioning when deploying these files, otherwise the files on the primary server will be synchronized to the others).

web-02 and web-03 each return a page carrying their own server name. One small issue can come up during testing, though: requests to http://192.168.1.129 may always be handled by web-01, because our page is trivial and only one user is accessing it, so the other two servers never get a turn. In that case we can temporarily remove web-01 from the farm, after which all requests are handled by the remaining servers.


It is that simple. This Web Farm feature is very practical: we can dynamically add or remove servers at runtime, at any time. I remember that when we had only one server, releases were scheduled after 10 p.m. to minimize user impact, so a release often meant a late night. With this feature, releases can happen during the day: take a few machines out of the Web farm, deploy and test them, put them back, then take the rest down and release those too. Of course, this approach only suits small and medium-sized sites; once a site is large, releasing becomes a very strict process, usually with dedicated release engineers or tooling.

Using Session in an ARR Distributed Environment

Using Session in a distributed environment is genuinely controversial: some insist it must never be used, while others use it anyway. For now we won't debate who is right, because there is no best architecture, only the most suitable one, and, as the saying goes, whatever exists has its reasons. We have to admit that Session brings a lot of convenience to the development of management systems and smaller web sites: development is fast and the benefits are tangible. Personally, I think whether to use it depends on the scenario. From a learning perspective, though, we should understand its feasibility and its trade-offs, so that we can make the most appropriate decision in a real situation.

Server Affinity

In the farm's Features view there is a Server Affinity feature, which can be used to track requests and provide stickiness between a client and a server: once a client's first request has been processed, all subsequent requests from that client are handed to the same server. This is server affinity. ARR offers two options:

    • Client Affinity: a cookie is assigned to requests from each client, and that cookie is then used to decide which server should handle subsequent requests.
    • Host Name Affinity: stickiness based on host name, with two providers available:
      • Microsoft.Web.Arr.HostNameRoundRobin: distributes host names across servers as evenly as possible
      • Microsoft.Web.Arr.HostNameMemory: allocates according to memory usage, keeping memory consumption balanced across servers

That is how easy it is to use Session in a distributed architecture built on ARR; we can choose between Client Affinity and Host Name Affinity depending on the situation.
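To make the Client Affinity idea concrete, here is a minimal Python sketch of cookie-based stickiness. The cookie name and the first-visit server pick are illustrative assumptions, not ARR's exact implementation.

```python
import random

SERVERS = ["web-01", "web-02", "web-03"]
AFFINITY_COOKIE = "ARRAffinity"  # illustrative cookie name

def route(request_cookies):
    """Return (server, cookies_to_set) for one incoming request."""
    if AFFINITY_COOKIE in request_cookies:
        # Sticky path: honor the server recorded in the cookie.
        return request_cookies[AFFINITY_COOKIE], {}
    # First visit: pick any server, then pin the client to it.
    server = random.choice(SERVERS)
    return server, {AFFINITY_COOKIE: server}

first, cookies = route({})   # first request gets an affinity cookie
again, _ = route(cookies)    # the follow-up request carries it back
assert first == again        # ...and lands on the same server
```

Because the server identity travels in the cookie, any Session state kept in that server's memory remains reachable for the whole visit, which is exactly what makes in-process Session usable behind the balancer.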

Building Multiple ARR Servers to Improve Availability

Remember what we said in the previous article: introducing load balancing helps on two fronts, reliability and scalability. With multiple servers working together, a partial failure no longer makes the whole site unreachable, which improves reliability; and being able to add or remove servers at any time without affecting access gives us scalability. But there is still a problem here: what if the ARR server itself fails? The chance is relatively low, since the ARR server only does simple request forwarding and runs no real site, but IIS or the Web Farm service could still stop for other reasons. For problems like this, we can improve the site's availability once more by deploying multiple ARR servers.

One ARR server can distribute requests to specific web servers; but with multiple ARR servers, who decides which ARR server handles a given request?

Remember the NLB we covered in the previous article? It needs no separate server: just install NLB on the target machines and configure a shared externally exposed address. With that in place, an external request goes through the following steps:

    1. NLB receives the request first and hands it to a specific ARR server.
    2. That ARR server forwards the request to a specific web server according to the configured load balancing algorithm.
    3. The web server processes the request, and the response is returned to the client.
Summary

ARR gives us more comprehensive load balancing functionality than NLB, and combining ARR with NLB yields even higher availability. Because ARR works as a proxy, its performance is somewhat lower than NLB's, but sometimes stability matters more, right? There are of course many other options to try; Nginx, for example, has long had an excellent reputation in the open source community. The goal of these two articles was to give everyone an intuitive feel for load balancing. In real projects we must also consider how our code is architected: how to make our system run flawlessly in a distributed environment and really exploit the power of distribution. There is a long way to go, with replacing Session with a distributed cache, database clusters, service clusters, queues, and more still ahead of us; this was just one opening in the wall. Stay tuned, and happy coding!

Jesse's original address: http://www.cnblogs.com/jesse2013/p/dlws-loadbalancer2.html
