Microsoft Azure Load Balancing Services and a Sample Application

Microsoft Azure provides load balancing services for the virtual machines (IaaS) and cloud services (PaaS) hosted in it. Load balancing supports application scaling and provides recovery from application failures, among other benefits.

You can access the load balancing service by specifying input endpoints on the service, either through the Microsoft Azure portal or in the application's service model. When you deploy a managed service with one or more input endpoints to Microsoft Azure, the load balancing service provided by the Microsoft Azure platform is configured automatically. To benefit from the resiliency/redundancy of the service, you need at least two virtual machines serving the same endpoint.

The following illustration shows an example of an application hosted in Microsoft Azure that uses the load balancing service to direct incoming traffic (to address/port 1.2.3.4:80) to three virtual machines listening on port 80.

The main features of the Microsoft Azure load balancing service are described below:

PaaS/IaaS Support

Microsoft Azure load balancing services are available for all tenant types (IaaS or PaaS) and all operating system types (Windows or any Linux-based operating system).

PaaS tenants are configured through the service model. IaaS tenants are configured through the Admin portal or PowerShell.
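
For example, an input endpoint can be added to an IaaS VM with the classic Azure Service Management cmdlets. A minimal sketch, assuming a hosted service "MyService" with a VM named "web1" (both names are hypothetical):

# Adds a TCP input endpoint on public port 80, forwarding to local port 80
Get-AzureVM -ServiceName "MyService" -Name "web1" |
    Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -LocalPort 80 -PublicPort 80 |
    Update-AzureVM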

Layer-4 Load Balancer with Hash-Based Distribution

The Microsoft Azure load balancer is a Layer-4 load balancer. It distributes load among a set of available servers (virtual machines) by computing a hash function on the traffic received on a given input endpoint. The hash function is computed such that all packets from the same connection (TCP or UDP) end up on the same server. The Microsoft Azure load balancer uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate the hash that maps traffic to the available servers. The hash function is chosen so that the distribution of connections to servers is quite random. However, depending on the traffic pattern, different connections can be mapped to the same server. (Note that the distribution of connections to servers is not round robin, nor is there any request queue, as mistakenly stated in some articles and blogs.) The basic premise of the hash function is that with a large number of requests coming from many different clients, the requests spread well across the servers.
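
The idea can be sketched as follows. This is purely illustrative and is not Azure's actual hash function; the server addresses and the client tuple are made up for the sketch:

# A connection's 5-tuple always hashes to the same server (until the server set changes)
$servers = "10.0.0.4", "10.0.0.5", "10.0.0.6"                  # DIPs behind the endpoint
function Select-Server($srcIp, $srcPort, $dstIp, $dstPort, $protocol) {
    $tuple = "$srcIp|$srcPort|$dstIp|$dstPort|$protocol"       # the 5-tuple
    $hash  = $tuple.GetHashCode() -band 0x7FFFFFFF             # non-negative hash value
    $servers[$hash % $servers.Count]
}
Select-Server "203.0.113.7" 52100 "1.2.3.4" 80 "tcp"           # repeated calls pick the same server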

Multi-Protocol Support

The load balancing service in Microsoft Azure supports the TCP and UDP protocols. Customers can specify the protocol in the input endpoint specification of their service model, or through PowerShell or the admin portal.
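
For instance, a UDP endpoint can be added to an IaaS VM as follows (a hedged sketch; the names and ports are hypothetical):

# Adds a UDP input endpoint on public port 53, forwarding to local port 53
Get-AzureVM -ServiceName "MyService" -Name "dns1" |
    Add-AzureEndpoint -Name "UdpIn" -Protocol udp -LocalPort 53 -PublicPort 53 |
    Update-AzureVM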

Multi-Endpoint Support

Managed services can specify multiple input endpoints, which are automatically configured on the load balancing service.

At present, multiple endpoints with the same port and protocol are not supported. There is also a limit on the number of endpoints a managed service can have, currently set at 150.

Internal Endpoint Support

Each service can specify up to 25 internal endpoints, which are not exposed to the load balancer and are used for communication between service roles.

Direct Port Endpoint Support (Instance Input Endpoint)

Managed services can specify that a given endpoint should not be load-balanced but should instead give direct access to the virtual machine hosting the service. This lets the application control whether to redirect a client directly to a given application instance (VM), without every request being load-balanced (which could redirect it to a different instance).

Automatic Reconfiguration on Scale-Out/Scale-In, Service Healing, and Updates

The load balancing service works with the Microsoft Azure compute service to ensure that when the number of server instances behind an input endpoint scales out or in (because the number of web role or worker role instances changes, or because additional persistent VMs are placed in the same load balancing group), the load balancing service automatically reconfigures itself to balance across the increased or reduced number of instances.

The load balancing service also reconfigures itself transparently in response to service healing operations performed by the Microsoft Azure Fabric Controller and to service updates performed by the customer.

Service Monitoring

The load balancing service provides the capability to probe the health of the various server instances and to take unhealthy server instances out of rotation. Three types of probes are supported: guest agent probes (on PaaS VMs), HTTP custom probes, and TCP custom probes. For guest agent probes, the load balancing service queries the guest agent in the VM to learn the status of the service. For HTTP, it determines the health of an instance by fetching a specified URL. For TCP, it relies on successfully establishing a TCP session to a defined probe port.
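
On the IaaS side, a custom HTTP probe can be attached when a load-balanced endpoint is created. A minimal sketch with the classic service management cmdlets (the service, VM, and load balancer set names are hypothetical):

# Creates a load-balanced endpoint whose health is checked by fetching /Probe.aspx
Get-AzureVM -ServiceName "MyService" -Name "web1" |
    Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName "WebFarm" -ProbeProtocol http -ProbePort 80 -ProbePath "/Probe.aspx" |
    Update-AzureVM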

Source NAT (SNAT)

All outbound traffic from the service is source NATed (SNAT) using the same VIP address as the inbound traffic. We'll delve into how SNAT works in future articles.

Traffic Optimization Within the Data Center

The Microsoft Azure load balancer optimizes traffic between Microsoft Azure data centers in the same region: Azure tenants in the same region that communicate with each other through their VIPs can bypass the Microsoft Azure load balancer entirely after the TCP/IP connection has been initiated.

VIP Swap

The Microsoft Azure load balancer supports swapping the VIPs of two tenants, moving a tenant from the staging environment to the production environment and vice versa. The VIP swap operation lets clients keep using the same VIP to communicate with the service while a new version of the service is deployed. The new version can be deployed and tested in the staging environment without interfering with production traffic. Once the new version passes the necessary tests, it can be promoted to production by swapping it with the existing production service. Connections already established with the old production deployment remain intact; new connections are directed to the "new" production environment.
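
A swap can be triggered from the management portal or, as a minimal sketch using the classic Azure Service Management cmdlets (the service name is hypothetical), from PowerShell:

# Swaps the staging and production deployment slots of a hosted service
Move-AzureDeployment -ServiceName "MyService"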

Example: Load Balancing Services

Next, we'll see how to use most of the functionality provided by the load balancing service in a sample cloud service. We model the PaaS tenant shown in the following illustration:

The tenant has two front-end (FE) roles and one back-end (BE) role. The FE role exposes four load-balanced endpoints using the HTTP, TCP, and UDP protocols; one of these endpoints is also used to report the health of the role to the load balancer. The BE role exposes three internal endpoints using the HTTP, TCP, and UDP protocols. Both the FE and BE roles also expose a direct port endpoint to the corresponding service instance.

The above service is represented in the Azure service model as follows (some schema details have been removed for clarity):

<ServiceDefinition name="ProbeTenant">
  <LoadBalancerProbes>
    <LoadBalancerProbe name="MyProbe" protocol="http" path="Probe.aspx" intervalInSeconds="5" timeoutInSeconds="100" />
  </LoadBalancerProbes>

  <WorkerRole name="BERole" vmsize="Small">
    <Endpoints>
      <InternalEndpoint name="BE_InternalEP_Tcp" protocol="tcp" />
      <InternalEndpoint name="BE_InternalEP_Udp" protocol="udp" />
      <InternalEndpoint name="BE_InternalEP_Http" protocol="http" port="80" />
      <InstanceInputEndpoint name="InstanceEP_BE" protocol="tcp" localPort="80">
        <AllocatePublicPortFrom>
          <FixedPortRange min="10210" max="10220" />
        </AllocatePublicPortFrom>
      </InstanceInputEndpoint>
    </Endpoints>
  </WorkerRole>

  <WorkerRole name="FERole" vmsize="Small">
    <Endpoints>
      <InputEndpoint name="FE_External_Http" protocol="http" port="10000" />
      <InputEndpoint name="FE_External_Tcp" protocol="tcp" port="10001" />
      <InputEndpoint name="FE_External_Udp" protocol="udp" port="10002" />
      <InputEndpoint name="HTTP_Probe" protocol="http" port="80" loadBalancerProbe="MyProbe" />
      <InstanceInputEndpoint name="InstanceEP" protocol="tcp" localPort="80">
        <AllocatePublicPortFrom>
          <FixedPortRange min="10110" max="10120" />
        </AllocatePublicPortFrom>
      </InstanceInputEndpoint>
      <InternalEndpoint name="FE_InternalEP_Tcp" protocol="tcp" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>

When analyzing the service model, let's start with the definition of the health probe that the load balancer should use to query the health of the service:

<LoadBalancerProbes>
  <LoadBalancerProbe name="MyProbe" protocol="http" path="Probe.aspx" intervalInSeconds="5" timeoutInSeconds="100" />
</LoadBalancerProbes>

This means that we have an HTTP custom probe that uses a URL with the relative path "Probe.aspx". This probe is attached later to a fully specified endpoint.

Next we define the FE role as a WorkerRole. This role has multiple load-balanced endpoints using HTTP, TCP, and UDP, as follows:

<InputEndpoint name="FE_External_Http" protocol="http" port="10000" />
<InputEndpoint name="FE_External_Tcp" protocol="tcp" port="10001" />
<InputEndpoint name="FE_External_Udp" protocol="udp" port="10002" />

Since we do not assign custom probes to these endpoints, their health is controlled by the guest agent on the VM and can be changed by the service using the StatusCheck event.

Next we define an extra HTTP endpoint on port 80 that uses the custom probe defined previously (MyProbe):

<InputEndpoint name="HTTP_Probe" protocol="http" port="80" loadBalancerProbe="MyProbe" />

The load balancer combines the endpoint information and the probe information to create a URL of the form http://{VM DIP}:80/Probe.aspx for querying the health of the service. The service will notice (in its logs, for example) that the same IP accesses it at regular intervals; that is the health probe request coming from the host of the node running the VM.

The service must respond with an HTTP 200 status code for the load balancer to assume that the service is healthy. Any other HTTP status code (for example, 503) causes the load balancer to take the VM out of rotation immediately.

The probe definition also controls how frequently the service is probed. In our case, the load balancer probes the endpoint every 15 seconds. If no positive response is received within 30 seconds (two probe intervals), the probe is considered down and the VM is taken out of rotation. Similarly, if the VM is out of rotation, it is put back in rotation as soon as a positive response is received. If the service's health fluctuates between healthy and unhealthy, the load balancer may decide to delay putting the VM back in rotation until it has responded positively to a number of probes.
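
As a hedged sketch of the service side, a role could answer these probes as follows; Test-ServiceHealthy is a hypothetical stand-in for the service's real health check, and the listener needs sufficient privileges to bind port 80:

function Test-ServiceHealthy { $true }       # placeholder: replace with a real health check

$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add("http://+:80/")       # the probe endpoint's port from the service model
$listener.Start()
while ($listener.IsListening) {
    $context = $listener.GetContext()        # blocks until the next probe (or client) request
    if ($context.Request.Url.AbsolutePath -eq "/Probe.aspx" -and (Test-ServiceHealthy)) {
        $context.Response.StatusCode = 200   # healthy: stay in rotation
    } else {
        $context.Response.StatusCode = 503   # anything but 200 takes the VM out of rotation
    }
    $context.Response.Close()
}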

The FE service also exposes a set of direct ports, one per instance, through an instance input endpoint that connects directly to each FE instance on the specified local port:

<InstanceInputEndpoint name="InstanceEP" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange min="10110" max="10120" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>

The above definition uses TCP ports 10110, 10111, and so on to connect to port 80 of the corresponding FE role VM instance. This feature can be used in a variety of ways:

a) Direct access to a given instance, so that operations act only on that instance.

b) Redirecting a user application to a specific instance after it has first come through the load-balanced endpoint. This can be used for "sticky" sessions to a given instance. Note, however, that this can overload the instance and removes any redundancy.
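
For example, a client with the VIP from the earlier illustration could reach the second FE instance directly (a hedged sketch; port 10110 maps to instance 0, 10111 to instance 1, and so on):

# Tests TCP connectivity straight to FE instance 1, bypassing load balancing
Test-NetConnection -ComputerName "1.2.3.4" -Port 10111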

Finally, the FE role exposes an internal endpoint that can be used for communication between the FE and BE roles:

<InternalEndpoint name="FE_InternalEP_Tcp" protocol="tcp" />

Each role can use the RoleEnvironment class to discover the endpoints it exposes, as well as the endpoints exposed by the other roles.
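
As a hedged sketch, a startup task could enumerate the BE role's internal TCP endpoint like this, assuming the Microsoft.WindowsAzure.ServiceRuntime assembly is resolvable on the role VM:

# Lists the DIP and port assigned to BE_InternalEP_Tcp on each BE instance
Add-Type -AssemblyName "Microsoft.WindowsAzure.ServiceRuntime"
$beInstances = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::Roles["BERole"].Instances
foreach ($instance in $beInstances) {
    $instance.InstanceEndpoints["BE_InternalEP_Tcp"].IPEndpoint
}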

The BE role is also modeled as a WorkerRole.

The BE role does not expose any load-balanced endpoints; it exposes only internal endpoints using HTTP, TCP, and UDP:

<InternalEndpoint name="BE_InternalEP_Tcp" protocol="tcp" />
<InternalEndpoint name="BE_InternalEP_Udp" protocol="udp" />
<InternalEndpoint name="BE_InternalEP_Http" protocol="http" port="80" />

The BE role also exposes an instance input endpoint that connects directly to a BE instance:

<InstanceInputEndpoint name="InstanceEP_BE" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange min="10210" max="10220" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>

The above definition uses TCP ports 10210, 10211, and so on to connect to port 80 of the corresponding BE role VM instance.

We hope the example above demonstrates how to model a service that uses all of the load balancing features.

In future articles, we will see this tenant in action and provide code examples. We will also describe the following in more detail:

a) How SNAT works

b) Custom probes

c) Virtual networks
