Why is DNS-based global load balancing (GSLB) not working?


Why DNS-Based Global Server Load Balancing (GSLB) Doesn't Work

Pete Tenereillo
3/11/04
Copyright Tenereillo, Inc. 2004

Preface

Fred: Joe, I'm going to catch a flight. How long does it take to get from Hollywood to Los Angeles International Airport?
Joe: Well... it depends on which way you go.
Fred: Well... I think I should take the freeway, right?
Joe: Well, that's a technical question, and I can answer it. At freeway speed, say 60 mph, it takes about 20 minutes.
Fred: OK, thanks.
(Fred browsed Rodeo Drive until an hour before boarding, then sat in traffic for two hours on the way to Los Angeles International Airport and missed his flight.)
Fred: (on the phone) Joe, the traffic was awful and I missed my flight! You said the trip takes only 20 minutes!
Joe: Oh, you didn't ask how long it would take in a traffic jam.
Fred: Is the traffic bad at this time of day?
Joe: Are you kidding? The traffic is always awful. This is Los Angeles!

An answer to a question may be correct under the given conditions, yet if it ignores details both parties know, it misses the point of the discussion. Perhaps, as technologists, we want every problem to have a solution, so we sometimes overlook the most obvious things. Perhaps there are so many details to consider that we get confused, or we get lost in a few of them.

Abstract

DNS-based global server load balancing (GSLB) solutions are sold to provide Internet DNS service plus functions and features beyond standard DNS. This article describes the problems that arise when those GSLB features are used with most Internet services: HTTP, HTTPS, FTP, streaming, and other applications and protocols built on the browser/server (B/S) architecture. I should add "when GSLB is also expected to provide high availability for such services", but I will not, because people always expect a GSLB solution to deliver high availability. That is "obvious".

The Punch Line

Returning multiple A records in a DNS response is a best practice for high-availability GSLB deployments [1], but returning multiple A records defeats the load-balancing features of GSLB (traffic control, site-selection algorithms). The real value of global load balancing (that is, multi-site traffic control) is therefore questionable [2]. One must choose between high availability and load balancing. A technical explanation follows.

The fundamental goal of GSLB

That deploying an Internet site across multiple sites enhances high availability is beyond dispute: if a catastrophic event makes one site unavailable, the other site must be able to take over users' requests so that transactions can continue. Consider an example with server sites deployed in Los Angeles and New York:

An Internet connection failure, a power outage, an SLB device failure, a DoS attack, or another catastrophic event can take an entire site out of service. The GSLB device must detect the failure and route requests to the remaining site so that client requests are answered and transactions can continue.

DNS resolution

For completeness, this section reviews the DNS resolution process with GSLB; if you are a GSLB expert, skip ahead. The following steps show how a client resolves the fully qualified domain name (FQDN) www.trapster.net.

Site A, in Los Angeles, uses virtual IP 1.1.1.1; site B, in New York, uses virtual IP 2.2.2.2. The GSLB device acts as the authoritative name server for www.trapster.net. When a DNS request arrives, the GSLB device decides whether to return 1.1.1.1 or 2.2.2.2.

1) The stub resolver (the software running on the client PC) sends a resolution request to the local DNS server; in this example, the "local DNS" is the DNS server of the client's Internet service provider (ISP) in Atlanta, Georgia. The client receives either the resolution result or an error message. This is a "recursive" query. Note: the stub resolver does not itself chase referrals across the Internet; that is the DNS server's job.
2) The client's local DNS server performs "iterative" queries on the client's behalf, querying the root name servers and eventually the authoritative name server for www.trapster.net. In this example, the GSLB device is that authoritative name server.
3) The GSLB device runs a communication protocol with software or devices at each site, collecting information such as site health, number of session connections, and response time.
4) The software or device at each site measures dynamic characteristics of the path from the site to the client's local DNS server, such as round-trip time (RTT), geographic distance, and BGP hop count.
5) Using the information collected in steps 3 and 4, the GSLB device returns the computed best answer, either 1.1.1.1 or 2.2.2.2, to the client's local DNS server. If the time to live (TTL) is not 0, the answer is cached on the local DNS server so that other clients sharing that server can use it directly (skipping steps 2-4).

After DNS resolution completes, the client establishes a TCP connection to the selected site.
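The decision in step 5 can be sketched in a few lines of Python. This is a minimal, hypothetical model: the metric names, the weighting, and the `pick_site` scoring are illustrative assumptions, not any vendor's actual algorithm.

```python
# Hypothetical GSLB site-selection sketch: pick the virtual IP(s) to return
# based on per-site health and measured metrics (steps 3 and 4 above).

def pick_site(sites):
    """Return a list of virtual IPs to answer with.

    `sites` maps a virtual IP to illustrative metrics: health (bool),
    active session count, and measured RTT to the requesting local DNS
    server in milliseconds. Lower score = better.
    """
    healthy = {ip: m for ip, m in sites.items() if m["healthy"]}
    if not healthy:
        # No site passes its health check: return every address anyway
        # (the article later argues clients can then find a recovering site).
        return list(sites)
    # Toy scoring: weight RTT plus a fraction of the session load.
    best = min(healthy, key=lambda ip: healthy[ip]["rtt_ms"] + 0.1 * healthy[ip]["sessions"])
    return [best]

sites = {
    "1.1.1.1": {"healthy": True, "sessions": 800, "rtt_ms": 40},  # Los Angeles
    "2.2.2.2": {"healthy": True, "sessions": 200, "rtt_ms": 70},  # New York
}
print(pick_site(sites))  # ['2.2.2.2'] - lower combined score wins
```

The single-answer return value is exactly what the rest of the article criticizes: a one-record answer gives the GSLB full control of site selection, at the cost of high availability.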

Browser DNS Caching

Internet Explorer, Netscape, other browsers, and even web proxy caches and mail servers have a built-in "DNS cache": a small database that stores DNS resolution results for a period of time. Normally, how long a DNS result may be stored is specified by the authoritative DNS server that supplies it; this period is called the time to live (TTL). Unfortunately, the browser's cache cannot see the TTL returned by the DNS server. This is because the browser resolves names by calling the operating system's gethostbyname() function (or an equivalent), which returns only the IP address or addresses corresponding to the requested name; the system call gives the calling application no access to the TTL. To work around this, browser developers introduced a configurable TTL of their own. In IE this value defaults to 30 minutes and can be changed in the Windows registry; in Netscape it defaults to 15 minutes and can be changed by editing a line in the prefs.js file.
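Because gethostbyname() returns addresses but no TTL, a browser-style cache has to invent its own fixed expiry. A minimal sketch of that behavior, with the 30-minute constant mirroring IE's default; the resolver callable is a stub standing in for the OS call, and the injectable clock exists only to make the example runnable offline:

```python
import time

BROWSER_TTL = 30 * 60  # IE's default: 30 minutes, regardless of the real DNS TTL

class BrowserDNSCache:
    """Cache resolution results for a fixed period, as old browsers did."""

    def __init__(self, resolver, now=time.monotonic):
        self.resolver = resolver   # callable: hostname -> list of IPs (no TTL!)
        self.now = now
        self.cache = {}            # hostname -> (expiry, ips)

    def lookup(self, hostname):
        entry = self.cache.get(hostname)
        if entry and self.now() < entry[0]:
            return entry[1]        # still cached: DNS is never consulted
        ips = self.resolver(hostname)
        self.cache[hostname] = (self.now() + BROWSER_TTL, ips)
        return ips

# A failover at the authoritative server is invisible until the fixed TTL passes:
clock = [0.0]
cache = BrowserDNSCache(lambda h: ["1.1.1.1"], now=lambda: clock[0])
cache.lookup("www.trapster.net")           # caches 1.1.1.1 (site A)
cache.resolver = lambda h: ["2.2.2.2"]     # GSLB now answers with site B
clock[0] = 29 * 60
print(cache.lookup("www.trapster.net"))    # ['1.1.1.1'] - still the dead site
clock[0] = 31 * 60
print(cache.lookup("www.trapster.net"))    # ['2.2.2.2'] - cache finally expired
```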

How often DNS resolution requests are issued depends mainly on the browser. Older browsers re-resolve on a fixed interval per site: IE issues a request every 30 minutes and Netscape every 15, no matter how many times the user connects to the site during that period. Clicking refresh, or any other combination of actions, does not change this behavior. The only way to flush the browser's DNS cache is to quit and restart the browser (or reboot the computer). In most cases "restart the browser" means closing every open page, not just the one with the connection problem, and it may not even occur to the user to do so. Microsoft fixed this issue long ago; however, recent statistics (2007-8) show a significant portion of browsers still affected. For more on DNS caching and browsers, see http://www.tenereillo.com/BrowserDNSCache.htm

Problems with browser caching

Browser caching has a significant impact on GSLB. If a site goes down in a catastrophic accident, every client currently connected to it will experience connection failures until the browser's DNS cache expires or the user restarts the browser or the computer. At the same time, any client whose local DNS server has cached the failed site's IP will also experience connection failures. This is obviously unacceptable.

An example helps demonstrate how serious the problem is. Take a financial site (securities, stock trading, online banking, and so on) using an active/standby scheme, the simplest and most widely deployed GSLB configuration:

A fictitious site, www.ReallyBigWellTrustedFinancialSite.com, uses site A (1.1.1.1) in Los Angeles as the primary site and site B (2.2.2.2) as the standby site.

1) To implement the scheme, DNS responses for www.ReallyBigWellTrustedFinancialSite.com return a single answer, one "A record": IP address 1.1.1.1. A GSLB device is deployed, using the best, most advanced site health monitoring technology available.
2) Thousands of users connect to site A and transact smoothly; all of them cache IP address 1.1.1.1 in their browsers.

Now disaster strikes:

1) The GSLB device's excellent, advanced site health monitoring immediately detects the failure.
2) The GSLB device notes that site B is still healthy and begins returning IP address 2.2.2.2, routing new requests to site B.
3) But every user already online still has site A's IP address 1.1.1.1 cached in the browser. The GSLB device has no way to notify these users, because they will not issue a new DNS request until their browser caches expire.

For all online users, the site failure effectively lasts half an hour, completely beyond the control of the GSLB device and its advanced site health monitoring.

Yet this is not the worst of it; the situation can be worse still:

1) Some new clients have the address 1.1.1.1 cached neither in the browser nor in the local DNS server. These users request www.ReallyBigWellTrustedFinancialSite.com.
2) The local DNS servers of these clients perform iterative resolution on their behalf (at least for the first request), eventually querying the GSLB device, which answers with the healthy site, IP 2.2.2.2. All is well.
3) However, some clients' local DNS server caches already hold the answer 1.1.1.1, either because the TTL the GSLB device set on it has not yet expired, or because the local DNS server ignores low or zero TTL values (some DNS servers in fact do). Since the answer is still cached, the local DNS server does not issue an iterative query to the GSLB device and never learns that site A, 1.1.1.1, has failed. These new clients also suffer up to half an hour of failure, entirely outside the GSLB device's control.

How to address the browser caching problem

There is a long-established solution to the browser caching problem: the authoritative DNS server (or GSLB device) returns multiple answers ("A records") in the resolution response.

Returning multiple A records in a response is not a trick, and it is not a feature invented by load-balancer vendors. The DNS protocol has supported multiple A records in a response since its original design, and applications such as browsers, proxy servers, and mail servers can take advantage of it.

Here is how it works:

1) The client requests resolution of the FQDN www.trapster.net.
2) After iterative queries (not shown), the client's local DNS server returns two A records, 1.1.1.1 and 2.2.2.2, in that order.
3) The client establishes a connection to site A at IP 1.1.1.1.

1) While the client is transacting smoothly with site A, a catastrophic event takes the site down. The client loses its connection to site A.
2) Because the second A record, 2.2.2.2, was also included in the original answer, the client smoothly connects to site B [2]. Note: what survives depends on the application's architecture; some connection state, such as login status, shopping carts, or financial transactions, may be lost in the disaster, but the client can continue doing business at site B.

GSLB devices are not needed to make multiple A records work, although most GSLB devices support returning them. Every significant DNS server supports multiple A records, and essentially all browser-based commercial sites return multiple A records precisely to cope with browser caching.
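The client-side fallback described in footnote 2 can be sketched with plain sockets: try each address in order, moving on when a connection attempt fails. This is a hedged illustration, not browser source code; and because real A records would share one port, the demo uses two loopback ports to stand in for the two sites.

```python
import socket

def connect_first_alive(endpoints, timeout=1.0):
    """Try each (ip, port) endpoint in order; return a socket to the first
    one that answers, plus the endpoint chosen.

    This mimics what browsers do with a multi-record DNS answer: a dead
    first address costs one failed connection attempt, not a 30-minute outage.
    """
    for ip, port in endpoints:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((ip, port))
            return s, (ip, port)
        except OSError:
            s.close()
    raise ConnectionError("all sites down")

# Loopback demo: "site A" is a port nobody listens on, "site B" is live.
site_b = socket.socket()
site_b.bind(("127.0.0.1", 0))
site_b.listen(1)
live_port = site_b.getsockname()[1]

dead = socket.socket()
dead.bind(("127.0.0.1", 0))
dead_port = dead.getsockname()[1]
dead.close()  # bound, never listened, closed: connections are refused

conn, chosen = connect_first_alive([("127.0.0.1", dead_port), ("127.0.0.1", live_port)])
print(chosen == ("127.0.0.1", live_port))  # True: the client failed over silently
conn.close()
site_b.close()
```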

An Axiom

For GSLB, the only way to achieve high availability is to include multiple A records in DNS responses.

An enormous number of alternative high-availability schemes exist, but none of them really works (see "Alternative methods" below). Short of modifying the registry of every PC that might visit the site, returning multiple A records in the response is the only way.

Why do multiple A records defeat GSLB load-balancing algorithms?

As noted earlier, ordinary DNS servers can return multiple A records, and so can GSLB devices. Even when an Internet site already runs a DNS server (or buys a DNS hosting service), the site's owner will still purchase GSLB devices, often at up to $30,000 apiece, to obtain features beyond what a normal DNS server can provide.
Here's the problem:

None of those features works with multiple A records.

The simple active/standby site algorithm does not work. Static site preference does not. IANA-based site preference does not. DNS persistence does not. RTT or hop-count measurement does not. Geotargeted redirection does not... not one of these features is usable! To show why:

DNS resolution with GSLB was described earlier (for simplicity, steps such as health checks and RTT measurement are omitted here).
1) Suppose the GSLB device prefers site A, IP address 1.1.1.1, and returns the answers in the response in this order:
-1.1.1.1
-2.2.2.2
2) The client's local DNS server receives the answer and caches it. When it in turn answers the client, it may return the records in this order:
-1.1.1.1
-2.2.2.2
Or:
-2.2.2.2
-1.1.1.1

Today, almost all commercial GSLB devices return multiple A records in a specific order, often called an "ordered list". The assumption is that this ordered list will travel across the Internet to the DNS requestor intact [3]. Unfortunately, that assumption is wrong.

Practice shows that the order of addresses in a DNS answer is changed by the client's local DNS server!

Local DNS servers reorder the addresses in answers in order to spread traffic across sites; this is the default behavior of most provider DNS servers [4]. One idea was to set the TTL of the answer to 0 so that the local DNS server would not cache it and reorder it. Unfortunately, the order of the answer is still changed, entirely outside the control of the GSLB device or authoritative name server, so there is no deterministic way to make clients prefer a particular site.
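Footnote 4 describes BIND's behavior: the first record is chosen at random and the rest follow in cyclic order. A deterministic sketch of the cyclic part (using an explicit starting offset in place of BIND's random choice, so the reordering is visible):

```python
def rotate(records, offset):
    """Return `records` rotated so that records[offset] comes first.

    BIND picks the first record at random and returns the rest in cyclic
    order; successive answers from one server walk through these rotations,
    so whatever order the GSLB device chose is lost.
    """
    offset %= len(records)
    return records[offset:] + records[:offset]

gslb_answer = ["1.1.1.1", "2.2.2.2"]  # GSLB's "preferred site first" order
for turn in range(3):                  # three queries through one local DNS
    print(rotate(gslb_answer, turn))
# ['1.1.1.1', '2.2.2.2']
# ['2.2.2.2', '1.1.1.1']   <- half the clients now prefer the "wrong" site
# ['1.1.1.1', '2.2.2.2']
```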

Site cookies: guess what? Still doesn't work!

Most deployed sites require session persistence: however a client first connects to site A, it must be guaranteed to stay connected to site A for the entire session. Even when sites are synchronized well enough to tolerate some level of session migration, real-time synchronization is not possible.

The browser DNS cache offers a glimmer of hope for session persistence. After the client resolves www.trapster.net to site A, IP 1.1.1.1, it keeps connecting to site A until the browser cache expires. As mentioned earlier, that is 30 minutes in IE and 15 minutes in Netscape. Obviously this alone cannot sustain sessions longer than 30 (or 15) minutes, because after the timeout the browser re-resolves the name and the client may connect to the wrong site. Moreover, the 30 or 15 minutes is a fixed period, not an idle timeout. For example, a user visits www.trapster.net, takes a 29-minute phone call, hangs up, and resumes browsing www.trapster.net to order some items; one minute later the browser re-resolves and quite possibly sends the user to the wrong site.

The DNS cache timeout problem is well known, so essentially all SLB (server load balancing) vendors address it, using a method called a "site cookie", usually only for the HTTP protocol (some vendors also implement it for streaming protocols).

1) The client resolves www.trapster.net to site A, IP address 1.1.1.1, connects to site A, and begins transacting. While the client is connected, the SLB device at site A sets an HTTP cookie indicating which site (or even which specific server) the client must continue to connect to.
2) Some time later, the client browser's DNS cache expires and the client re-resolves the name, this time getting site B, IP address 2.2.2.2, which is cached in the browser for another 30 (or 15) minutes. The client now connects to site B, and the site cookie it sends indicates that its current session belongs on site A.
3) The SLB device at site B reads the cookie and sends an HTTP redirect. The FQDN in the redirect cannot be www.trapster.net, because the browser still has 2.2.2.2 cached for that name. Nor can the redirect use the raw address 1.1.1.1, because server software and SSL certificates generally do not work properly unless a DNS name is used. For this reason a site-specific FQDN is used; in this example, the redirect might go to www-a.trapster.net (or site-a.www.trapster.net).
4) The client now uses www-a.trapster.net to reconnect to the original site and continue transacting.
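The four steps above reduce to one small decision at each SLB. A hedged sketch of that decision; the cookie name `site` and the www-a/www-b naming convention are assumptions drawn from the example, not any vendor's API:

```python
def handle_request(host, cookies, this_site):
    """Decide whether an SLB should serve the request or redirect it.

    `host` is the request's Host header, `cookies` a dict of request
    cookies, and `this_site` the label of the site this SLB fronts
    ("a" or "b"). Returns ("serve", set-cookie value) or
    ("redirect", location).
    """
    wanted = cookies.get("site", this_site)
    if wanted == this_site:
        # First visit, or the right site: serve, and pin the session here.
        return ("serve", f"site={this_site}")
    # Wrong site: bounce to a site-specific FQDN. The generic name
    # www.trapster.net cannot be used - the browser has the wrong IP cached.
    return ("redirect", f"http://www-{wanted}.trapster.net/")

# Step 1: the client lands on site A and gets pinned there.
print(handle_request("www.trapster.net", {}, "a"))
# ('serve', 'site=a')
# Steps 2-3: cache expiry sends the client to site B; B reads the cookie.
print(handle_request("www.trapster.net", {"site": "a"}, "b"))
# ('redirect', 'http://www-a.trapster.net/')
```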

After some time, especially on sites with long sessions (such as stock trading or financial sites), a large percentage of users will be connecting through the site-specific FQDN. Even with short session times, some users end up using the site-specific FQDN www-a.trapster.net.

Because some users are already using www-a.trapster.net, high availability requires multiple A records not only for www.trapster.net but for www-a.trapster.net as well. And if the answer for www-a.trapster.net contains both IP 1.1.1.1 and IP 2.2.2.2 then, as described earlier, the client may connect to either site A or site B!

Site cookies do not work with multiple A records, for the same reason that GSLB site selection does not work with multiple A records!

Are GSLB site health checks useful?

We have established that multiple A records are necessary; the remaining question is whether site health checks still add value for a browser-based (B/S) site. When it receives an answer containing multiple A records, a browser-based client applies its own "health check" by trying the addresses in turn; that is exactly why the DNS protocol was designed to allow multiple A records. There are cases where the GSLB device should omit the A record of a site it has detected as failed, but there are many cases where the failed site's record should be returned even though the site is known to be down. The reason is that many kinds of failure are short-lived; in other words, the same problem can hit different sites in sequence or simultaneously. For example:
1) A power failure may affect the data center in one region, and then affect data centers in other regions as the grid is adjusted.
2) A denial-of-service (DoS) attack usually targets a specific IP address. An attack might hit IP address 1.1.1.1 first and then move on to IP address 2.2.2.2.
3) A computer virus may strike data center A; during the half hour it takes to clean it up manually, the virus may break out in data center B.
4) A problem inside an ISP's network, such as a router fault, can affect networks in different regions of a country.

Following the earlier example, site A uses IP address 1.1.1.1 and site B uses IP address 2.2.2.2. If an SLB or GSLB device (or an attached health-check script) finds that site A is down, should it return only the single A record 2.2.2.2? If site B at 2.2.2.2 then also fails while site A is still inside its half-hour recovery window, clients holding only 2.2.2.2 have nowhere to go. It is best to always return the A records of both sites, even when the health-check process detects a failure. Remember, returning multiple records is rarely counterproductive, because the client automatically connects to a healthy site in the A-record list without human intervention.
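The argument can be stated as code: a sketch of an answer-building policy that keeps both A records even when a health check fails. The policy and names express the article's reasoning, not a documented product feature:

```python
def build_answer(sites):
    """Return the A-record list for a DNS response.

    `sites` maps IP -> health (True/False). Following the article's
    advice, a failed site's record is kept in the answer: failures are
    often brief, they can hop between sites, and clients holding a
    multi-record answer skip dead addresses on their own. Healthy sites
    are listed first as a mild hint (resolvers may reorder it anyway).
    """
    healthy = [ip for ip, ok in sites.items() if ok]
    failed = [ip for ip, ok in sites.items() if not ok]
    return healthy + failed

print(build_answer({"1.1.1.1": False, "2.2.2.2": True}))
# ['2.2.2.2', '1.1.1.1'] - site A stays in the answer while it recovers
```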

Alternative methods

A less severe kind of failure can also occur: all the servers at one site fail, while the site's power, Internet connection, and SLB devices keep working. Many commercial solutions address this case, including backup redirection, triangulation, proxying, and NATing. They are discussed here for completeness, but this section explains that although these solutions can handle the less severe server failure, they do not address the more important whole-site failure.

Triangulation

Triangulation is a connection recovery method that applies to all IP protocols.
1) The client is connected to site A and browsing normally.
2) All servers at site A fail (but the SLB, Internet connection, switches, routers, power, and so on remain up). Software running on site A's SLB detects the server failure. All existing TCP connections are of course lost, but clients try to reconnect. Site A's SLB has a pre-established TCP tunnel to site B, and it forwards the clients' new connection requests through the tunnel to site B.
3) The SLB device at site B selects a server to serve the client and returns packets directly to the client, spoofing the source address 1.1.1.1.

Backup redirection

Backup redirection works only for protocols that support redirection at the application layer (such as HTTP, HTTPS, and some streaming protocols).
1) The client sends a request to site A; the requested URL uses the FQDN www.trapster.net, which has resolved to IP address 1.1.1.1.
2) Site A's SLB finds that all its servers are down and sends the user an HTTP redirect to site B. The redirect must use a different FQDN, say www-b.trapster.net: if it used www.trapster.net, the client would simply reconnect to site A using the cached address 1.1.1.1. Nor can the redirect use a raw IP address, because most server software, SSL certificates, and so on require clients to reach the site by FQDN, not by IP address.
3) The client now accesses site B.

IP Proxy and NAT

IP proxying (and NAT) works for all IP protocols and is not described in detail here. In both methods, after discovering that all of site A's servers have failed, site A's device load balances the clients' connections to the VIP at site B (2.2.2.2), just as if it were balancing them to a local server.

Problems with triangulation, backup redirection, IP proxying, and NAT

These methods do speed recovery from failures, but only when the Internet connection, network devices, power, and local SLB device are all working properly: in other words, only when the servers fail. Whether they work in combination with multiple A records is questionable, and using them alone, instead of multiple A records, is saving pennies while losing pounds. If catastrophic site failure is not a concern, there is no compelling argument for GSLB at all. It would seem best to forget GSLB entirely, drop the cost and complexity, and gather all the servers in one data center instead of two, with redundant power, network connectivity, network equipment, and SLB devices.

This means that even though high availability in disaster situations is the most basic requirement for GSLB, these methods meet it only in limited circumstances, so:

For high availability, triangulation, backup redirection, IP proxying, and NAT are either insufficient or unnecessary. Browser-based clients still need multiple A records. [5]

BGP host route injection

There is one more solution, usually called BGP host route injection (HRI) and marketed by at least two vendors as "global IP". It is not a simple GSLB alternative but a replacement for DNS-based GSLB altogether. In outline:

0) (The client performs DNS resolution; the answer for www.trapster.net contains only one IP address, 1.1.1.1.)
1) The SLB devices (or routers) at both site A and site B advertise the address 1.1.1.1 to the Internet. Internet routers propagate the route to 1.1.1.1 via BGP, exchanging metrics along the way, until the route reaches the router closest to the client; that router picks the path with the lowest metric and installs the route to 1.1.1.1 in its routing table.
2) The client connects to whichever site is topologically closest.

Now suppose a catastrophic failure: all the servers fail, or the Internet connection, power, SLB device, or network equipment fails.

1) Site A's SLB device detects the server failure and stops advertising IP address 1.1.1.1 via BGP (or the SLB or its Internet connection is itself destroyed, in which case the advertisement obviously stops).
2) Routes reconverge among the routers of the Internet, and the routing entries toward site A are eventually withdrawn.
3) The client reconnects, still to IP 1.1.1.1, but this time reaches site B.

Although on paper this scheme sounds like the GSLB dream, it is rarely actually deployed, for the following reasons:
- Internet routing is quite complex, and advertising the same IP address from different regions is unreliable. If a routing change occurs during a client's session, packets flap between site A and site B, and the client cannot get through even though both sites are working properly.
- Route convergence can take quite a long time. When a site fails, the client's browser times out and shows an error page. If the user keeps retrying manually, the connection eventually recovers, but recovery times of more than five minutes are possible, and an outage that long is unacceptable for a commercial site.
- A single IP address (a host route) advertised via BGP is usually ignored by Internet routers. One possible workaround is to advertise an entire network prefix, but doing so just for GSLB wastes scarce public IP address space (since only one address, or a handful of addresses, is actually used).
- For security reasons, routers are often configured with source filtering (sometimes called "bogon" filtering), which prevents the same address from being advertised from different geographies. This can usually be resolved by negotiating with the ISP; however, even after the ISP removes the filter, someone usually re-adds it inadvertently, which cuts the site off from the Internet and forces another round of troubleshooting.

BGP HRI is a robust solution for sites distributed across a smaller, geographically compact network, and it may be useful for some Internet applications, but deployment is quite rare because in practice it does not perform as well as the theory suggests.

Conclusion

The only way to achieve high availability for a browser-based (B/S) site with GSLB is to return multiple A records in the response, but returning multiple A records defeats every current site-selection algorithm. Features such as basic active/standby, DNS persistence, site selection by RTT, hop count, or BGP hops, and site selection by IP geotargeting or IANA allocation all stop working.

The good news is that consumers can now take the $30,000 per unit they would have spent on GSLB and spend it instead on more servers and better inter-site synchronization!

Watering it down

At the risk of muddying the argument above [6], I add the following:

At least in theory, a GSLB device can operate in a "best choice + round robin" mode. For example, suppose an FQDN is served by two sites in Europe and two in the United States. For a European client, the GSLB device returns only the A records of the two European sites to the client's local DNS server, for better service; the local DNS server then round-robins the load between those two sites:

0) Beforehand (not shown), the client requests resolution of the FQDN www.trapster.net, and the iterative queries eventually reach the authoritative server for www.trapster.net, which is the GSLB device. The GSLB device runs its site-selection algorithm and selects the Frankfurt and Paris sites for this client, excluding Los Angeles and New York.
1) The GSLB device returns the two sites' IPs, 1.1.1.1 and 2.2.2.2.
2) The client's local DNS server shuffles the order of the A records in the answer, but in this case that behavior is acceptable: the client connects to either the Frankfurt or the Paris site.

For want of an established term, I will call this GSLB topology a "region". Sites A and B above form the "European region"; sites C and D form the "US region". This approach achieves global load balancing only when sites are distributed around the world and grouped into regions, each containing at least two data centers that back each other up. If www.trapster.net's data centers were located in London, New York, and Tokyo, the "best choice + round robin" model would not work: for a client in London, the GSLB device would need to return the London site's record (the nearest site) and at least one other site's IP (New York or Tokyo, neither of them close to London), and the client might then connect to London or to the other site. That obviously defeats the purpose of the site-selection algorithm. Moreover, some site-selection algorithms (such as hop-count calculations) do not work in "best choice + round robin" mode (left as an exercise for the reader), and DNS persistence cannot be combined with it either. Although this elaborate arrangement can serve a very large global site, it is not a general solution for the cases discussed in this article.

Copyright Tenereillo, Inc. 2004

Original address: http://www.tenereillo.com/GSLBPageOfShame.htm

  1. This paper discusses Internet behavior in the sense of the larger Internet. Certainly custom solutions can be created; for example, special software could be distributed to run on client computers. Such solutions are almost never practical for Internet sites and are therefore not discussed here. Also, exceptions can be found in non-custom solutions, such as browsers or DNS servers configured to behave differently than described here; again, those are very rare. Finally, there are applications that are not required to serve browser-based clients; this paper does not apply to those applications.
  2. The timeliness of the connection attempt to the second site is application dependent, but essentially all browser-based applications will try the second IP address after a few unanswered TCP SYN packets.
  3. Some solutions also attempt to achieve site weighting by returning duplicates. For example, to put 75% of the traffic on site one, they return the ordered list {1.1.1.1, 1.1.1.1, 1.1.1.1, 2.2.2.2}. This does not work reliably, as most DNS servers notice the duplicates and round-robin the list as {1.1.1.1, 2.2.2.2}.
  4. The order in which A records are returned by the most commonly deployed DNS server, BIND, is as follows: the first record is chosen at random, and the remaining records are returned in cyclic order. For example, if the ordered list is {1.1.1.1, 2.2.2.2, 3.3.3.3}, the first response might be {2.2.2.2, 3.3.3.3, 1.1.1.1}, and the next response, to a subsequent request from a client sharing that name server, would be {3.3.3.3, 1.1.1.1, 2.2.2.2}. Other DNS resolvers and servers reorder the list differently; for example, the Windows XP DNS cache reorders a response so that any subnet-adjacent IP address is returned first. This paper does not attempt to provide a canonical list of such issues; suffice it to say that, for a number of reasons, the order of an ordered list of A records cannot be expected to be preserved.
  5. Sites that do not support browser-based clients, but do support applications that do not work well with multiple A records (indeed some custom robot-type client applications do not), have no recourse but to use one of these recovery methods.
  6. The "Watering it down" section was part of the original paper. I removed it because I thought it complicated the issue; reviewers suggested that I put it back in. Done, 5/26/04.
