Exchange 2003/2010 Coexistence Mode Environment Migration

Source: Internet
Author: User
Tags: configuration settings, failover, naming convention, disk usage

First, our Exchange 2010 architecture is based on a centralized model and on Exchange 2010 SP3.

The design is based on three DAGs. As of May 14, the Beijing bureau runs two DAGs and the Dalian bureau is still on an Exchange 2007 deployment; our bureau is the only one to have completed the three-DAG Exchange 2010 SP3 transformation.

Second, the Exchange virtualization environment is based on Veeam replication technology, which provides three functions:

1. Veeam virtual machine snapshots use incremental backups, which effectively reduces disk usage, and Veeam's data deduplication further reduces the storage consumed by virtual machines.

2. Veeam uses continuous data replication to provide disaster failover of virtual machines across ESXi host servers.

3. Veeam can identify the Exchange data inside a virtual machine and can back up and archive mailboxes and user content separately, achieving application-level business continuity.

We recommend designing the DAG with an odd number of member servers rather than an even number, because split-brain failures in the DAG cluster can reduce high availability.

The DAG is built on a failover cluster, and the CAS array relies on load balancing, so a DAG and a load-balanced CAS array cannot coexist on a single server. That is, if you install Exchange 2010 on two servers with the CAS, Hub, and Mailbox roles on each, you cannot implement both a DAG and a CAS array. On this topology we generally recommend configuring the DAG and using DNS round robin to provide load balancing for the CAS role. Second, a DAG must be taken into account when calculating mailbox storage space, because it creates an identical copy of each mailbox database on every Mailbox server. For example, if the company has 5,000 mailbox users with 1 GB of mailbox space each, the required storage is not about 5 TB but at least 10 TB. Third, the DAG reads the mailbox database configuration from Active Directory, and replication between domain controllers takes time to synchronize, so it is normal for a newly replicated mailbox database to be temporarily not found; a retry after about 5 minutes usually replicates normally. Don't worry about it.
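The sizing rule above can be sketched as a quick calculation (a hypothetical example; the user count, quota, and copy count come from the scenario in the text, and the numbers are illustrative):

```powershell
# Sketch: total raw storage scales with the number of database copies in the DAG
$users   = 5000   # mailbox users
$quotaGB = 1      # mailbox quota, in GB
$copies  = 2      # identical copies of every database across DAG members
$totalTB = ($users * $quotaGB * $copies) / 1024
"{0:N2} TB of raw capacity before logs and overhead" -f $totalTB
```

With two copies this comes to roughly 10 TB, which is why 5 TB is not enough.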

When configuring a DAG, it is best to have two NICs: one for the production environment and one for replication between the DAG Mailbox servers.

For example, consider the following layout: the MAPI NIC serves the production environment, and a second NIC is used for DAG replication. The IP of the MAPI NIC is on the 10.1.1 segment, and the DAG replication NIC is on the 10.1.2 segment. It is generally recommended to raise the priority of the DAG replication network, but in practice it does not affect operation; either NIC can have the higher priority.
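In the Exchange Management Shell, the dedicated replication network on the 10.1.2 segment can be declared explicitly (a sketch; the DAG name DAG01 and the network names are placeholders, while the subnets follow the example above):

```powershell
# Hypothetical example: define a dedicated replication network for DAG01
New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup DAG01 `
    -Name "ReplNet" -Subnets 10.1.2.0/24 -ReplicationEnabled:$true

# Keep replication disabled on the MAPI network so log shipping prefers ReplNet
Set-DatabaseAvailabilityGroupNetwork -Identity DAG01\MapiNet -ReplicationEnabled:$false
```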

If the number of Mailbox servers in a DAG group is odd (for example, 3 or 5), no witness server is needed. If the number of Mailbox servers is even, you must configure a witness server for quorum arbitration.

Generally we use a Hub Transport server as the witness. If the Hub and Mailbox roles are installed on the same server, you can use a DC as the witness server instead.

CASHUB1 is the witness server, using the C:\dag01 folder as the witness folder.
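Using the names above, the witness settings can be supplied when the DAG is created (a sketch; the DAG name DAG01 and the member servers MBX1 and MBX2 are placeholders, while CASHUB1 and C:\dag01 come from the text):

```powershell
# Create a DAG that uses CASHUB1 as the witness server and C:\dag01 as the witness directory
New-DatabaseAvailabilityGroup -Name DAG01 `
    -WitnessServer CASHUB1 -WitnessDirectory C:\dag01

# Add Mailbox servers; with an even member count, the cluster uses the witness for quorum
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer MBX2
```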

The witness server is a server outside the DAG that is used to establish and maintain quorum when the DAG has an even number of members.

When the number of members of a DAG is odd, the witness server is not used. All DAGs with an even number of members use a witness server.

The witness server can be any computer running Windows Server.

The Windows Server operating system version of the witness server does not need to match the operating system used by the DAG members.

Quorum is maintained at the cluster level underneath the DAG. The DAG has quorum when a majority of its members are online and can communicate with the other online members of the DAG. This notion of quorum is one aspect of the quorum concept in Windows failover clustering. A related and required aspect of quorum in a failover cluster is the quorum resource.

The quorum resource is a resource within a failover cluster that provides a means of arbitration leading to cluster state and membership decisions. The quorum resource also provides persistent storage for configuration information. The companion component of the quorum resource is the quorum log, which is the configuration database for the cluster.

The quorum log contains the following information: which servers are members of the cluster, what resources are installed in the cluster, and the status of those resources (for example, online or offline).

It is crucial that each DAG member have a consistent view of how the DAG's underlying cluster is configured. Quorum acts as the authoritative repository for all configuration information related to the cluster. Quorum is also used as a tie-breaker to avoid "network partitioning" symptoms.

Network partitioning occurs when DAG members cannot communicate with each other even though they are running properly. Partitioning is prevented by always requiring a majority of the DAG members (including the DAG witness server when the member count is even) to be available and interacting for the DAG to be operational.

Plan for high availability and site recovery


Microsoft Exchange Server 2010 includes a new unified framework for mailbox recovery. The framework contains new features such as database availability groups (DAGs) and mailbox database copies.

While these new features can be deployed quickly and simply, careful planning is essential to ensure that any high availability and site recovery solution built on them achieves its intended purpose and meets business requirements.

During the planning phase, system architects, administrators, and other key stakeholders should identify the deployment requirements, especially the high availability and site recovery requirements. Deploying these features requires meeting some general requirements as well as hardware, software, and network connectivity requirements.

General Requirements

Before you deploy a DAG and create a mailbox database copy, make sure that you meet the following system-wide recommendations:

    • The Domain Name System (DNS) must be running. Ideally, the DNS server should accept dynamic updates. If the DNS server does not accept dynamic updates, you must create a DNS host (A) record for each Exchange server; otherwise, Exchange does not function properly.
    • Each Mailbox server in the DAG must be a member server in the same domain.
    • Adding an Exchange 2010 Mailbox server that is also a directory server to a DAG is not supported.

    • The name assigned to the DAG must be a valid, available, and unique computer name of no more than 15 characters.

Hardware requirements

Typically, there is no special hardware requirement specific to a DAG or mailbox database copy. The server that you use must comply with all of the requirements set forth in the Exchange 2010 Prerequisites and Exchange 2010 System requirements topic.

Software Requirements

DAGs are available in both Exchange 2010 Standard Edition and Exchange 2010 Enterprise Edition. In addition, a DAG can contain a mix of servers running Standard Edition and Enterprise Edition.

Each member of a DAG must also run the same operating system. Exchange 2010 is supported on both Windows Server 2008 and Windows Server 2008 R2. All members of a DAG must run either Windows Server 2008 or Windows Server 2008 R2; a DAG cannot contain a combination of the two.

In addition to complying with the prerequisites for installing Exchange 2010, you must meet the operating system requirements.

DAGs use Windows failover clustering technology, so they require the Enterprise edition of Windows Server.

Network Requirements

Each DAG and each DAG member must meet specific network requirements.

The DAG networks are similar to the public, private, and mixed networks used in previous versions of Exchange.

However, unlike previous versions, a single network in each DAG member is now a supported configuration. In addition, the terminology has changed: each DAG no longer uses public, private, or mixed networks. Instead, each DAG has one "MAPI network" (the network used by other servers, such as other Exchange servers and directory servers, to communicate with DAG members) and zero or more "replication networks" (networks dedicated to log shipping and seeding).

Although a single network is supported, we recommend at least two networks per DAG: one MAPI network and one replication network. This provides redundancy for the networks and network paths and allows the system to distinguish between a server failure and a network failure. Using a single network adapter prevents the system from distinguishing between these two types of failure.

Attention:

The product documentation in this area is written with the assumption that each DAG member has at least two network adapters, providing one MAPI network and at least one replication network, and that the system can therefore differentiate between network failures and server failures.

When designing the network infrastructure for DAGs, consider the following:

  • Each member of a DAG must have at least one network adapter that can communicate with all other DAG members. If you are using a single network path, we recommend Gigabit Ethernet. When each DAG member uses a single network adapter, the DAG network must be enabled for replication and configured as the MAPI network; because there is no other network, the system also uses the MAPI network as the replication network. We also recommend accounting for the single network adapter and path when designing the overall solution.
  • Using two network adapters in each DAG member provides one MAPI network, one replication network, and the following recovery behavior:
      • If a failure affects the MAPI network, a server failover occurs (provided a healthy mailbox database copy can be activated).
      • If a failure affects the replication network and the MAPI network is unaffected, log shipping and seeding operations revert to the MAPI network. When the failed replication network is restored, log shipping and seeding move back to the replication network.
  • Each DAG member must have the same number of networks. For example, if one DAG member uses a single network adapter, all members of the DAG must also use a single network adapter.
  • Each DAG can have no more than one MAPI network.

    The MAPI network must provide connectivity to other Exchange servers and other services, such as Active Directory and DNS.

  • Additional replication networks can be added as needed. You can also prevent an individual network adapter from becoming a single point of failure by using network adapter teaming or similar technology.

    However, even with teaming, the network itself remains a potential single point of failure.

  • Each network in a DAG member server must be on its own network subnet. Each server in a DAG can be on a different subnet, but the MAPI and replication networks must be routable and provide connectivity so that:
      • Each network in a DAG member server is on its own subnet, separate from the subnets used by the other networks in that server.
      • The MAPI network of each DAG member can communicate with the MAPI network of every other DAG member.
      • The replication network of each DAG member can communicate with the replication network of every other DAG member.

      • There is no direct route that allows heartbeat traffic from the replication network on one DAG member to reach the MAPI network on another DAG member (or vice versa), and no direct route between multiple replication networks in the DAG.

  • The round-trip network latency between each member must not be greater than 250 milliseconds (ms), regardless of the geographic location of each member of the DAG relative to the other DAG members.

  • For multi-datacenter configurations, the round-trip latency requirement may not be the most stringent network requirement. You must evaluate the total network load, including client access, Active Directory, transport, continuous replication, and other application traffic, to determine the network requirements of your environment.
  • The DAG network supports Internet Protocol version 4 (IPv4) and IPv6.

    IPv6 is supported only when IPv4 is also used; a pure IPv6 environment is not supported.

    The use of IPv6 addresses and IP address ranges is supported only when both IPv6 and IPv4 are enabled on the computer and the network supports both IP address versions.

    When Exchange 2010 is deployed in this configuration, all server roles can send data to and receive data from devices, servers, and clients that use IPv6 addresses.

  • Automatic Private IP Addressing (APIPA) is a feature of Microsoft Windows that automatically assigns an IP address to a computer when no Dynamic Host Configuration Protocol (DHCP) server is available on the network. APIPA addresses (including addresses assigned manually from the APIPA address range) are not supported for use by DAGs or by Exchange 2010.
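The layout rules above can be verified from the Exchange Management Shell (a sketch; the DAG name DAG01 is a placeholder):

```powershell
# List each DAG network with its subnets and member interfaces, to confirm that
# every member exposes the same number of networks and only one is the MAPI network
Get-DatabaseAvailabilityGroupNetwork -Identity DAG01 |
    Format-List Name, Subnets, Interfaces, ReplicationEnabled
```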

DAG Name and IP Address Requirements

During creation, each DAG is given a unique name and is either assigned one or more static IP addresses or configured to use DHCP.

Whether the addresses are static or dynamically assigned, any IP address assigned to the DAG must be on the MAPI network.

Each DAG requires at least one IP address on the MAPI network. When the MAPI network extends across multiple subnets, the DAG requires additional IP addresses. The following figure illustrates a DAG in which all members have their MAPI network on the same subnet.

Database availability groups that have a MAPI network on the same subnet

In this example, the MAPI network in each DAG member is on the 172.19.18.x subnet. Therefore, the DAG needs a single IP address on that subnet.

The next diagram illustrates a DAG with a MAPI network that spans two subnets: 172.19.18.x and 172.19.19.x.

Database availability Group with MAPI network on multiple subnets

In this example, the MAPI network in each DAG member is on a separate subnet. Therefore, the DAG requires two IP addresses, one on each subnet of the MAPI network.

Whenever the DAG's MAPI network is extended across additional subnets, an additional IP address must be configured for the DAG on each of those subnets. Each IP address configured for the DAG is assigned to, and used by, the DAG's underlying failover cluster.
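For a MAPI network spanning the two subnets shown, the DAG would be given one address on each (a sketch; DAG01 and the specific host addresses are placeholders on the 172.19.18.x and 172.19.19.x subnets from the example):

```powershell
# Assign one DAG IP address per MAPI subnet; the underlying failover cluster
# brings online whichever address matches the currently active subnet
Set-DatabaseAvailabilityGroup -Identity DAG01 `
    -DatabaseAvailabilityGroupIpAddresses 172.19.18.10,172.19.19.10
```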

The name of the DAG is also used as the name of the underlying failover cluster.

At any given time, the DAG's cluster uses only one of the assigned IP addresses. When the cluster IP address and Network Name resources come online, Windows failover clustering registers that IP address in DNS.

In addition to the IP address and network name, a cluster name object (CNO) is created in Active Directory. The system uses the cluster's name, IP address, and CNO internally to secure the DAG and for internal communication. Administrators and end users do not need to interact with or connect to the DAG's name or IP address.

Attention:

Although the IP address and network name of the cluster are used internally by the system, Exchange 2010 has no hard dependency on the availability of these resources. Even if the underlying cluster's IP Address and Network Name resources are offline, internal communication still occurs within the DAG by using the server names of the DAG members. However, we recommend that you monitor the availability of these resources regularly to ensure they are not offline for more than 30 days. If the underlying cluster resources are offline for more than 30 days, the Active Directory garbage collection mechanism may invalidate the cluster's CNO account.

Network adapter configuration for DAGs

Each network adapter must be configured correctly for the intended use.

The network adapter used for the MAPI network is configured differently from the network adapter used for the replication network. In addition to configuring each network adapter correctly, you must also configure the network connection order in Windows so that the MAPI network is at the top of the connection order.

MAPI Network Adapter Configuration

The network adapter for use by the MAPI network should be configured as described in the following table.

Networking feature: Setting
Client for Microsoft Networks: Enabled
QoS Packet Scheduler: Enabled
File and Printer Sharing for Microsoft Networks: Enabled
Internet Protocol version 6 (TCP/IPv6): Enabled
Internet Protocol version 4 (TCP/IPv4): Enabled
Link-Layer Topology Discovery Mapper I/O Driver: Enabled
Link-Layer Topology Discovery Responder: Enabled

The TCP/IPv4 properties of the MAPI network adapter are configured as follows:

    • The IP address can be assigned manually or configured to use DHCP.

      If DHCP is used, we recommend using a persistent reservation for the server's IP address.

    • The MAPI network typically uses a default gateway, although one is not required.
    • At least one DNS server address must be configured. For redundancy, it is recommended that you use multiple DNS servers.

    • The "Register this connection address in DNS" check box should be selected.

Replication Network Adapter Configuration

The network adapter used for the replication network should be configured as described in the following table.

Networking feature: Setting
Client for Microsoft Networks: Disabled
QoS Packet Scheduler: Enabled
File and Printer Sharing for Microsoft Networks: Disabled
Internet Protocol version 6 (TCP/IPv6): Enabled
Internet Protocol version 4 (TCP/IPv4): Enabled
Link-Layer Topology Discovery Mapper I/O Driver: Enabled
Link-Layer Topology Discovery Responder: Enabled

The TCP/IPv4 properties of the replication network adapter are configured as follows:

    • The IP address can be assigned manually or configured to use DHCP. If DHCP is used, we recommend using a persistent reservation for the server's IP address.
    • Replication networks typically do not have a default gateway; if the MAPI network has a default gateway, no other networks should have one. Use persistent static routes to route traffic on the replication network to the corresponding network on the other DAG members, through a gateway address that can route between the replication networks. All other traffic that does not match these routes is handled by the default gateway configured on the MAPI network adapter.
    • The DNS server address should not be configured.
    • The "Register this connection address in DNS" check box should not be selected.
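A persistent static route of the kind described above can be added with route.exe (a sketch; the local gateway 10.1.2.254 and the remote replication subnet 10.1.3.0/24 are placeholders):

```powershell
# Persistent (-p) static route: reach the remote replication subnet through a gateway
# on the local replication network, bypassing the MAPI network's default gateway
route -p add 10.1.3.0 mask 255.255.255.0 10.1.2.254
```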

Witness Server Requirements

The witness server is a server outside the DAG that is used to establish and maintain quorum when the number of DAG members is even.

When the number of members of a DAG is odd, the witness server is not used. All DAGs with an even number of members use a witness server.

The witness server can be any computer running Windows Server.

The Windows Server operating system version of the witness server does not need to match the operating system used by the DAG members.

Quorum is maintained at the cluster level under the DAG.

A DAG has quorum when a majority of its members are online and can communicate with the other online members of the DAG. This notion of quorum is one aspect of the quorum concept in Windows failover clustering. A related and required aspect of quorum in a failover cluster is the quorum resource: a resource within a failover cluster that provides a means of arbitration leading to cluster state and membership decisions, together with persistent storage for configuration information. The companion component of the quorum resource is the quorum log, which is the configuration database for the cluster. The quorum log records which servers are members of the cluster, what resources are installed in the cluster, and the state of those resources (for example, online or offline).

It is critical that each DAG member have a consistent view of how the underlying cluster of the DAG should be configured.

Quorum acts as the authoritative repository for all configuration information related to the cluster. Quorum is also used as a tie-breaker to avoid "network partitioning" symptoms.

Network partitioning occurs when DAG members cannot communicate with each other even though they are running properly. Partitioning is prevented by always requiring a majority of the DAG members (including the DAG witness server when the member count is even) to be available and interacting so that the DAG works properly.

Planning for site Recovery

A growing number of businesses recognize that daily access to a reliable and available messaging system is fundamental to their success.

For many organizations, messaging systems are part of a business continuity plan, and site recovery should be considered when designing a messaging service deployment.

Fundamentally, many site recovery solutions involve deploying hardware in a second datacenter.

Finally, the overall design of the DAG, including the number of DAG members and the number of mailbox database copies, depends on each organization's recovery service level agreements (SLAs) covering various failure scenarios. During the planning phase, the solution's architects and administrators determine the deployment requirements, in particular the site recovery requirements: the locations to use and the required recovery SLA targets. The SLA identifies two specific elements that should form the basis of the high availability and site recovery design: the recovery time objective (RTO) and the recovery point objective (RPO). Both values are measured in minutes. RTO is the time required to restore service. RPO describes how current the data is after the recovery operation completes. An SLA can also be defined for restoring full service to the primary datacenter after its problems are resolved.

Solution architects and administrators also determine which users need site recovery protection and whether the multi-site solution is an active/passive or active/active configuration. In an active/passive configuration, no users are normally hosted in the standby datacenter. In an active/active configuration, users are hosted in both locations at the same time, and some percentage of the total number of databases has a preferred active location in the second datacenter. When service fails for the users in one datacenter, those users are activated in the other datacenter.

Building an appropriate SLA usually requires consideration of the following basic questions:

    • What level of service is required after a primary data center failure?
    • Do you need a data service or just a mail service?
    • How current does the data need to be?
    • How many users must I support?
    • How do users access their own data?
    • What is the backup Datacenter Activation Service level agreement (SLA)?
    • How does the service move back to the primary data center?
    • Are resources dedicated to the site recovery solution?

By answering these questions, you have effectively begun to build the framework for the site recovery design of your messaging solution.

The core requirement for recovering from a site failure is to create a solution that places the necessary messaging data in an alternate datacenter hosting the standby mail service.

Namespace planning

When you deploy a site recovery configuration, Exchange 2010 changes the way namespaces are planned and designed.

A correct namespace plan is key to a successful datacenter switchover. From a namespace perspective, each datacenter used in a site recovery configuration is considered active. As a result, each datacenter requires its own unique namespace for the various Exchange 2010 services in that site, including the Outlook Web App, Outlook Anywhere, Exchange ActiveSync, Exchange Web Services, RPC Client Access, Post Office Protocol version 3 (POP3), Internet Message Access Protocol version 4 (IMAP4), and Simple Mail Transfer Protocol (SMTP) namespaces.

In addition, one of the data centers also hosts the Autodiscover namespace.

This design also enables you to perform a single database switchover from the primary datacenter to the second datacenter to validate the configuration of the second datacenter as part of the validation and practice of data center switching.

As a best practice, we recommend using split DNS for the Exchange host names used by clients.

Split DNS refers to a DNS configuration in which internal DNS servers return the internal IP address for a host name, while external (Internet-facing) DNS servers return the public IP address for the same host name. Because split DNS allows the same host name to be used both internally and externally, this approach minimizes the number of host names required.

The following figure describes the namespace planning for a site recovery configuration.

Site Recovery DAG deployment Namespaces

As shown above, each datacenter uses a separate, unique namespace, and DNS servers in a split DNS configuration host the records for each namespace. The Redmond datacenter (considered the primary datacenter) is configured with the namespace protocol.contoso.com.

The Portland datacenter is configured with the namespace protocol.standby.contoso.com. A namespace can carry an alternate label, as in this example; it can be based on regional naming (for example, protocol.portland.contoso.com) or on any other naming convention that fits the organization's needs.

Regardless of the naming convention, it is critical that each datacenter has its own unique namespace.

Certificate Planning

When you deploy a DAG in a single datacenter, there are no unique or special design considerations for the certificate.

However, there are a few additional considerations for certificates when extending a DAG across multiple datacenters in a site recovery configuration. In general, certificate design depends on the clients in use and on the certificate requirements of other applications that use certificates.

However, some specific recommendations and best practices should be followed regarding the type and number of certificates.

As a best practice, you should minimize the number of certificates used for Client Access servers, reverse proxy servers, and transport servers (Edge and Hub). We recommend using a single certificate for all of these service endpoints in each datacenter.

This approach keeps the number of required certificates to a minimum, reducing the cost and complexity of the solution.

For Outlook Anywhere clients, we recommend using a single subject alternative name (SAN) certificate for each datacenter and including multiple host names in the certificate.

To ensure Outlook Anywhere connectivity after a database, server, or datacenter switchover, you must use the same certificate principal name on each certificate and configure the Outlook Provider configuration object in Active Directory with that principal name in Microsoft Standard Form (msstd). For example, if you use the certificate principal name mail.contoso.com, you would configure the attribute as follows:

Set-OutlookProvider EXPR -CertPrincipalName "msstd:mail.contoso.com"

Some applications integrated with Exchange have specific certificate requirements and may require additional certificates. Exchange 2010 can coexist with Office Communications Server (OCS). OCS requires certificates of 1024 bits or greater that use the OCS server name as the certificate principal name. Because using the OCS server name as the certificate principal name would prevent Outlook Anywhere from working correctly, you need to use a separate certificate for the OCS environment.
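A per-datacenter SAN certificate request of the kind recommended above might look like this (a sketch; every host name and path is an illustrative placeholder):

```powershell
# Request a single SAN certificate covering the datacenter's client-facing endpoints
$req = New-ExchangeCertificate -GenerateRequest `
    -SubjectName "cn=mail.contoso.com" `
    -DomainName mail.contoso.com,autodiscover.contoso.com `
    -PrivateKeyExportable $true
Set-Content -Path C:\certreq.txt -Value $req   # submit this file to the CA
```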


Network planning

In addition to the specific network requirements that must be met by each DAG and by each DAG member server, there are requirements and recommendations specific to site recovery configurations. As with all DAGs, whether the DAG members are deployed in a single site or in multiple sites, the round-trip network latency between DAG members must not exceed 250 milliseconds (ms). In addition, there are specific configuration recommendations for DAGs that extend across multiple sites:

    • The MAPI network should be isolated from the replication network. Windows network policies, Windows Firewall policies, or router access control lists (ACLs) should be used to block communication between the MAPI network and the replication networks. This configuration is necessary to prevent cross-network heartbeat traffic.
    • Client-facing DNS records should have a time to live (TTL) of 5 minutes. The downtime clients experience depends not only on how quickly a switchover occurs, but also on how quickly DNS replication occurs and how quickly clients query for updated DNS information. DNS records for all Exchange client services, including Outlook Web App, Exchange ActiveSync, Exchange Web Services, Outlook Anywhere, SMTP, POP3, IMAP4, and RPC Client Access, should be set to a TTL of 5 minutes on both internal and external DNS servers.
    • Use static routes to configure connectivity across the replication networks. To provide network connectivity between the replication network adapters, use persistent static routes. With static IP addresses, this is a quick one-time configuration performed on each DAG member.

      If you use DHCP to obtain IP addresses for the replication networks, you can also use DHCP to assign static routes for replication, which simplifies the configuration process.
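On a Windows DNS server, the 5-minute TTL can be applied when a client-facing record is created (a sketch; the server, zone, host, and address are placeholders, and 300 seconds equals 5 minutes):

```powershell
# dnscmd syntax: /RecordAdd <zone> <node> [<TTL>] <type> <data>
dnscmd dns01 /RecordAdd contoso.com mail 300 A 172.19.18.10
```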

General Site Recovery Planning

In addition to the high availability requirements listed above, there are other recommendations for deploying Exchange 2010 in a site recovery configuration (for example, extending a DAG across multiple datacenters). Decisions made in the planning phase directly affect the success of the site recovery solution. For example, a poor namespace design can cause certificate failures, and an incorrect certificate configuration can prevent users from accessing services.

To minimize the time required to activate the second datacenter and allow it to host the service endpoints of the failed datacenter, the appropriate planning must be completed in advance.

For example:

    • Service level agreement (SLA) objectives for the site recovery solution must be fully understood and documented.
    • The servers in the second datacenter must have sufficient capacity to host the combined user population of both datacenters.

    • The second datacenter must have all the services enabled that are provided in the primary datacenter (unless those services are not included in the site recovery SLA). This includes Active Directory, network infrastructure (DNS, TCP/IP, and so on), telephony services (if Unified Messaging is used), and site infrastructure (power, cooling, and so on).

    • For services in the second datacenter to serve users from the failed datacenter, the correct server certificates must be configured for those services. Some services do not allow instancing (for example, POP3 and IMAP4) and allow only a single certificate to be used.

      In these cases, the certificate must either be a Subject Alternative Name (SAN) certificate that contains multiple names, or the names must be similar enough that a wildcard certificate can be used (assuming your organization's security policy allows wildcard certificates).

    • The necessary services must be defined in the second data center.

      For example, if the first datacenter has three different SMTP URLs on different transport servers, the appropriate configuration must be defined in the second datacenter so that at least one (if not all three) transport servers can host that workload.

    • The necessary network configuration to support datacenter switchover must already be in place. This might mean ensuring that load balancing is configured, that global DNS is configured, and that an Internet connection with the appropriate routing is enabled.
    • You must understand the DNS change policy required to support datacenter switchover. Specific DNS changes, including their Time to Live (TTL) settings, must be defined and documented in support of the effective SLA.
    • You must also establish a strategy for testing the solution and include it in the SLA. Periodically validating the deployment is the only way to ensure that its quality and viability do not degrade over time.

      After validating the deployment, we recommend documenting the parts of the configuration that directly affect the success of the solution.

      In addition, we recommend strengthening the change management processes around those parts of the deployment.
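The certificate requirement above (SAN certificate versus wildcard) comes down to whether every client-facing name in both datacenters is covered. Here is a minimal sketch of the name-matching logic; the hostnames are hypothetical examples, and real TLS validation is done by the client library, not code like this:

```python
def covered(hostname, cert_names):
    """Return True if hostname is covered by one of the certificate names.
    A leading '*.' wildcard matches exactly one DNS label, as in TLS practice."""
    for name in cert_names:
        if name.startswith("*."):
            base = name[2:].lower()
            labels = hostname.lower().split(".")
            # Wildcard covers e.g. legacy.contoso.com but not owa.emea.contoso.com
            if len(labels) >= 2 and ".".join(labels[1:]) == base:
                return True
        elif hostname.lower() == name.lower():
            return True
    return False

# Hypothetical certificates for illustration:
san_cert = ["mail.contoso.com", "autodiscover.contoso.com", "standby.contoso.com"]
wildcard_cert = ["*.contoso.com"]

print(covered("mail.contoso.com", san_cert))           # True
print(covered("legacy.contoso.com", san_cert))         # False: name missing from SAN list
print(covered("legacy.contoso.com", wildcard_cert))    # True
print(covered("owa.emea.contoso.com", wildcard_cert))  # False: wildcard spans one label only
```

The second case is the failure mode to plan for: a name that the second datacenter must answer for but that was never added to the SAN list.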

Planning for Datacenter Switchover

Proper planning and preparation involves not only deploying resources in the second datacenter, such as additional Client Access and Hub Transport servers, but also provisioning preconfigured resources that can be used as part of the datacenter switchover process, thereby minimizing the changes required during a switchover.

Note:

The second datacenter requires Client Access and Hub Transport services even if the mailbox databases in the second datacenter are blocked from automatic activation.

These services are required to perform database switchovers and to test and validate the services and data in the second datacenter.

To better understand how the datacenter switchover process works, it is useful to understand the basic operation of an Exchange 2010 datacenter switchover.

For example, consider a site resilience deployment that includes a DAG with members in two datacenters:

Figure: a database availability group with members in two datacenters

When you extend a DAG across multiple datacenters, you should design it so that the majority of DAG members are in the primary datacenter or, when each datacenter has the same number of members, so that the primary datacenter hosts the witness server. This design keeps services running in the primary datacenter even if the network connection between the two datacenters fails. However, it also means that when the primary datacenter fails, the members in the second datacenter lose quorum.

Nevertheless, datacenter-level failures can and do occur.

If the primary datacenter has failed to the point that effective service and management are no longer possible, you should perform a datacenter switchover to activate the second datacenter. The activation process involves the administrator placing the surviving servers of the failed datacenter into a stopped operational state before services are activated in the second datacenter. This prevents both sets of servers from trying to provide service at the same time.

Because quorum has been lost, the DAG members in the second datacenter cannot bring their databases online on their own. Activating the Mailbox servers in the second datacenter therefore also requires a step that forces those DAG members to establish quorum by removing the servers in the failed datacenter from the DAG (a temporary removal only). The result is a stable, partial-service solution that can tolerate some additional failures and continue to operate normally.

Note:

To tolerate an additional failure after the switchover, the DAG must have at least four members, distributed across two Active Directory sites (that is, at least two members per datacenter).

This is the basic process for restoring Mailbox role functionality in the second datacenter.

Activating the other roles in the second datacenter does not involve explicit operations on the affected servers there. Instead, the servers in the second datacenter become the service endpoints for the services normally hosted by the primary datacenter.

For example, users homed in the primary datacenter might connect to Outlook Web App through Https://mail.contoso.com/owa. After a datacenter failure, these service endpoints are moved to the second datacenter as part of the switchover operation: during the switchover, the service endpoints of the primary datacenter are repointed to the alternate IP addresses of the same services in the second datacenter. This approach reduces the number of changes that must be made to the configuration information stored in Active Directory. Typically, this step can be completed in one of two ways:

    • Updating DNS records, or
    • Using global DNS and load balancers (LBs) configured with alternate IP addresses that can be enabled and disabled to move services between datacenters.

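The endpoint repointing described above can be sketched as a simple mapping from each client-facing name to a per-datacenter virtual IP. The names and addresses below are hypothetical; with the DNS approach the flip becomes a zone update, and with the global DNS plus load balancer approach the same flip is done by enabling the alternate VIP:

```python
# Hypothetical endpoint map: client-facing FQDN -> per-datacenter VIPs
# (addresses drawn from the RFC 5737 documentation ranges).
ENDPOINTS = {
    "mail.contoso.com":         {"primary": "192.0.2.10", "secondary": "198.51.100.10"},
    "autodiscover.contoso.com": {"primary": "192.0.2.11", "secondary": "198.51.100.11"},
}

def switchover(endpoints, active_site):
    """Return the A-record set to publish for the active datacenter."""
    return {fqdn: vips[active_site] for fqdn, vips in endpoints.items()}

# Normal operation publishes primary VIPs; a switchover republishes secondary ones:
records = switchover(ENDPOINTS, "secondary")
print(records["mail.contoso.com"])  # 198.51.100.10
```

Either way, the Active Directory configuration is untouched: only the address behind the stable name changes, which is exactly why the short DNS TTLs recommended earlier matter.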

Clearly, these planning steps have a direct effect on the success of a datacenter switchover.

For example, a poor namespace design can cause certificate problems, and an incorrect certificate configuration can prevent users from accessing services.

After validating the deployment, we recommend documenting all the parts of the configuration that directly affect the success of a datacenter switchover. In addition, the change management processes around those parts of the deployment should also be strengthened.

