Use a Linux cluster for uninterrupted authentication

As an organization adds applications and services, centralizing authentication and password services improves security and reduces the burden on administrators and developers. However, concentrating these services on a single server can create a reliability problem. High availability is especially important for an enterprise authentication service, because in many cases the entire enterprise grinds to a halt when authentication stops. This article describes how to build a reliable, highly available authentication server using open source software.
  
  
Open source software we use
  
We use an LDAP (Lightweight Directory Access Protocol) server to provide an authentication service that various applications can consume. To make the LDAP server highly available, we use the Heartbeat package from the Linux-HA initiative (www.linux-ha.org). We also show an example of configuring an Apache web server to use LDAP authentication.
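As a preview of the Apache piece, here is a minimal sketch assuming the mod_authnz_ldap module of a current Apache httpd; the /private location and the exact module are assumptions, not the article's original setup. It points Apache at the cluster's virtual IP address, which is introduced later in this article, so a failover is transparent to the web server:

<Location /private>
    AuthType Basic
    AuthName "LDAP protected area"
    AuthBasicProvider ldap
    # Use the cluster's virtual IP so a failover is transparent to Apache
    AuthLDAPURL "ldap://192.168.10.51/dc=lcc,dc=ibm,dc=com?uid"
    Require valid-user
</Location>

This sketch requires mod_ldap, mod_authnz_ldap, and mod_auth_basic to be loaded.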
  
Some background on LDAP
  
We use the OpenLDAP software package (www.openldap.org), which is included in several Linux distributions; it ships with Red Hat 7.1. At the time of writing, the downloadable version is 2.0.11.
  
RFCs 2251 and 2253 define the LDAP standard. Other LDAP implementations exist, including the original University of Michigan server and Netscape's directory server. The OpenLDAP Foundation was created to "develop a robust, commercial-grade, fully featured, and open source LDAP suite of applications and development tools" (see www.openldap.org). OpenLDAP version 1.0 was released in August 1998. The current major version is 2.0, released on August 31, 2000, which added LDAPv3 support.
  
Like any good network service, LDAP is designed to run across multiple servers. This article uses two LDAP features: replication and referrals.
  
The referral mechanism lets you split the LDAP namespace across multiple servers and arrange LDAP servers hierarchically. For a given directory namespace, LDAP allows only one master server.
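For example, a departmental server that holds only part of the tree could refer everything else to a higher-level server. A minimal slapd.conf sketch, in which the ou=dept subtree and the host name ldap-root.lcc.ibm.com are hypothetical:

# Departmental server: local naming context plus a referral for everything else
suffix   "ou=dept,dc=lcc,dc=ibm,dc=com"
referral ldap://ldap-root.lcc.ibm.com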
  
Replication is driven by the OpenLDAP replication daemon, slurpd. Slurpd runs periodically, checks the master server's replication log for updates, and pushes them to the slave servers. Read requests can be answered by any server, but updates can only be executed on the master. An update request sent to a slave produces a referral message giving the address of the master; chasing the referral and retrying the update is the client's responsibility. OpenLDAP has no built-in way to distribute queries across the replicas, so you must use an IP sprayer/fanout program such as balance.
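To illustrate (the uid=jdoe entry and the change.ldif file are hypothetical; slave6 and slave5 are the master and slave used later in this article), reads succeed against either server, while a write sent to the slave is answered with a referral to the master:

# Reads can be answered by either server
ldapsearch -x -h slave6 -b "dc=lcc,dc=ibm,dc=com" "(uid=jdoe)"
ldapsearch -x -h slave5 -b "dc=lcc,dc=ibm,dc=com" "(uid=jdoe)"
# A write sent to the slave is refused with a referral naming the master;
# chasing the referral and retrying against slave6 is up to the client
ldapmodify -x -h slave5 -D "cn=Manager,dc=lcc,dc=ibm,dc=com" -w secret -f change.ldif

To spread the read load, a TCP fanout such as balance could listen on a front-end address and forward connections to both replicas, for example "balance 389 slave6 slave5" (again a sketch, not part of the tested configuration).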
   
To achieve reliability, we connect two servers together into a cluster. We could use shared storage and maintain a single copy of the data between the servers, but for simplicity we chose a shared-nothing setup. The LDAP database is usually quite small and updates are infrequent. (Note: if your LDAP data set is large, consider splitting the namespace into smaller parts using referrals.) A shared-nothing setup does require extra work when a failed node is restarted: any new changes must be added to the failed node's database before it rejoins. An example appears later.
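As a rough idea of what that re-synchronization involves, here is a sketch using the standard OpenLDAP tools; it is not the example the article shows later, the file locations are assumptions, and slapd should be stopped or quiesced on the master while the dump is taken so that it is consistent:

# On the surviving master (slave6), dump the directory to LDIF
slapcat -f /etc/openldap/slapd.conf -l /tmp/current.ldif
# Copy the dump to the repaired node and rebuild its database before slapd starts there
scp /tmp/current.ldif slave5:/tmp/current.ldif
ssh slave5 "slapadd -f /etc/openldap/slapd.conf -l /tmp/current.ldif"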
  
Cluster software and configuration
  
First, let's clear up a small point of confusion. Most HA clusters use a heartbeat mechanism: the HA software exchanges heartbeats to monitor the health of the nodes in the group. The Linux-HA (www.linux-ha.org) team provides open source cluster software, and their package is itself called Heartbeat (currently Heartbeat-0.4.9). This leads to some understandable confusion (yes, it sometimes trips me up too). In this article, "Heartbeat" refers to the Linux-HA package and "heartbeat" to the general concept.
  
The Linux-HA project started in 1998 as an outgrowth of the Linux-HA HOWTO by Harald Milz. The project is currently led by Alan Robertson, with many other code contributors. Version 0.4.9 was released in early 2001.
  
Heartbeat monitors node health over communication media, usually serial lines and Ethernet. It is best to have multiple redundant media, so we use both a serial line and an Ethernet connection. Each node runs a daemon called heartbeat; the main daemon forks child processes that read and write each heartbeat medium, plus status processes. When a node failure is detected, Heartbeat runs shell scripts to start (or stop) services on the secondary node. By design, these scripts use the same syntax as the system init scripts (normally kept in /etc/init.d). Default scripts are supplied for file systems, web servers, and virtual IP address failover.
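For instance, the IPaddr resource script shipped with Heartbeat takes the same start/stop/status arguments as an init script and can be exercised by hand. A rough sketch using the cluster address configured later in this article (check the script's own usage message before relying on the exact argument order):

# Manually exercising a Heartbeat resource script (illustration only)
/etc/ha.d/resource.d/IPaddr 192.168.10.51 start
/etc/ha.d/resource.d/IPaddr 192.168.10.51 status
/etc/ha.d/resource.d/IPaddr 192.168.10.51 stop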
  
Suppose we have two matched LDAP servers; several configurations are possible. First, we could run a "cold standby": the primary node holds a virtual IP address and a running server while the secondary node sits idle. If the primary fails, the server instance and the IP address move to the cold node. This is easy to implement, but keeping the data synchronized between the primary and secondary servers can be a problem. To address that, we instead configure the cluster with active servers on both nodes: the primary node runs the master LDAP server and the secondary node runs a slave instance. Updates to the master are pushed to the slave immediately by slurpd.
  
   
After the primary node fails, the secondary node still answers queries, but updates are no longer possible. To allow updates, we restart the secondary's server during failover and promote it to master.
   
This gives us a complete LDAP service, but it adds one gotcha: if updates were made on the secondary server, they must be merged back before the failed primary is allowed to restart. Heartbeat supports an option to keep a failed node from re-acquiring its resources after a failover ("nice failback"), which we could set as a first line of defense; this article demonstrates a manual restart instead. Our sample configuration uses the virtual IP address facility provided by Heartbeat. If you need to support a heavy query load, use an IP sprayer instead of the virtual IP address to distribute queries across the master and slave servers. In that case, update requests arriving at the slave generate referrals; following a referral is not automatic and must be built into the client application. Apart from the replication directives, the configurations of the primary and secondary nodes are identical. The master configuration file specifies the location of the replication log (line 16) and lists the slave servers that receive copies, along with their credential information (lines 34-36).
  
34 replica host=slave5:389
35         binddn="cn=Manager,dc=lcc,dc=ibm,dc=com"
36         bindmethod=simple credentials=secret
  
The slave's configuration file does not specify the master server; instead, it lists the credentials required for replication (line 33).
  
33 updatedn "cn=Manager,dc=lcc,dc=ibm,dc=com"
Heartbeat preparation
  
There are several good examples of basic Heartbeat configuration available (see the references at the end of this article). Below are the relevant parts of our configuration; it is very simple, so there is not much to it. By default, all configuration files live in /etc/ha.d.
  
ha.cf contains the global definitions for the cluster. We used the default values for all of the timeouts.
  
# Timeout intervals
keepalive 2
# keepalive could be set to 1 second here
deadtime 10
initdead 120
# Define our communications
# serial serialportname ...
serial /dev/ttyS0
baud 19200
# Ethernet information
udpport 694
udp eth1
# And finally, our node ids
# node nodename ... -- must match uname -n
node slave5
node slave6
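One directive our file does not set is the failback behavior discussed earlier. Assuming the option the article has in mind is Heartbeat 0.4.9's nice_failback directive (an assumption on our part), keeping a recovered node from taking its resources back automatically would be one more line in ha.cf:

nice_failback on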
haresources is where failover is configured. The interesting content is at the bottom of the file:
  
slave6 192.168.10.51 slapd
This line specifies three things. The primary owner of the resources is the node slave6 (the name must match the output of "uname -n" on the machine you intend to use as the primary node). The service address (virtual IP) is 192.168.10.51 (this example was built on a private lab network, hence the 192.168 address). The service script is called slapd; Heartbeat looks for scripts in /etc/ha.d/resource.d and /etc/init.d.
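With both files in place, a failover can be rehearsed by hand. A sketch (the log location shown is the usual Heartbeat default and may differ on your system):

# Start Heartbeat on both nodes
/etc/init.d/heartbeat start
# On slave6 (the primary), simulate a failure
/etc/init.d/heartbeat stop
# On slave5, watch /var/log/ha-log to see it acquire 192.168.10.51 and run the slapd script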
  
Service script
  
For the simple cold-standby case we could use the standard /etc/init.d/slapd script unchanged. However, we want some special behavior, so we created our own slapd script and placed it in /etc/ha.d/resource.d. Heartbeat uses that directory first in its search path, so we do not have to worry about the /etc/init.d/slapd script being run instead. You should, however, make sure that slapd is no longer started at boot (remove any S*slapd files from your /etc/rc.d tree). First, the slapd startup configuration files are specified on lines 17 and 18.
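On a Red Hat 7.1 system, disabling the packaged boot-time startup might look like this (a sketch; the exact runlevel links depend on how the package was installed):

# Stop slapd from starting at boot so only Heartbeat manages it
chkconfig slapd off
# or remove the runlevel links by hand
rm -f /etc/rc.d/rc?.d/S*slapd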
  
The script follows the standard init.d syntax, so the startup logic lives in the test_start() function beginning at line 21. First, any running slapd instances are stopped. Next, the master server is started with the master configuration file. Our design follows these rules: if both the primary and secondary nodes are up, slapd on the primary node starts as the master server, slapd on the secondary node starts as a slave, and the replication daemon is started; if only one node is up, slapd starts as the master server. The virtual IP address follows the master slapd server. To implement this, the script must know which node it is executing on and, if it is on the primary node, the status of the secondary node. The interesting part is the "start" branch of the script. Because we already designated the primary node in the Heartbeat configuration, we know that when test_start() runs it is running on the Heartbeat primary node (because Heartbeat uses the /etc/init.d script convention, all scripts are called with the argument "start|stop|restart"). Heartbeat sets a number of environment variables when it calls a script; the one we are interested in is:
  
HA_CURHOST=slave6
  
The HA_CURHOST value tells us whether the script is executing on the primary node (slave6) or performing a failover (in which case HA_CURHOST is "slave5").
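Here is a minimal sketch of what that start logic could look like, based purely on the description above. It is not the article's actual script; the configuration file names and binary paths are assumptions, and starting the slave instance on the secondary node during normal operation is left out:

#!/bin/sh
# /etc/ha.d/resource.d/slapd -- illustrative sketch of the start branch only
MASTER_CONF=/etc/openldap/slapd-master.conf   # assumed names; the real script sets
SLAVE_CONF=/etc/openldap/slapd-slave.conf     # its config paths on lines 17 and 18

test_start () {
    # Stop any instances that are already running
    killall slapd 2>/dev/null
    killall slurpd 2>/dev/null
    if [ "$HA_CURHOST" = "slave6" ]; then
        # Normal case: we are on the primary node, so run the master server
        # and push changes to the slave with the replication daemon
        /usr/sbin/slapd -f $MASTER_CONF
        /usr/sbin/slurpd -f $MASTER_CONF
    else
        # Failover: we are on slave5, so promote it to master
        /usr/sbin/slapd -f $MASTER_CONF
    fi
}

case "$1" in
    start) test_start ;;
    stop)  killall slapd 2>/dev/null; killall slurpd 2>/dev/null ;;
    *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac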