Sentinel
First of all, Sentinel is Redis's built-in way of implementing HA. In practice this approach is used less often; keepalived is the more popular choice.
The Redis Sentinel system is used to manage multiple Redis servers (instances); it performs the following three tasks:
Monitoring: Sentinel constantly checks whether your master servers and slave servers are functioning properly.
Notification: When a problem occurs with a monitored Redis server, Sentinel can send notifications to administrators or to other applications through an API.
Automatic failover: When a master server fails, Sentinel starts an automatic failover operation: it promotes one of the failed master's slaves to be the new master and makes the failed master's other slaves replicate the new master. When a client tries to connect to the failed master, the cluster also returns the address of the new master to the client, so the cluster can use the new master in place of the failed server.
Redis Sentinel is a distributed system: you can run multiple Sentinel processes in one deployment. These processes use a gossip protocol to exchange information about whether a master server is offline, and a voting (agreement) protocol to decide whether to perform an automatic failover and which slave to select as the new master.
Although Redis Sentinel ships as a separate executable, redis-sentinel, it is actually just a Redis server running in a special mode: you can start Redis Sentinel by starting an ordinary Redis server with the --sentinel option.
Starting Sentinel
If you have the redis-sentinel program, you can start the Sentinel system with the following command:
redis-sentinel /path/to/sentinel.conf
If you only have the redis-server program, you can start a Redis server running in Sentinel mode with the following command:
redis-server /path/to/sentinel.conf --sentinel
Both of these methods can start a Sentinel instance.
When launching a Sentinel instance you must specify an appropriate configuration file: Sentinel uses this file to save its current state, and restores that state by reloading the file when it restarts.
If no configuration file is specified when Sentinel starts, or if the specified configuration file is not writable, Sentinel refuses to start.
The Redis source contains a file named sentinel.conf, an example Sentinel configuration file with detailed comments.
The minimum configuration required to run a Sentinel is as follows:
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
sentinel monitor resque 192.168.1.3 6380 4
sentinel down-after-milliseconds resque 10000
sentinel failover-timeout resque 180000
sentinel parallel-syncs resque 5
The first line of configuration instructs Sentinel to monitor a master server named mymaster, at IP address 127.0.0.1 and port 6379, and requires the agreement of at least 2 Sentinels to judge this master as failed (automatic failover is not performed as long as that number of agreeing Sentinels is not reached).
Note, however, that no matter how many agreeing Sentinels you require for judging a server as failed, a Sentinel still needs the support of a majority of the Sentinels in the system before it can initiate an automatic failover and reserve a given configuration epoch (a configuration epoch is the version number of a new master configuration).
In other words, Sentinel cannot perform an automatic failover when only a minority of the Sentinel processes are working properly.
The basic format for the other options is as follows:
sentinel <option-name> <master-name> <option-value>
The features of each option are as follows:
The down-after-milliseconds option specifies how many milliseconds may elapse before Sentinel considers a server to be down.
If, within the given number of milliseconds, the server does not reply to a PING sent by Sentinel, or replies with an error, Sentinel marks the server as subjectively down (SDOWN).
A single Sentinel marking a server as subjectively down does not necessarily trigger an automatic failover: only when a sufficient number of Sentinels have marked the server as subjectively down is it marked as objectively down (ODOWN), and only then is an automatic failover performed.
The number of Sentinels required to mark a server as objectively down is set in the configuration of that master server.
The parallel-syncs option specifies the maximum number of slaves that may synchronize with the new master at the same time during a failover; the smaller the number, the longer a failover takes to complete.
If the slaves are configured to serve stale data (see the description of the slave-serve-stale-data option in redis.conf), you may not want all the slaves to send synchronization requests to the new master at the same time. Although most steps of the replication process do not block a slave, a slave cannot process command requests while it is loading the RDB file received from the master: if all the slaves synchronize with the new master at once, they may all become unavailable for a short period.
By setting this value to 1, you can ensure that only one slave at a time is unable to process command requests.
The remainder of this document describes the other options of the Sentinel system; the example configuration file sentinel.conf also documents the relevant options in full.
Subjectively down and objectively down
As mentioned earlier, Redis Sentinel has two different concepts of a server being down:
Subjectively down (SDOWN) refers to an offline judgment made by a single Sentinel instance about a server.
Objectively down (ODOWN) refers to the offline judgment reached when multiple Sentinel instances make an SDOWN judgment about the same server and confirm it with one another via the SENTINEL is-master-down-by-addr command. (One Sentinel can ask another whether it considers a given server down by sending it a SENTINEL is-master-down-by-addr command.)
If a server does not return a valid reply to the Sentinel pinging it within the time specified by the down-after-milliseconds option, that Sentinel marks the server as subjectively down.
A valid reply from the server to the PING command is one of the following three:
Returning +PONG.
Returning a -LOADING error.
Returning a -MASTERDOWN error.
If the server returns anything other than these three replies, or does not reply to PING within the specified time, Sentinel considers the reply invalid.
Note that a server must keep returning invalid replies for the whole down-after-milliseconds period before Sentinel marks it as subjectively down.
For example, if the value of the down-after-milliseconds option is 30000 milliseconds (30 seconds), the server is still considered healthy as long as it returns at least one valid reply every 29 seconds.
Switching from the subjectively down state to the objectively down state does not use a strict quorum algorithm, but a gossip protocol: if, within a given time range, a Sentinel receives reports from a sufficient number of other Sentinels that the master is down, it changes the master's state from subjectively down to objectively down. If the other Sentinels later stop reporting the master as down, the objectively down state is removed.
The objectively down condition applies only to master servers: for any other kind of Redis instance, Sentinel does not negotiate before judging it down, so slaves and other Sentinels never reach the objectively down state.
As soon as a Sentinel finds that a master has entered the objectively down state, that Sentinel may be elected by the other Sentinels to perform an automatic failover of the failed master.
Tasks that each Sentinel needs to perform on a regular basis
Each Sentinel sends a PING command once per second to the masters, slaves, and other Sentinels it knows about.
If the time since an instance last returned a valid reply to PING exceeds the value of the down-after-milliseconds option, the instance is flagged by Sentinel as subjectively down. A valid reply is +PONG, -LOADING, or -MASTERDOWN.
If a master is flagged as subjectively down, then all the Sentinels monitoring that master confirm, once per second, whether the master really has entered the subjectively down state.
If a master is flagged as subjectively down, and a sufficient number of Sentinels (at least the number specified in the configuration file) agree with this judgment within the specified time range, the master is flagged as objectively down.
In general, each Sentinel sends an INFO command every 10 seconds to every master and slave it knows about. When a master is flagged as objectively down, the frequency at which Sentinel sends INFO to all slaves of the offline master is raised from once every 10 seconds to once per second.
When there are no longer enough Sentinels agreeing that the master is offline, its objectively down status is removed. When the master again returns valid replies to Sentinel's PING commands, its subjectively down status is removed.
Automatic discovery of Sentinels and slave servers
One Sentinel can connect to multiple other Sentinels; they check one another's availability and exchange information.
You do not have to configure the addresses of the other running Sentinels on each Sentinel, because Sentinels monitoring the same master discover one another automatically: each Sentinel uses the publish/subscribe feature to send messages to the __sentinel__:hello channel.
Similarly, you do not have to list all the slaves of a master manually, because Sentinel can obtain the information about all slaves by querying the master.
Every two seconds, each Sentinel publishes a message, via the publish/subscribe feature, to the __sentinel__:hello channel of every master and slave it monitors; the message contains the Sentinel's IP address, port number, and run ID.
Each Sentinel also subscribes to the __sentinel__:hello channel of every master and slave it monitors, looking for Sentinels it has not seen before (unknown Sentinels). When a Sentinel discovers a new Sentinel, it adds the new Sentinel to the list holding all the other Sentinels known to monitor the same master.
The message a Sentinel sends also includes the master's complete current configuration (config). If a Sentinel holds a master configuration older than the one another Sentinel sends, it immediately upgrades to the newer configuration.
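As an illustration, you can watch these hello messages yourself with redis-cli (assuming, as in the earlier configuration, a master on the default port 6379):
redis-cli -p 6379 SUBSCRIBE __sentinel__:hello
Every Sentinel monitoring that master should appear in the channel's output roughly once every two seconds.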
Before adding a new Sentinel to this list, Sentinel checks whether the list already contains a Sentinel with the same run ID or the same address (IP address and port number) as the one being added. If so, Sentinel first removes the existing entries with the same run ID or address, and then adds the new Sentinel.
Sentinel API
By default, Sentinel uses TCP port 26379 (the normal Redis server uses 6379).
Sentinel accepts command requests in the Redis protocol format, so you can use redis-cli or any other Redis client to communicate with Sentinel.
There are two ways to communicate with Sentinel:
The first approach is to send commands directly, querying the current state of the monitored Redis servers, the information Sentinel knows about other Sentinels, and so on.
The other approach is to use the publish/subscribe feature to receive notifications from Sentinel: Sentinel publishes the appropriate message when a failover is performed, or when a monitored server is judged subjectively or objectively down.
Sentinel commands
The following are the commands that Sentinel accepts:
PING: Returns PONG.
SENTINEL masters: Lists all the monitored master servers and their current state.
SENTINEL slaves <master name>: Lists all the slaves of the given master, and their current state.
SENTINEL get-master-addr-by-name <master name>: Returns the IP address and port number of the master with the given name. If a failover is in progress for this master, or a failover has already completed, the command returns the IP address and port number of the new master.
SENTINEL reset <pattern>: Resets all masters whose name matches the given pattern. The pattern argument is a glob-style pattern. The reset operation clears all of the master's current state, including any failover in progress, and removes all slaves and Sentinels currently discovered and associated with the master.
SENTINEL failover <master name>: Forces an automatic failover of the master without consulting the other Sentinels (though the Sentinel initiating the failover sends the new configuration to the other Sentinels, and they update accordingly).
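For example, a minimal sketch of querying Sentinel from the command line (assuming a Sentinel listening on the default port 26379 and the mymaster name used in the earlier configuration):
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"
The two lines of output are the master's IP address and port as Sentinel currently sees them.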
Publish and subscribe information
A client can treat Sentinel as a Redis server that only provides subscription functionality: you cannot use the PUBLISH command to send messages to this server, but you can use the SUBSCRIBE or PSUBSCRIBE commands to subscribe to a given channel and receive the corresponding event notifications.
A channel receives the events with the same name as the channel. For example, the channel named +sdown receives the events for every instance entering the subjectively down (SDOWN) state.
You can receive all event messages by executing PSUBSCRIBE *.
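For instance, the following command subscribes to every event (again assuming a Sentinel on port 26379); the sample output shown is purely illustrative:
redis-cli -p 26379 PSUBSCRIBE '*'
1) "pmessage"
2) "*"
3) "+sdown"
4) "master mymaster 127.0.0.1 6379"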
The following lists the channels and message formats a client can receive through subscription: the first word is the name of the channel/event, and the rest describes the format of the data.
Note that where the format contains instance details, the message identifies the target instance with the following:
<instance-type> <name> <ip> <port> @ <master-name> <master-ip> <master-port>
The part after the @ character identifies the master server; it is optional and appears only when the instance identified before the @ is not itself a master.
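For example, the instance details in a +sdown event for a slave of mymaster might look as follows (illustrative addresses; for a slave, the <name> field is its ip:port pair, and the part after @ names its master):
slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379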
+reset-master: The primary server has been reset.
+slave: A new slave server has been identified and associated with Sentinel.
+failover-state-reconf-slaves: Failover status switched to Reconf-slaves state.
+failover-detected: Another Sentinel started a failover, or a slave was promoted to master.
+slave-reconf-sent: The leader Sentinel sent a SLAVEOF command to the instance to configure it for the new master.
+slave-reconf-inprog: The instance is setting itself up as a slave of the specified master, but the synchronization has not yet completed.
+slave-reconf-done: The slave has successfully completed synchronization with the new master.
-dup-sentinel: One or more Sentinels monitoring the given master have been removed as duplicates (this happens when a Sentinel instance restarts).
+sentinel: A new Sentinel that monitors a given primary server has been identified and added.
+sdown: The given instance is now in the subjectively down state.
-sdown: The given instance is no longer in the subjectively down state.
+odown: The given instance is now in the objectively down state.
-odown: The given instance is no longer in the objectively down state.
+new-epoch: The current epoch has been updated.
+try-failover: A new failover is in progress, waiting to be elected by the majority of Sentinels.
+elected-leader: Won the election for the given epoch, and can now perform the failover operation.
+failover-state-select-slave: The failover operation is now in the select-slave state: Sentinel is looking for a slave that can be promoted to master.
-no-good-slave: Sentinel could not find a suitable slave to promote. Sentinel will retry after some time, or may simply abandon the failover.
+selected-slave: Sentinel successfully found a suitable slave to promote.
+failover-state-send-slaveof-noone: Sentinel is promoting the specified slave to master and waiting for the promotion to complete.
+failover-end-for-timeout: The failover was aborted because of a timeout, but all slaves will eventually be configured to replicate the new master anyway.
+failover-end: The failover completed successfully. All slaves have started replicating the new master.
+switch-master: The configuration has changed; the master's IP address and port have changed. This is the event most external users care about.
+tilt: Enter TILT mode.
-tilt: Exit TILT mode.
Failover
A single failover operation consists of the following steps:
Discover that the master server has entered the objectively down state.
Increment the current epoch (see Raft leader election for details) and try to be elected leader for this epoch.
If the election fails, retry it after twice the configured failover timeout. If it succeeds, perform the following steps.
Select a slave server and promote it to master.
Send the SLAVEOF NO ONE command to the selected slave to turn it into a master.
Propagate the updated configuration to all other Sentinels via the publish/subscribe feature; the other Sentinels update their own configuration.
Send a SLAVEOF command to the other slaves of the offline master so that they replicate the new master.
The leader Sentinel terminates the failover operation once all the slaves have started replicating the new master.
Whenever a Redis instance is reconfigured, whether it is promoted to master, demoted to slave, or pointed at a different master, Sentinel sends a CONFIG REWRITE command to the reconfigured instance so that the new configuration is persisted on disk.
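As a hedged sketch of exercising this flow by hand, you can force a failover with the SENTINEL failover command described earlier and then watch the master address change (assuming a Sentinel on port 26379 and the mymaster name from the earlier configuration):
redis-cli -p 26379 SENTINEL failover mymaster
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
After the failover completes, the second command should return the address of the promoted slave.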
Sentinel uses the following rules to select the new master server:
Among the slaves of the failed master, slaves that are flagged as down, are disconnected, or whose last valid reply to PING is more than five seconds old are eliminated.
Among the slaves of the failed master, slaves whose link to the failed master has been down for longer than ten times the down-after value are eliminated.
After these two rounds of elimination, the remaining slave with the largest replication offset is selected as the new master; if the replication offset is unavailable, or all slaves have the same offset, the slave with the smallest run ID becomes the new master.
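As an aside, the replication offset this rule relies on can be inspected on any slave with the INFO command (an illustrative invocation, assuming a slave on port 6380):
redis-cli -p 6380 INFO replication | grep -E 'role|repl_offset'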
Consistency properties of Sentinel automatic failover
Sentinel automatic failover uses the Raft algorithm to elect a leader Sentinel, ensuring that there is only one leader in a given epoch.
This means that in the same epoch, no two Sentinels are elected leader at the same time, and each Sentinel votes for only one leader per epoch.
A configuration with a higher epoch always takes precedence over one with a lower epoch, so every Sentinel proactively replaces its own configuration with a newer one.
In a nutshell, we can think of the Sentinel configuration as a state with a version number. The state is propagated to all the other Sentinels in a last-write-wins fashion (that is, the newest configuration always wins).
For example, during a network partition, a Sentinel may hold an outdated configuration; when that Sentinel receives a newer version from another Sentinel, it updates its own configuration.
If you want to maintain consistency in the presence of network partitions, you should use the min-slaves-to-write option so that the master stops accepting writes when it is connected to fewer than the given number of instances, and, at the same time, run a Redis Sentinel process on every machine that runs a Redis master or slave.
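A minimal sketch of such a fragment in the master's redis.conf (the values and the companion min-slaves-max-lag setting are illustrative only):
# Stop accepting writes when fewer than 1 slave is connected...
min-slaves-to-write 1
# ...or when the connected slaves lag more than 10 seconds behind.
min-slaves-max-lag 10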
The persistence of Sentinel state
The status of Sentinel is persisted in the Sentinel configuration file.
Each time Sentinel receives a new configuration, or the leader Sentinel creates a new configuration for a master, that configuration is saved to disk together with its configuration epoch.
This means that stopping and restarting the Sentinel process is safe.
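For illustration, after a Sentinel has discovered its peers and slaves, the rewritten sentinel.conf typically carries persisted state lines of roughly this shape (the addresses and the run ID below are made up):
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel config-epoch mymaster 0
sentinel known-slave mymaster 127.0.0.1 6380
sentinel known-sentinel mymaster 127.0.0.1 26380 0123456789abcdef0123456789abcdef01234567
sentinel current-epoch 0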
Sentinel re-configures instances in a non-failover scenario
Even when no automatic failover is in progress, Sentinel tries to apply the current configuration to the monitored instances. In particular:
A slave that, according to the current configuration, has been declared master will replace the original master, becoming the new master and the replication target of all the original master's slaves.
Slaves connected to the wrong master are reconfigured so that they replicate the correct master.
However, before reconfiguring an instance in these cases, Sentinel waits long enough to make sure it has received any configuration updates sent by other Sentinels, to avoid needlessly reconfiguring an instance just because it itself holds an outdated configuration.
TILT mode
Redis Sentinel relies heavily on the computer's clock: for example, to decide whether an instance is available, Sentinel records the time of the instance's last valid reply to PING and compares it with the current time, thereby knowing how long the instance has gone without any successful communication with Sentinel.
However, if the computer's clock misbehaves, or the computer is very busy, or the process is blocked for some reason, Sentinel may start failing as well.
TILT mode is a special protection mode: Sentinel enters TILT mode when it detects that something is wrong with the system.
Because Sentinel's timer interrupt runs 10 times per second by default, we expect roughly 100 milliseconds between two executions of the timer interrupt. Sentinel checks this by recording the time of the previous timer interrupt and comparing it with the time of the current one:
If the difference between the two call times is negative, or very large (more than 2 seconds), Sentinel enters TILT mode.
If Sentinel is already in TILT mode, its exit from TILT mode is postponed.
When Sentinel enters TILT mode, it will still continue to monitor all targets, but:
It no longer performs any operations, such as failover.
When an instance sends the SENTINEL is-master-down-by-addr command to this Sentinel, the Sentinel replies negatively, because its offline judgments are no longer accurate.
If everything appears normal for 30 seconds, Sentinel exits TILT mode.
keepalived
Keepalived can also provide HA for Redis: when the master goes down, the slave takes over the service and turns replication off; when the master recovers, it synchronizes the data back from the slave, and once synchronization completes, replication is turned off again, with master and slave each resuming their original roles.
keepalived Working principle
Keepalived provides VRRP and health-check functionality. It can be used simply to provide a floating VIP between two machines (VRRP virtual routing), which by itself implements a dual-machine hot-standby high-availability setup.
Keepalived is software that behaves like a layer 3, 4 & 5 switch, that is, what we usually call switching at the 3rd, 4th, and 5th layers. Keepalived's role is to detect the state of servers. Layers 3, 4 & 5 correspond to the IP layer, the TCP layer, and the application layer of the TCP/IP stack; keepalived works at each layer as follows:
Layer 3: When working in layer 3 mode, keepalived periodically sends ICMP packets (what the familiar ping program sends) to the servers in the server farm. If a server's IP address does not respond, keepalived reports the server as failed and removes it from the server farm; a typical example is a server being shut down abnormally. Layer 3 takes the reachability of the server's IP address as the standard for whether the server is working properly. This is the mode used in this article.
Layer 4: If you understand the layer 3 mode, layer 4 is easy. Layer 4 decides whether a server is working properly mainly from the state of a TCP port. For example, if a web server's service port is 80 and keepalived detects that port 80 is not up, it removes the server from the server farm.
Layer 5: Layer 5 works at the application layer; it is more complex than layers 3 and 4 and uses more network bandwidth. Keepalived checks, according to the user's settings, whether the server's program is running correctly; if the check does not match the user's settings, keepalived removes the server from the server farm.
The VIP is a virtual IP attached to the host's network card, that is, virtualized onto the host's NIC; this IP still occupies an address in the subnet.
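A minimal sketch of the VRRP side of a keepalived.conf that floats such a VIP between two hosts (the interface name, router ID, and addresses are assumptions for illustration):
vrrp_instance VI_1 {
    state MASTER              # the backup host would use BACKUP
    interface eth0            # NIC the VIP is attached to (assumed name)
    virtual_router_id 51      # must match on master and backup
    priority 100              # the backup host uses a lower value, e.g. 90
    advert_int 1              # VRRP advertisement interval, in seconds
    virtual_ipaddress {
        192.168.1.100         # the floating VIP
    }
}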
keepalived Effect
Keepalived is used here mainly for health checking of the real servers and for failover between the load-balancing host and the backup host.
keepalived Introduction
Keepalived detects the state of each web server: if a web server freezes or fails, keepalived detects it and removes the failed server from the system; when the server is working properly again, keepalived automatically adds it back to the server farm. All of this is done automatically, without manual intervention; the only manual task is repairing the failed server.
Precautions
Persistence must be enabled on both master and slave; otherwise, the side without persistence will wipe out the other side's data during an automatic switchover, causing complete data loss.
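For example, a hedged minimal redis.conf fragment enabling persistence on both nodes (illustrative values):
# Enable append-only-file persistence.
appendonly yes
# Also snapshot to disk after 900 seconds if at least 1 key changed.
save 900 1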
Installing keepalived
First download keepalived; the latest version at the time of writing is 1.3.5:
[root@singlenode software]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
--2017-05-02 16:30:23--  http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
Resolving www.keepalived.org ... 37.59.63.157, 2001:41d0:8:7a9d::1
Connecting to www.keepalived.org|37.59.63.157|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 683183 (667K) [application/x-gzip]
Saving to: "keepalived-1.3.5.tar.gz"
100%[==========================================================================================>] 683,183     21.6K/s   in 27s
2017-05-02 16:30:58 (24.7 KB/s) - "keepalived-1.3.5.tar.gz" saved [683183/683183]
[root@singlenode software]# cp -r keepalived-1.3.5 /usr/local/
[root@singlenode keepalived-1.3.5]# ./configure --prefix=/data/keepalived
configure: error:
!!! OpenSSL is not properly installed on your system. !!!
!!! Can not include OpenSSL headers files. !!!
keepalived depends on OpenSSL, so install the openssl-devel package:
[root@singlenode keepalived-1.3.5]# yum install openssl-devel
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: ftp.sjtu.edu.cn
* extras: mirrors.tuna.tsinghua.edu.cn
* updates: mirrors.tuna.tsinghua.edu.cn
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package openssl-devel.x86_64 0:1.0.1e-57.el6 will be installed
--> Processing Dependency: openssl = 1.0.1e-57.el6 for package: openssl-devel-1.0.1e-57.el6.x86_64
--> Processing Dependency: zlib-devel for package: openssl-devel-1.0.1e-57.el6.x86_64
--> Processing Dependency: krb5-devel for package: openssl-devel-1.0.1e-57.el6.x86_64
--> Running transaction check
---> Package krb5-devel.x86_64 0:1.10.3-65.el6 will be installed
--> Processing Dependency: libkadm5(x86-64) = 1.10.3-65.el6 for package: krb5-devel-1.10.3-65.el6.x86_64
--> Processing Dependency: krb5-libs = 1.10.3-65.el6 for package: krb5-devel-1.10.3-65.el6.x86_64
--> Processing Dependency: libselinux-devel for package: krb5-devel-1.10.3-65.el6.x86_64
--> Processing Dependency: libcom_err-devel for package: krb5-devel-1.10.3-65.el6.x86_64
--> Processing Dependency: keyutils-libs-devel for package: krb5-devel-1.10.3-65.el6.x86_64
---> Package openssl.x86_64 0:1.0.1e-15.el6 will be updated
---> Package openssl.x86_64 0:1.0.1e-57.el6 will be an update
---> Package zlib-devel.x86_64 0:1.2.3-29.el6 will be installed
--> Running transaction check
---> Package keyutils-libs-devel.x86_64 0:1.4-5.el6 will be installed
--> Processing Dependency: keyutils-libs = 1.4-5.el6 for package: keyutils-libs-devel-1.4-5.el6.x86_64
---> Package krb5-libs.x86_64 0:1.10.3-10.el6_4.6 will be updated
---> Package krb5-libs.x86_64 0:1.10.3-65.el6 will be an update
---> Package libcom_err-devel.x86_64 0:1.41.12-23.el6 will be installed
--> Processing Dependency: libcom_err = 1.41.12-23.el6 for package: libcom_err-devel-1.41.12-23.el6.x86_64
---> Package libkadm5.x86_64 0:1.10.3-65.el6 will be installed
---> Package libselinux-devel.x86_64 0:2.0.94-7.el6 will be installed
--> Processing Dependency: libselinux = 2.0.94-7.el6 for package: libselinux-devel-2.0.94-7.el6.x86_64
--> Processing Dependency: libsepol-devel >= 2.0.32-1 for package: libselinux-devel-2.0.94-7.el6.x86_64
--> Processing Dependency: pkgconfig(libsepol) for package: libselinux-devel-2.0.94-7.el6.x86_64
--> Running transaction check
---> Package keyutils-libs.x86_64 0:1.4-4.el6 will be updated
---> Package keyutils-libs.x86_64 0:1.4-5.el6 will be an update
---> Package libcom_err.x86_64 0:1.41.12-18.el6 will be updated
--> Processing Dependency: libcom_err = 1.41.12-18.el6 for package: e2fsprogs-libs-1.41.12-18.el6.x86_64
--> Processing Dependency: libcom_err = 1.41.12-18.el6 for package: libss-1.41.12-18.el6.x86_64
--> Processing Dependency: libcom_err = 1.41.12-18.el6 for package: e2fsprogs-1.41.12-18.el6.x86_64
---> Package libcom_err.x86_64 0:1.41.12-23.el6 will be an update
---> Package libselinux.x86_64 0:2.0.94-5.3.el6_4.1 will be updated
--> Processing Dependency: libselinux = 2.0.94-5.3.el6_4.1 for package: libselinux-python-2.0.94-5.3.el6_4.1.x86_64
--> Processing Dependency: libselinux = 2.0.94-5.3.el6_4.1 for package: libselinux-utils-2.0.94-5.3.el6_4.1.x86_64
---> Package libselinux.x86_64 0:2.0.94-7.el6 will be an update
---> Package libsepol-devel.x86_64 0:2.0.41-4.el6 will be installed
--> Running transaction check
---> Package e2fsprogs.x86_64 0:1.41.12-18.el6 will be updated
---> Package e2fsprogs.x86_64 0: