Building highly available clusters with Corosync + Pacemaker
I. Overview
1.1 Introduction to AIS and OpenAIS
AIS (the Application Interface Specification) is a set of open specifications that define application programming interfaces (APIs). As middleware, these interfaces give application services an open and highly portable programming interface, which is urgently needed for building high-availability applications. The Service Availability Forum (SA Forum) is an open forum that develops and publishes these specifications.
This series on HA (high-availability) clusters is divided into the following parts:
- HA cluster basic concepts
- Implementing HA with heartbeat
- corosync in detail
- pacemaker in detail
- drbd in detail
- CRM-based resource management with heartbeat
- corosync+pacemaker+drbd+mysql: a highly available (HA) MySQL cluster
- heartbeat+mysql+nfs: a highly available (HA) MySQL cluster
This article covers the first part.
In "high concurrency, massive data, distributed, nosql, cloud computing ...... I believe many of my friends have heard of and even often talked about "clusters and Server Load balancer". But not all of them have the opportunity to really come into use these technologies, not everyone really understands these technical terms. The following is a brief explanation.
Cluster
A cluster is a loosely coupled multi-processor system composed of a group of independent computers.
In Oracle, a cluster is actually a group of tables that share the same data blocks. Combining tables that are frequently used together into a cluster can improve processing efficiency; a table stored in a cluster is called a cluster table. The creation order is: cluster → cluster table → cluster index → data.
The cluster creation syntax is:
CREATE CLUSTER cluster_name
    (column datatype [, column datatype] ...)
    [PCTUSED integer] [PCTFREE integer] ...
…to keep the application in sync, we also need another technology: a proxy. There is only one proxy server; users who want to reach the application only need to access the proxy, and the job of distributing users across the application instances on the individual servers is handled by the proxy. With that foundation in place, let me formally walk you through deploying the cluster and the proxy. Taking the picture above as an example, suppose I have a shop application…
Experiment scenario:
Xi'an Lingyun System High-Tech Co., Ltd. uses IIS to host a website at the domain nlb.angeldevil.com. As business grew, the site became slower and slower and failed frequently, which caused the company a lot of inconvenience. The company therefore decided to serve clients from two web servers, using Network Load Balancing (NLB) technology. The IP addresses of the two servers are 192.168.1.10 and 192.168.1.20, and the cluster…
We've got the environment ready and the roles are all installed; now we can start creating and configuring the cluster.
First, we open Failover Cluster Manager from the administrative tools and connect to both nodes, C1 and C2. Under the common tasks you will see Validate Configuration, Create Cluster, and Manage an Existing Cluster; start by validating…
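The same validate-then-create flow the wizard performs can be scripted with the FailoverClusters PowerShell module. This is a sketch only: the node names C1 and C2 come from the text above, while the cluster name and static address are assumptions, and the commands require a Windows Server environment with the Failover Clustering feature installed.

```powershell
# Sketch, assuming nodes C1/C2 from the scenario above.
# Cluster1 and 192.168.1.100 are hypothetical placeholders.
Test-Cluster -Node C1, C2                                  # "Validate Configuration"
New-Cluster -Name Cluster1 -Node C1, C2 -StaticAddress 192.168.1.100
Get-Cluster | Format-List Name, Domain                     # confirm the new cluster
```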
Load Balancing for Openfire clusters using Nginx
Some time ago we completed an Openfire cluster deployment. To actually put it into use, one crucial task remains: load distribution. The load-balancing tool chosen is Nginx (for a simple reason: it is open source and free of charge).
1. Install nginx (RedHat Enterprise Edition 6.5, 64-bit environment)
Download the latest release from the nginx official website; here we use nginx-1.9.3.tar…
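Once nginx is built, the load distribution itself is just an upstream block. The fragment below is a minimal sketch, not the article's actual configuration: the backend addresses are invented, and it assumes the Openfire nodes expose their HTTP-bind (BOSH) service on the conventional port 7070.

```nginx
# Sketch: round-robin two Openfire nodes behind nginx.
# Addresses and the 7070 HTTP-bind port are assumptions.
upstream openfire_cluster {
    server 192.168.8.10:7070;   # Openfire node 1 (hypothetical)
    server 192.168.8.11:7070;   # Openfire node 2 (hypothetical)
}

server {
    listen 80;
    location /http-bind {
        proxy_pass http://openfire_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```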
…initiator name, as follows:
8. Add the corresponding TestServer01 and TestServer02 IQNs on the access server, then click "Next".
9. CHAP authentication is not required here, because we use Kerberos authentication; click "Next".
10. Confirm the corresponding configuration information and click "Create".
11. The configuration is complete.
12. At this point we can see the virtual disk in a not-connected state.
13. We add this iSCSI virtual disk on the TestServer01 and TestServer02 servers respectively.
14. Here we need to…
…close immediately: any in-progress transactions are aborted. This shutdown mode is not recommended; in some cases it can cause database corruption that requires manual recovery.
gpstop -M smart => smart shutdown; if active connections exist, the command fails with a warning. This is the default shutdown mode.
gpstop --host hostname => stop the segment instances on the given host. Cannot be used together with -M, -R, -u, or -y.
Cluster recovery
Command / parameter / function:
gprecoverseg -a => fast recovery
gprecoverseg -i => …
Three methods for session synchronization in web clusters
After building a web cluster, the first thing to consider is session synchronization: once a load balancer is in place, requests for the same page from the same client may be distributed to different servers, and if sessions are not synchronized, a logged-in user will appear logged in on one server and logged out on another at the same time. This article therefore presents three different…
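The truncated excerpt does not list its three methods, but one widely used alternative to replicating sessions is to avoid the problem with "sticky" routing: pin each client to one backend so its session never leaves that server. A minimal sketch using nginx's ip_hash directive, reusing the two server addresses from the NLB scenario above purely as an illustration:

```nginx
# Sketch: session stickiness via ip_hash (one common approach,
# not necessarily one of the article's three methods).
upstream web_cluster {
    ip_hash;                  # hash the client IP -> same backend every time
    server 192.168.1.10:80;
    server 192.168.1.20:80;
}
```

The trade-off is that stickiness defeats fine-grained balancing and loses sessions if a backend goes down, which is why shared session stores are often preferred.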
…nodes to use the swarm function.
(5) You can also view the machines in the cluster from the manager node: docker node ls
4. Docker service creation
Service: a long-running Docker container that can be deployed on any worker node and can be connected to and consumed by other containers, whether in remote systems or elsewhere in the swarm.
Task: a container instance in which the service runs.
Replicas: the number of identical copies of a service running across the worker nodes.
Create a service:
docker service create --name web_server htt…
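Putting the steps above together, the whole flow looks roughly like the command sequence below. This is a sketch, not output from the article's environment: the addresses are invented, the image name (httpd) and replica counts are assumptions, the join token is a placeholder, and every command needs a running Docker engine on the node where it is executed.

```
# On the manager node (192.168.0.10 is hypothetical):
docker swarm init --advertise-addr 192.168.0.10

# On each worker, using the token printed by "swarm init":
docker swarm join --token <worker-token> 192.168.0.10:2377

# Back on the manager: list the machines in the cluster (step 5 above)
docker node ls

# Create a replicated service; tasks are scheduled across worker nodes
docker service create --name web_server --replicas 3 -p 80:80 httpd

docker service ls                 # inspect the service
docker service scale web_server=5 # change the number of replicas
```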
Problem analysis: this problem was encountered at the earliest deployment. It turned out to be a 2012 bug for which Microsoft released a hotfix. I assumed that after updating to the latest patches the hotfix would be applied automatically and the problem solved, but the problem remained and could not be resolved.
Configuring a ZooKeeper cluster (Windows environment)
1. Extract three ZooKeeper directories:
D:\zookeeper\zookeeper-1
D:\zookeeper\zookeeper-2
D:\zookeeper\zookeeper-3
2. Create data and log directories under each of the three directories, and create a new myid file under each data directory; the myid file contents are 1, 2 and 3 respectively.
3. Copy the zoo_sample.cfg file in the conf directory to zoo.cfg.
4. Modify the zoo.cfg file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that init…
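Steps 1-4 above can be sketched as a script. This is an illustration under assumptions: it uses a Unix-style temporary directory instead of D:\zookeeper, writes zoo.cfg at each node's root rather than under conf\, and fills in conventional values (client ports 2181-2183, peer ports 2888/3888 offset per node so three nodes can share one host) that the truncated excerpt does not show.

```shell
# Sketch: generate the three-node layout described above.
BASE=$(mktemp -d)
for i in 1 2 3; do
  mkdir -p "$BASE/zookeeper-$i/data" "$BASE/zookeeper-$i/log"
  # each node's myid must match its server.N line below
  echo "$i" > "$BASE/zookeeper-$i/data/myid"
  cat > "$BASE/zookeeper-$i/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$BASE/zookeeper-$i/data
dataLogDir=$BASE/zookeeper-$i/log
clientPort=218$i
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF
done
cat "$BASE/zookeeper-2/data/myid"
```

The key invariant is that every node sees the same server.N membership list while its own myid selects which entry it is.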
…mapreduce.input.fileinputformat.split.minsize.per.node
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
Logging initialized using configuration in jar:file:/home/hadoop/app/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings
Recently I have been studying Redis and deployed a Redis cluster with master-slave replication and multiple masters and slaves. However, PHP or Node is generally a single point: how can I connect to the cluster from PHP and Node when some transactions are involved?
… 244.43
Core i7 930: 1764.75 / 428.00
Core i7 860: 2004.31 / 381.97
Core i7 3930K: 2529.73 / 746.01
Core i7 4820K (#1): 2671.15 / 892.04
Core i7 4820K (#2): 2684.05 / 895.54
Core i7 3930K OC: 3112.94 / 926.92
The Raspberry Pi is, by design, not suited to high-performance environments, but in other scenarios an ARM-based server solution still looks promising.
Forwarding
1. Raspberry Pi 2 B Cluster
On Weibo, the original poster w…