Availability of access. How to achieve it:
  Data replication:
    Log based;
    Master-slave: MySQL, MongoDB
    Replica set: MongoDB
  Double write:
    A multi-master equivalent structure at the storage layer; more flexible, but the cost at the data-module layer is high.
Data backup:
  Cold backup:
    Regularly copying data to a storage medium; the traditional means of data protection.
Options array (alternative)
$m = new MongoClient("mongodb://${username}:${password}@localhost", array("db" => "MyDatabase"));
Sharding (cluster)
$m = new MongoClient("mongodb://mongos1.example.com:27017,mongos2.example.com:27017");
Replica set
Use the "replicaSet" option to specify the replica set name; the same name identifies one cluster. Separate multiple servers with commas.
Using multiple servers as the seed list (preferred)
$m = new MongoClient("
Certificate: once a replica has collected the pre-prepare plus 2f matching prepare messages for sequence number n, view v, and request m (the condition described in italics above), we say the request is prepared at that replica, and the replica holds a so-called prepared certificate. At that point this stage of the work is done: the nodes of the network have reached agreement on the ordering of the request within the view.
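The prepared predicate described above can be sketched as follows. This is a minimal model of my own; the message structure and function names are invented for illustration, not taken from any PBFT implementation.

```python
from collections import namedtuple

# A simplified protocol message: kind is "pre-prepare" or "prepare".
Msg = namedtuple("Msg", "kind view seq digest replica")

def is_prepared(log, view, seq, digest, f):
    """True if the log holds a prepared certificate for (view, seq):
    the pre-prepare plus 2f matching prepares from distinct replicas."""
    has_preprepare = any(
        m.kind == "pre-prepare" and m.view == view
        and m.seq == seq and m.digest == digest
        for m in log)
    prepares = {m.replica for m in log
                if m.kind == "prepare" and m.view == view
                and m.seq == seq and m.digest == digest}
    return has_preprepare and len(prepares) >= 2 * f
```

With f = 1 faulty replica tolerated, a replica needs the pre-prepare and two matching prepares before the request counts as prepared.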
Grant the slave account access to the master server's log files. Enter the master's MySQL interface with the command: # mysql -u root -p111111 (my MySQL account is root, the password is 111111). At the MySQL prompt, enter the following line: GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY '111111'; 3. View the master's bin log information (record the two values it returns, and then do not do anything further on the master, so the recorded position stays valid).
decrement nextIndex to bypass all of the conflicting entries in that term; one AppendEntries RPC would be required for each term with conflicting entries, rather than one RPC per entry. In practice, we doubt this optimization is necessary, since failures happen infrequently and it is unlikely that there will be many inconsistent entries. With this mechanism, a leader does not need to take any special actions to restore log consistency when it comes to power. It just begins normal operation,
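The optimization described above can be sketched as a small helper. This is my own illustration under assumed inputs (a leader log represented as a list of term numbers, plus the conflicting term and index reported by the follower); it is not code from the Raft paper.

```python
def next_index_after_conflict(leader_log, conflict_term, conflict_index):
    """Skip back over a whole conflicting term in one step.

    leader_log: list of term numbers, one per entry, 1-based indexing.
    Returns the nextIndex to retry: the first index the leader holds for
    conflict_term, or conflict_index if the leader has no entry of that term.
    """
    for i, term in enumerate(leader_log, start=1):
        if term == conflict_term:
            return i          # retry from the first entry of that term
    return conflict_index     # leader never had that term: fall back
```

One RPC per conflicting term replaces one RPC per conflicting entry.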
Managing collections via the Collections API
Create:
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=4
name: collection name
numShards: the number of shards
replicationFactor: the number of replicas
maxShardsPerNode: defaults to 1. Note the relationship among three values: numShards, replicationFactor, and the number of live Solr nodes. A normal SolrCloud cluster does not allow deploying multiple replicas of the same shard on one node.
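The constraint among those three values can be expressed as a pre-flight check. This is a hypothetical helper of my own (the function name and the exact feasibility rule as stated here are my assumptions, not Solr API code): the total replica count must fit within what the live nodes can host.

```python
def can_create_collection(num_shards, replication_factor,
                          live_nodes, max_shards_per_node=1):
    """Rough feasibility check for a SolrCloud CREATE request:
    numShards * replicationFactor replicas must be placeable on
    live_nodes nodes holding at most max_shards_per_node each."""
    total_replicas = num_shards * replication_factor
    return total_replicas <= live_nodes * max_shards_per_node
```

For the CREATE example above (numShards=3, replicationFactor=4), twelve replicas are needed, so twelve nodes at the default maxShardsPerNode=1.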
A. Write-one: trades consistency for availability, R/W latency, and scalability. This is the most distinctive strategy: it guarantees only the weakest consistency and obtains the highest availability. It updates only a single arbitrary replica and then relies entirely on anti-entropy to disseminate the update.
B. To reduce propagation latency, the update is additionally sent asynchronously to all copies, which improves propagation efficiency; the cost is that R/W latency and scalability are slightly reduced.
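Strategy A can be modeled with a toy sketch. This is my own illustration (function names invented, last-writer-wins timestamps assumed as the reconciliation rule): a write lands on any single replica, and anti-entropy later spreads the newest version to the rest.

```python
def write_one(replicas, idx, value, ts):
    """Apply the update to just one replica; replicas hold (value, ts)."""
    replicas[idx] = (value, ts)

def anti_entropy(replicas):
    """Reconciliation pass: every replica adopts the newest version,
    using the timestamp as a last-writer-wins tiebreaker."""
    newest = max(replicas, key=lambda v: v[1])
    for i in range(len(replicas)):
        replicas[i] = newest
```

Between the write and the anti-entropy pass, readers of other replicas see stale data, which is exactly the consistency given up for availability.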
In GGSCI, use these commands:
GGSCI> DBLOGIN USERID ...
GGSCI> ADD TRANDATA ...
How to determine whether supplemental logging is enabled:
1> Database level:
SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, force_logging FROM v$database;
2> Table level:
SELECT * FROM dba_log_groups WHERE owner='...' AND table_name='...';
If data is returned, supplemental logging is already enabled; otherwise it is not. For a particular table, you can find which columns are part of the supplemental log group.
The FileStatus class encapsulates filesystem metadata for files and directories, including file length, block size, replication, modification time, ownership, and permission information. An important feature of any file system is the directory structure and the ability to browse and retrieve information about the stored files and directories.
Because replica shards can be used when searching, they improve the concurrency of data search. When an index is created you can set the number of shards and the number of replicas; by default each index gets 5 shards and 1 replica, meaning the index is stored across 5 logical storage units, each with one replica node for disaster recovery. Note that the shard count can only be set at index creation, because the number of shards determines which shard a document is stored on.
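The routing described above can be sketched as follows. Elasticsearch derives the target primary shard from a hash of the routing value (the document id by default), which is why the shard count cannot change after creation; this sketch substitutes Python's built-in hash for the murmur3 hash the real system uses.

```python
def shard_for(routing_value, num_primary_shards, hash_fn=hash):
    """Pick the primary shard for a document: hash the routing value
    and take it modulo the (fixed) number of primary shards."""
    # Python's hash() stands in for murmur3 here, for illustration only.
    return hash_fn(routing_value) % num_primary_shards
```

If num_primary_shards changed after indexing, the same document id would hash to a different shard and lookups would miss, hence the restriction.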
wbclientconn.c INFO: start connecting to host=localhost port=5432 user=hs dbname=replication replication=true application_name=walbouncer
FATAL: no pg_hba.conf entry for replication connection from host "::1", user "hs"
Once these messages are displayed, the system is in a running state and the transaction log flows from master to slave. Now it is time to test:
$ psql -h localhost -p 5444 b
FATAL: database "b" does not exist
DETAIL: the database subdirectory
based on the log file /home/logs/check_mysql_slave.log
PS: The script below adds a step: when it discovers that replication is out of sync, it automatically fetches the master's binlog file name and position and re-synchronizes with the master. The script reads as follows:
#!/bin/sh
# set -x
# file: slave_repl.sh
# Author: Kevin
# date: 2011-11-13
mstool="/usr/local/mysql-3307/bin/mysql -h 192.168.1.106 -uroot -pw!zl7pog27 -P 3307"
sltool="/usr/local/mysql-3307/bin/mysql -h 192.168.1.107 -uroot -pw!
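The health check such a script performs can be sketched in Python. This is a hypothetical helper of my own; it assumes input in the `\G` layout of SHOW SLAVE STATUS and checks the two thread-state fields that MySQL 5.x reports.

```python
def slave_threads_ok(status_text):
    """Parse SHOW SLAVE STATUS\\G output and report whether both the
    I/O thread and the SQL thread are running."""
    fields = {}
    for line in status_text.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            fields[key.strip()] = val.strip()
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes")
```

A monitoring script would feed this the output of `mysql -e 'SHOW SLAVE STATUS\G'` and trigger the resynchronization step only when it returns False.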
all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
sp_repltrans
sp_repltrans returns information about the publication database from which it is executed, allowing you to view transactions currently not distributed (those remaining in the transaction log that have not been sent to the Distributor). The result set displays the log sequence numbers of the first and last records for each transaction.
cachedb2> insert into a2 values(1);
cachedb2> insert into a2 values(2);
cachedb2> ALTER ACTIVE STANDBY PAIR INCLUDE TABLE a2;
 17059: Table A2 is not empty
cachedb2> delete from a2;
cachedb2> ALTER ACTIVE STANDBY PAIR INCLUDE TABLE a2;
Objects that can be replicated automatically: the following objects are replicated automatically when DDLReplicationLevel = 2 or 3. * Create, alter, or drop a user with the CREATE USER, ALTER USER, or DROP USER statements. * Grant or revoke privileges from a user with the GRANT or REVOKE statements. * Alter a table to add or drop a column.
Streaming replication slots are a pending feature in PostgreSQL 9.4, as part of the logical changeset extraction feature. What are they for? What do you need to know? What changes?
What are replication slots?
Streaming replication slots are a new facility introduced in PostgreSQL 9.4. They are a persistent record of the state of a replica, kept on the master server even when the replica is offline and disconnected. They aren't used for physical replication by default, so you'll only have to deal with them if you enable them explicitly.
rs.syncFrom(hostportstr)          Make a secondary sync from the given member
rs.freeze(secs)                   Make a node ineligible to become primary for the time specified
rs.remove(hostportstr)            Remove a host from the replica set (disconnects)
rs.slaveOk()                      Allow queries on secondary nodes
rs.printReplicationInfo()         Check oplog size and time range
rs.printSlaveReplicationInfo()    Check replica set members and replication lag
db.isMaster()                     Check which node is primary
Master-slave replication:
On the slave server:
  I/O thread: requests binary log events from the master and saves them to the relay log;
  SQL thread: reads events from the relay log and completes the replay locally.
Async mode drawbacks: 1. the slave lags behind the master; 2. master and slave data can be inconsistent.
Binary log formats: 1. row-based; 2. statement-based (non-deterministic statements such as SET datetime = NOW() are a problem); 3. mixed.
Configuration process: 1. Master: (1) enable the binary log in my.cnf ----> log_bin=log_bin.log
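The two slave-side threads described above can be modeled with a toy simulation. This is illustrative only (function names invented, binlog events simplified to key/value pairs): the I/O thread copies binlog events into the relay log, and the SQL thread replays relay-log events against local state.

```python
def io_thread(master_binlog, relay_log):
    """Pull any binlog events the relay log does not yet have."""
    relay_log.extend(master_binlog[len(relay_log):])

def sql_thread(relay_log, applied, state):
    """Replay relay-log events not yet applied; each event is (key, value).
    Returns the new applied-position."""
    for key, value in relay_log[applied:]:
        state[key] = value
    return len(relay_log)
```

Because the two threads run independently, the relay log can be ahead of what the SQL thread has applied, which is the source of the replication lag noted above.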