tar -zxvf ...-x86_64-2.0.1.tar -C /opt/mongodb
mkdir /opt/mongodb/data
touch /opt/mongodb/logs
The installation itself is simple; the key is the startup configuration. There are three startup modes: standalone (single-host), master-slave, and replica set.
1. Standalone startup mode

Start:

cd /opt/mongodb/bin
/opt/mongodb/bin/mongod -f mongodb.conf

# mongodb.conf is a custom startup configuration file; its auth parameter enables client authentication.
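As a point of reference, a minimal mongodb.conf for this standalone setup might look like the following. The values are illustrative assumptions, not taken from the original; only the directory names echo the ones created above.

```
# Illustrative mongodb.conf (assumed values)
dbpath=/opt/mongodb/data     # data directory created earlier
logpath=/opt/mongodb/logs    # log file created earlier
logappend=true               # append to the log instead of truncating it
port=27017                   # default mongod port
fork=true                    # run as a background daemon
auth=true                    # require authentication
```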
After reading MongoDB: The Definitive Guide, a replica set is the more appropriate way to get read/write splitting, and it is sufficient for most systems. The basic procedure is easy to find elsewhere, so only the key steps are described here.
The two servers are nodes 201 and 202; the replica set is first configured on 202, and node 201 is added afterwards.
Deploy the first node on 202: ./mongod --dbpath ./db1 --port 10002 --replSet blort/server202:10001
Create the directories:

mkdir -p /datassd/mongo_20011/{data,conf,log}

Example configuration file:

#mongo.conf
dbpath=/datassd/mongo/data/
logpath=/datassd/mongo_20011/log/mongo_20011.log
pidfilepath=/datassd/mongo_20011/mongo_20011.pid
directoryperdb=true
logappend=true
replSet=testrs
port=20011
oplogSize=10000
fork=true
noprealloc=true

Parameter explanation:
dbpath: data storage directory
logpath: log file path
pidfilepath: PID file, which makes it easy to stop the process
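The key=value format used above is easy to inspect programmatically. As a quick sanity check, a small Python helper (hypothetical, not part of the original text) can parse such a file into a dict:

```python
def parse_mongo_conf(text):
    """Parse a simple key=value, MongoDB 2.x-style config into a dict.

    Lines starting with '#' and blank lines are ignored; values are kept
    as strings, since mongod interprets them itself.
    """
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        options[key.strip()] = value.strip()
    return options

conf = """#mongo.conf
dbpath=/datassd/mongo/data/
replSet=testrs
port=20011
fork=true
"""
print(parse_mongo_conf(conf)["port"])  # → 20011
```

This is only a convenience for auditing many node configs; mongod itself reads the file directly with `-f`.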
By default, the primary node handles reads and writes, and secondary nodes replicate the primary's data; the arbiter does not replicate data at all. The arbiter's role: when the primary fails, it helps elect a new primary from among the secondaries, but it never participates in data reads or writes. MongoDB replica set members synchronize through the oplog (the local.oplog.rs collection). This article is from the DBA Sky blog; please keep the source: http://9425473.blog
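Why add a node that holds no data? Elections require a strict majority of voting members, so a two-node set cannot elect a primary when one node is down; an arbiter supplies the extra vote cheaply. A small illustrative Python sketch of the majority rule (function names are my own, not from the original):

```python
def majority(voting_members):
    """Strict majority of voting replica set members needed to elect a primary."""
    return voting_members // 2 + 1

def can_elect_primary(voting_members, reachable_members):
    """A primary can be elected only if a majority of voters is reachable."""
    return reachable_members >= majority(voting_members)

# Two data nodes only: lose one, and 1 < majority(2) == 2, so no primary.
print(can_elect_primary(2, 1))  # → False
# Add an arbiter (3 voters): one data node plus the arbiter still form a majority.
print(can_elect_primary(3, 2))  # → True
```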
avoid being dragged down by too many connections. indexCounters.btree.misses is the number of index misses; compare it with the hit count to judge whether indexes are set up correctly. 6: db.currentOp(): view the currently executing operations
> db.currentOp()
{ "opid" : "shard3:466404288", "active" : false, "waitingForLock" : false, "op" : "query", "ns" : "sd.usersemails", "query" : {}, "client_s" : "10.121.13.8:34473", "desc" : "conn" }
Kill it if necessary: db.killOp("shard3:466404288")
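When db.currentOp() returns many documents, it helps to filter them client-side before deciding what to kill. A hypothetical Python sketch (field names follow the sample document above; the helper is not part of MongoDB) that picks out the opids running against a given namespace:

```python
def ops_on_namespace(current_ops, ns):
    """Return the opids of operations running against the given namespace."""
    return [op["opid"] for op in current_ops if op.get("ns") == ns]

# Sample documents shaped like the db.currentOp() output shown above.
sample = [
    {"opid": "shard3:466404288", "active": False, "op": "query",
     "ns": "sd.usersemails", "client_s": "10.121.13.8:34473"},
    {"opid": "shard1:12345", "active": True, "op": "insert",
     "ns": "other.collection"},
]
print(ops_on_namespace(sample, "sd.usersemails"))  # → ['shard3:466404288']
```

Each returned opid could then be passed to db.killOp() in the shell, as shown above.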
1: Download MongoDB 2.6:
https://fastdl.mongodb.org/win32/mongodb-win32-x86_64-2008plus-2.6.9.zip
2: Unzip:
tar -zxvf mongodb-linux-x86_64-2.6.9.zip
mv mongodb-linux-x86_64-2.6.9 mongodb
3: In the mongodb directory, create the data directory, the log directory, and the conf configuration file. Make sure both the data and log directories are readable and writable.
4: Edit the config file: vi mongo.conf
logappend=true
# log file: /root/software/mongodb/log/mongod
# data directory
dbpath=/root/software/mongodb/data
Overview
You can use a single data server as a shard or a replica set as a shard.
A replica set is a master-slave cluster with automatic failover; the primary and secondary roles can change automatically.
Each replica set includes three roles: primary, secondary, and arbiter.
Deployment Diagram
Deployment process
Based on the cluster built in "MongoDB cluster configuration (sharding with replica set)" (see that article), this section tries to dynamically add a shard replica set while the MongoDB cluster is running.
(1) Start the replica set Node
Run the following three batch files to start three mongod processes: 127.0.0.1:36000, 127.0.0.1:36001, and 127.0.0.1:36002.
Batch File startShardD_0.bat:
cd d:/mon
Initialize from the command line.
Open a command line window and connect to the first instance.
mongo --port 10000
Enter the replica set configuration in the shell, then execute rs.initiate(rsconf) to apply it.
rsconf = {
    _id: "rs0",
    members: [
        {
            _id: 0,
            host: "
        }
    ]
}
rs.initiate(rsconf)
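The shape of this config document can be sketched outside the shell as well. A hypothetical Python helper (the function name and host addresses are placeholders, not from the original) that builds the same structure rs.initiate() expects:

```python
def make_rs_config(set_name, hosts):
    """Build a replica set config document of the shape rs.initiate() expects."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

# Placeholder addresses; substitute the real host:port pairs of your members.
rsconf = make_rs_config("rs0", ["127.0.0.1:10000", "127.0.0.1:10001"])
print(rsconf["members"][1])  # → {'_id': 1, 'host': '127.0.0.1:10001'}
```

Each member needs a unique numeric `_id` and a `host` string; the top-level `_id` must match the `--replSet` name the mongod processes were started with.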
In this step, if you forget a parameter when executing rs.initiate(), you can re-apply the configuration through rs.reconfig().