reasonable allocation of system resources, there should not be much of a problem.

IV. MySQL 5.5 multi-instance deployment methods

4.1 There are two ways to deploy MySQL multi-instance: one uses the mysqld_multi configuration; the other defines a separate configuration file for each instance. First, the mysqld_multi approach (mysql-5.5.36.tar.gz, cmake-2.8.8.tar.gz).

1. Synchronize the time:
# yum install -y ntp
# ntpdate time.windows.com
# hwclock -w
2. Install the required dependency packages for
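As a sketch of the mysqld_multi approach, a my.cnf along these lines drives two instances from one configuration file. The ports, data directories, and instance numbers here are illustrative assumptions, not values taken from this article:

```ini
[mysqld_multi]
mysqld     = /usr/local/mysql/bin/mysqld_safe
mysqladmin = /usr/local/mysql/bin/mysqladmin

[mysqld1]
port    = 3306
datadir = /data/3306
socket  = /data/3306/mysql.sock

[mysqld2]
port    = 3307
datadir = /data/3307
socket  = /data/3307/mysql.sock
```

The instances are then started individually, e.g. `mysqld_multi start 1` and `mysqld_multi start 2`, with each [mysqldN] group supplying that instance's settings.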
) Validate that the candidate disks are being discovered:

SQL> SELECT path FROM v$asm_disk;

Create a new ASM instance SPFILE:

SQL> CREATE SPFILE FROM PFILE;

Add the new ASM SPFILE and listener to the new ASM instance resource:

$ <11.2 Grid Infrastructure Oracle home>/bin/srvctl modify asm -p <path>
$ <11.2 Grid Infrastructure Oracle home>/bin/srvctl modify asm -l LISTENER

Validate that the OHAS (Oracle Restart) services start as follows:

$ <11.2 Grid Infrastructure Oracle home>/bin/crsctl stop
= 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

8. Configure file permissions for the instances. Authorize the mysql user and group to manage the multi-instance data directories:
# chown -R mysql:mysql /data
9. Configure environment variables for the MySQL commands.
Method one:
# echo 'export PATH=/usr/local/mysql/bin:$PATH' >> /etc/profile
# source /etc/profile
Method two: copy the commands under /usr/local/mysql/bin/ to a global system command path such as /usr/local/sbin/, or create links to them.
10. Initialize the da
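Method two above (linking the client binaries into a directory already on the search path) can be sketched with a throwaway stand-in binary; all paths under /tmp are illustrative, and in practice you would link /usr/local/mysql/bin/mysql into /usr/local/sbin:

```shell
# create a fake "mysql" client to stand in for /usr/local/mysql/bin/mysql
mkdir -p /tmp/mysqlbin /tmp/sbin
printf '#!/bin/sh\necho mysql-client\n' > /tmp/mysqlbin/mysql
chmod +x /tmp/mysqlbin/mysql
# "method two": link the command into a directory that is on PATH
ln -sf /tmp/mysqlbin/mysql /tmp/sbin/mysql
/tmp/sbin/mysql   # the link resolves to the real binary: prints "mysql-client"
```

The symlink approach avoids editing /etc/profile, at the cost of one link per command you want exposed.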
Mac Configuration: Hadoop

1. Modify /etc/hosts:
127.0.0.1 localhost
2. Download Hadoop 2.9.0 and the JDK, and set up the environment:
vim /etc/profile
export HADOOP_HOME=/Users/yg/app/cluster/hadoop-2.9.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
export PATH=$PATH:$JAVA_HOME/bin
3. Configure SSH (password-free login):
ssh-keygen -t rsa   # generates the public and private keys
cat xx
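The key-generation step can be sketched as follows; the standard follow-up (the line the text truncates at "cat") is appending the public key to ~/.ssh/authorized_keys. Paths under /tmp are used here only so the sketch does not touch real keys:

```shell
# clean demo location so ssh-keygen does not prompt about overwriting
rm -f /tmp/demo_rsa /tmp/demo_rsa.pub
# generate an RSA key pair non-interactively (empty passphrase)
ssh-keygen -t rsa -N "" -f /tmp/demo_rsa -q
# the usual passwordless-login step: append the public key to authorized_keys
cat /tmp/demo_rsa.pub >> /tmp/demo_authorized_keys
grep -c "ssh-rsa" /tmp/demo_authorized_keys
```

On a real setup the target file is ~/.ssh/authorized_keys on the machine you want to log in to, and it must not be group- or world-writable.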
1. cd /opt/zookeeper-3.4.5
2. Enter the conf subdirectory under the ZooKeeper directory, create zoo.cfg, and add the following content. zoo.cfg should be copied from zoo_sample.cfg under conf; otherwise there may be an encoding problem.

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/files/zookeeper/zkdata
dataLogDir=/home/files/zookeeper/zkdatalog
clientPort=2181
server.1=10.45.54.145:2888:3888   (use your own IP here)

Parameter description:
The base
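One step the listing above does not show: each server in a multi-node ensemble also needs a myid file inside dataDir whose content matches the N of its server.N line in zoo.cfg. A sketch, using a /tmp path in place of the dataDir above so it can be run anywhere:

```shell
ZKDATA=/tmp/zkdata            # stands in for dataDir=/home/files/zookeeper/zkdata
mkdir -p "$ZKDATA"
# this host is server.1 in zoo.cfg, so its myid must contain "1"
echo 1 > "$ZKDATA/myid"
cat "$ZKDATA/myid"            # prints "1"
```

Without a matching myid file, the server cannot tell which server.N entry is its own and will fail to join the quorum.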
introduced.

First: to install software you need to enter update mode, and entering update mode means giving up all software protection. Here are the commands you need to enter:
Enter: sadmin bu
Exit: sadmin eu

Second: create a trusted directory. Create a shared folder, which you can then set as a trusted directory:
sadmin trusted -u \\192.168.1.1\share
Remove a trusted directory: sadmin trusted -r \\192.168.1.1\share
Display trusted directories: sadmin trusted -l

The third type, trusting certificates, adds
Set up Hadoop standalone mode in CentOS
1. Download the Hadoop installation package.
gzip -d hadoop-2.7.0-src.tar.gz
tar -xvf hadoop-2.7.0-src.tar
Subsequent operations must be performed as root.
su root
2. Install ssh
First, check whether ssh is installed.
rpm -qa | grep ssh
If not, use
yum install ssh
It is best to restart the ssh service afterwards:
service sshd restart
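The two-step gzip -d plus tar -xvf sequence above can be collapsed into a single tar -xzf. A self-contained sketch using a throwaway archive (for the real package the command would be tar -xzf hadoop-2.7.0-src.tar.gz):

```shell
# build a small throwaway .tar.gz to extract
mkdir -p /tmp/tardemo/src && echo data > /tmp/tardemo/src/file.txt
tar -czf /tmp/tardemo/pkg.tar.gz -C /tmp/tardemo src
# one-step equivalent of gzip -d followed by tar -xvf
mkdir -p /tmp/tardemo/out
tar -xzf /tmp/tardemo/pkg.tar.gz -C /tmp/tardemo/out
cat /tmp/tardemo/out/src/file.txt   # prints "data"
```

The -z flag tells tar to run the gzip decompression itself, so the intermediate .tar file never has to exist on disk.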
Reference documents:
http://blog.csdn.net/inkfish/article/details/5168676
http://493663402-qq-com.iteye.com/blog/1515275
http://www.cnblogs.com/syveen/archive/2013/05/08/3068044.html
http://www.cnblogs.com/kinglau/p/3794433.html

Environment: VMware 11, Ubuntu 14.04 LTS, Hadoop 2.7.1

One: Create an account
1. Create a hadoop group and a hadoop user inside it:
$ sudo adduser --ingroup hadoop hadoop
2. Add the user to sudoers:
$ sudo gedit /etc/sudoers
3. Switch to the hadoop user:
$
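Step 2 above normally means adding one line for the new user. The article does not show the line, so this is a sketch using standard sudoers syntax:

```
hadoop  ALL=(ALL:ALL) ALL
```

It grants the hadoop user permission to run any command as any user via sudo; on a production machine you would scope this more narrowly.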
1) Installation and configuration of the Java environment
2) Install Hadoop
Download hadoop-0.20.2.tar.gz from the Hadoop site and decompress it: tar zxvf hadoop-0.20.2.tar.gz
Add to hadoop-env.sh:
export JAVA_HOME=/home/heyutao/tools/jdk1.6.0_20
export HADOOP_HOME=/home/heyutao/tools/hadoop-0.20.2
export PATH=$PATH:/home/heyutao/tools/hadoop-0.20.2/bin
Test whether Hadoop is successfully installed: bin/hadoop
3) Configure hadoop in a single-host environment
A) edit the configuration file
1) Modify conf/co
Generally, the Android applications we develop are based on Activity, and the Android system manages the program's life cycle. Sometimes, however, we want to control the program's process ourselves. For example, if you are developing a console application like /system/bin/pm, using an Activity is not appropriate. Here I call such a self-controlled process a standalone Android program.
There is no big difference between the development me
Cocos2dx standalone Mahjong (III)
Mahjong logic 4. Obtain card data
We have saved a one-dimensional array. Like a table, it lets us count every card we hold, but how do we get back the actual cards in our hand?
// Poker conversion
BYTE SwitchToCardData(BYTE cbCardIndex[MAX_INDEX] /* in: count of every card */, BYTE cbCardData[MAX_COUNT] /* out: hand card data */)
{
    // Convert poker: emit one card value per copy held at each index
    BYTE cbPosition = 0;
    for (BYTE i = 0; i < MAX_INDEX; i++)
        for (BYTE j = 0; j < cbCardIndex[i]; j++)
            cbCardData[cbPosition++] = ((i / 9) << 4) | ((i % 9) + 1);  // (suit << 4) | rank
    return cbPosition;
}
1. What is mahout?
Mahout is an open-source project (http://mahout.apache.org/) of Apache that provides several classic algorithms in the machine learning field, allowing developers to quickly build machine learning and data mining applications.
Mahout is built on Hadoop. The name is also interesting: Hadoop is named after a toy elephant, while a mahout is an elephant driver and keeper. You can see how closely the two are related. (This naturally reminds me of Sun and Eclipse...)
At this ti
), leading to data communication problems between worker nodes; task scheduling is affected as well.
II. Kafka performance bottleneck
When Kafka is integrated with Storm, the data processing performance is not very good and does not meet the expected requirements. I initially suspected a problem in the KafkaSpout code, but storm-external already includes it, so I don't think the problem is there. Next, let's look at the KafkaSpout implementation and find possible performance bottlenecks. To increase
Install and use Git standalone version
Introduction:
Git is an open-source distributed version control system, and one of the most popular and advanced version control tools today. Git being distributed stands in contrast to SVN, which is centralized: every time SVN changes a file, the change must be committed to the server for storage, whereas Git keeps a complete version library locally.
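The "complete local version library" point can be demonstrated offline: the commands below record a commit without touching any server. The directory path and identity values are illustrative:

```shell
mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init -q .
git config user.email "demo@example.com"
git config user.name  "demo"
echo "hello" > readme.txt
git add readme.txt
git commit -q -m "first commit"   # recorded in the local repository only
git log --oneline | wc -l         # prints the number of local commits
```

With SVN, the equivalent of `git commit` would fail without a reachable server; with Git, pushing to a remote is a separate, optional step.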
1. Install Git
1. Linux (CentOS)
Shell>