Want to know Hadoop cluster configuration best practices? This page collects Hadoop cluster configuration and installation notes from alibabacloud.com.
1. Add the configuration files to the project source directory (src) so their contents are read and the project knows to submit the job to the cluster; in particular set:
mapreduce.framework.name = yarn
2. Package the project as a jar and place it in the project source directory (src).
3. Add one line to the Java code to point the submission at the packaged jar.
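Step 1 usually means dropping a mapred-site.xml into src; a minimal sketch (only the one property the text names):

```xml
<!-- mapred-site.xml (sketch): run MapReduce jobs on YARN instead of locally -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

The "one line of Java" in step 3 is typically something like job.setJar("project.jar") or conf.set("mapreduce.job.jar", ...), where the jar name here is a hypothetical placeholder for your packaged project.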
Hadoop Modes
Pre-install Setup
Creating a user
SSH Setup
Installing Java
Install Hadoop
Install in Standalone Mode
Let's run a test
Install in Pseudo-distributed Mode
Hadoop Setup
Hadoop
Reprint: Hadoop Cluster time synchronization
Test environment:
192.168.217.130 master master.hadoop
192.168.217.131 node1 node1.hadoop
192.168.217.132 node2 node2.hadoop
First, set the master server's time.
View the local time and time zone:
[root@master ~]# date
Mon Feb 09:54:09 CST 2017
Select the time zone:
[root@master ~]# tzselect
[root@master ~]# cp /usr/share/zoneinfo/A
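A common way to keep the nodes in sync (a sketch; the cron schedule and paths are assumptions, the master IP comes from the test environment above) is to make the master an NTP server and have each node poll it:

```
# /etc/ntp.conf on the master (sketch): serve the local clock to the LAN
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```

On node1 and node2, a crontab entry such as `0 * * * * /usr/sbin/ntpdate 192.168.217.130` then pulls the time from the master every hour.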
rsa.pub, then copy it to the master host and append it to authorized_keys. The final successful configuration is as follows:
6. Installing Hadoop (all machines in the cluster will have Hadoop installed)
cd /usr/local/hadoop
wget http://mirror.bit.edu.cn/apache/hadoop/common/
MySQL installation for a Hadoop cluster
Modify the database character set to solve the garbled-Chinese-text problem: MySQL's default is latin1, and we want to change it to utf-8.
First create a directory for MySQL under /etc/, then copy /usr/share/mysql/my-medium.cnf to /
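A sketch of the my.cnf additions that switch MySQL to UTF-8 (exact option names differ between MySQL versions; older 5.x configs use default-character-set under [mysqld] as well):

```
[client]
default-character-set = utf8

[mysqld]
character-set-server = utf8
collation-server = utf8_general_ci
```

After restarting mysqld, `SHOW VARIABLES LIKE 'character%';` should report utf8 for the server and database character sets.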
hadoop fs -tail /user/trunk/test.txt   # view the last 1 KB of /user/trunk/test.txt
hadoop fs -rm /user/trunk/test.txt     # delete /user/trunk/test.txt
hadoop fs -help ls                     # view the help for the ls command
Two: HDFS deployment. The main steps are as follows:
1. Configure Hadoop's installation environment;
2. Edit Hadoop's configuration files;
3. Start the HDFS service
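Step 2 typically centers on core-site.xml; a minimal sketch (the host name master and port 9000 are placeholders, not values from this article):

```xml
<!-- core-site.xml (sketch): tell clients and daemons where the namenode is -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

In Hadoop 1.x the equivalent key is fs.default.name.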
Many new users encounter problems with Hadoop installation, configuration, deployment, and usage the first time. This article is both a test summary and a reference for beginners (of course, there is also a lot of related material online).
Hardware environment: there are two machines in total; one acts as the master, and the other uses a VM to install two systems (as slaves), and all three systems
Hadoop-1.x installation and configuration
1. Install the JDK and SSH before installing Hadoop.
Hadoop is developed in Java; running MapReduce and compiling Hadoop both depend on the JDK. Therefore, JDK 1.6 or later must be installed first (JDK 1.6 is generally used in real production environments, because some components of Hadoop do not support JDK 1.7 and
Ironfan Introduction
In Serengeti, there are two most important and critical functions: one is virtual machine management, that is, creating and managing required virtual machines for a Hadoop cluster in vCenter; the other is cluster software installation and configuration management, that is,
, there is a .ssh directory containing:
id_rsa        the private key
id_rsa.pub    the public key
known_hosts   hosts reached via SSH each get a record here
2. Give the public key to the trusted hosts (including this machine)
At the command line, enter ssh-copy-id <hostname>:
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
The trusted host's password must be entered during the copy.
3. Verify: at the command line, enter ssh <trusted hostname>:
ssh master
ssh slave1
ssh slave2
If you are not prompted to enter a password, passwordless login is working.
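For reference, the key pair these steps rely on can be generated beforehand; a sketch assuming the default RSA key path and an empty passphrase:

```shell
# Sketch: create an RSA key pair with an empty passphrase if none exists yet
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
    ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q
fi
```

ssh-copy-id then appends the resulting id_rsa.pub to the remote host's authorized_keys.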
Apache Hadoop Kerberos Configuration Guide
Generally, the security of a Hadoop cluster is ensured using Kerberos. After Kerberos is enabled, every access must be authenticated; once authenticated, GRANT/REVOKE statements can be used for role-based access control. This article describes how to configure Kerberos in a
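Enabling Kerberos typically starts with two switches in core-site.xml; a sketch (assuming principals and keytabs are already provisioned elsewhere, which this fragment does not cover):

```xml
<!-- core-site.xml (sketch): turn on Kerberos authentication and authorization -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

Each daemon additionally needs its own principal and keytab settings in the service-specific configuration files.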
first time you log on to this host, type "yes". This adds the host's "recognition mark" to the ~/.ssh/known_hosts file, and the prompt is no longer displayed the second time you access the host.
Note: an "Authentication refused: bad ownership or modes for directory /root" error may be caused by permission or user-group problems; refer to the following documents:
http://recursive-design.com/blog/2010/09/14/ssh-authentication-refused/
http://bbs.csdn.net/topics/380198627
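A minimal sketch of the permission fixes that usually clear this error (assuming the affected account's own home directory; sshd's StrictModes check rejects group- or world-writable paths):

```shell
# Sketch: tighten the mode bits sshd checks before allowing key-based login
chmod go-w "$HOME"                       # home dir must not be group/world-writable
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"                   # .ssh accessible only by the owner
touch "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"   # key file readable only by the owner
```

The files must also be owned by the logging-in user, not by root.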
Prerequisites: ssh and rsync must be installed on the host.
Then confirm that you can use SSH to log on to localhost without a password:
ssh localhost
If a password is still required, generate a key pair and authorize it:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Note: -P is followed by an empty quoted string, which sets the passphrase to empty.
After completing the preceding configuration, decompress the package and configure Hadoop.
1. Decompress
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored.
dataDir=/home/frank/zookeeperinstall/data
# The port at which the clients will connect
clientPort=2222
server.1=192.168.0.100:2888:3888
server.2=192.168.0.102:2888:3888
server.3=192.168.0.103:2888:3888
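Each server in the ensemble also needs a myid file inside dataDir whose content matches its server.N number; a sketch for the first server, using the dataDir from the config above:

```
# contents of /home/frank/zookeeperinstall/data/myid on 192.168.0.100
1
```

The hosts 192.168.0.102 and 192.168.0.103 would use 2 and 3 respectively.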
Installation
Very simple: download hbase-0.
installed, you can decompress the package directly here. I use the directory structure shown in the environment variables below. After decompression, put the package under /usr/java; you then need to configure the environment variables: vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_60
export JRE_HOME=/usr/java/jdk1.7.0_60
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Then press Esc, save and quit, and run source /etc/profile so the variables take effect.