export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1
3) Make the configuration file take effect:
$ source /etc/profile
Setting up a Hadoop cluster environment under Ubuntu 12.04
I. Preparation before setting up the environment:
My local machine (Ubuntu 12.04 32bit) serves as Master; it is the same machine used for the stand-alone Hadoop environment: http://www.linuxidc.com/Linux/2013-01/78112.htm
I also created 4 virtual machines in KVM, named:
Son-1 (Ubuntu 12.04 32bit
and the other slave nodes each run a NodeManager, the process that manages each node's resources; its presence indicates that startup succeeded. YARN also provides a web UI on port 8088.
Enter hadoop000:8088 in the browser:
Pay attention to the content circled above
We can start a simple job and test it.
[root@hadoop000 hadoop]# pwd
/root/app/hadoop/share/
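A common smoke test is the bundled pi estimator. The examples jar path and version below are assumptions, so match them to the actual contents of the share directory before running:

```shell
# Hypothetical locations; adjust to your installation.
HADOOP_HOME=${HADOOP_HOME:-/root/app/hadoop}
EXAMPLES_JAR="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar"
# pi with 2 map tasks and 100 samples per map; progress appears on the YARN UI (port 8088).
CMD="hadoop jar $EXAMPLES_JAR pi 2 100"
echo "$CMD"   # on a live cluster, execute the command instead of echoing it
```

If the job finishes and shows up as SUCCEEDED in the 8088 UI, YARN is working.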
files in the conf directory:
(1) core-site.xml
fs.default.name: hdfs://Master:9000
(2) hadoop-env.sh
Add the following line to the file:
export JAVA_HOME=<the JDK path you configured, e.g. /usr/java/jdk1.6.0_25>
(3) hdfs-site.xml
dfs.name.dir: /home/hadoop/temp/hadoop
dfs.data.dir: /home/
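The flattened name/value pairs above were originally XML property blocks whose tags were stripped. A sketch of what core-site.xml would look like in that form (hdfs-site.xml follows the same pattern with the dfs.name.dir and dfs.data.dir properties; the hostname and port are taken from the text):

```xml
<?xml version="1.0"?>
<!-- core-site.xml: points clients at the NameNode on Master -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://Master:9000</value>
  </property>
</configuration>
```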
the /home/jiaan.gja directory and configure the Java environment variables with the following commands:
cd ~
vim .bash_profile
Add the Java environment variables to .bash_profile, then make them take effect immediately by executing:
source .bash_profile
Finally, verify that the Java installation is configured properly.
Host because I built a Hadoop cluster
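The exact lines added to .bash_profile are not reproduced above; a typical sketch (the JDK install path is an assumption) looks like:

```shell
# Hypothetical JDK location; substitute wherever you unpacked the JDK.
export JAVA_HOME=/home/jiaan.gja/install/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH
echo "$JAVA_HOME"
```

After `source .bash_profile`, `java -version` should print the version of that JDK.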
Install and configure Sqoop for MySQL in the Hadoop cluster environment.
Sqoop is a tool for transferring data between Hadoop and relational databases: it can import data from a relational database (such as MySQL or Oracle) into Hadoop HDFS, and it can also export data from HDFS back into a relational database.
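An import in the MySQL-to-HDFS direction looks roughly like the following; the database name, table, credentials, and target directory are all hypothetical placeholders:

```shell
# Sketch of a Sqoop import; every identifier below is an example, not from the source.
CMD="sqoop import \
  --connect jdbc:mysql://master:3306/testdb \
  --username hadoop -P \
  --table employees \
  --target-dir /user/hadoop/employees \
  -m 1"
echo "$CMD"   # on a configured cluster, run the command itself instead of echoing it
```

The reverse direction uses `sqoop export` with the same connection arguments.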
completes the modification of hadoop-eclipse-plugin-0.20.203.0.jar.
Finally, copy hadoop-eclipse-plugin-0.20.203.0.jar to the plugins directory of Eclipse:
$ cd ~/hadoop-0.20.203.0/lib
$ sudo cp hadoop-eclipse-plugin-0.20.203.0.jar /usr/eclipse/plugins/
5. Configure the plug-in in Eclipse.
First, open Eclipse
nodes, and edit the ".bashrc" file, adding the following lines:
$ vim .bashrc    # edit the file and add the lines below
export HADOOP_HOME=/home/hduser/hadoop
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
$ source .bashrc    # make it take effect immediately
Change the JAVA_HOME of
password login as the DataNode node; and because this is a single-node deployment, the current node is both NameNode and DataNode, so passwordless SSH login to itself is required. Here is how:
su hadoop
cd
2. Create the .ssh directory and generate the key:
mkdir .ssh
ssh-keygen -t rsa
3. Switch to the .ssh directory and view the public and private keys:
cd .ssh
ls
4. Copy the public key into the authorized_keys file, then check whether the copy succeeded:
cp id_rsa.pub authorized_keys
ls
5. View the contents of the authorized_keys file
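The key-generation and copy steps above can be rehearsed in a throwaway directory before touching ~/.ssh (the temporary directory here is a stand-in; on the real node you would work in ~/.ssh itself):

```shell
# Practice run of the key-generation steps in a temporary directory.
D=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$D/id_rsa" -q   # -N "" = empty passphrase, needed for passwordless login
cp "$D/id_rsa.pub" "$D/authorized_keys"     # same copy as step 4
ls "$D"
```

On the real node the last step matters because sshd consults ~/.ssh/authorized_keys when deciding whether to allow a key-based login.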
Introduction to Hadoop
Hadoop is an open-source distributed computing platform under the Apache Software Foundation. Built around the Hadoop Distributed File System (HDFS) and MapReduce (an open-source implementation of Google's MapReduce), it provides the user with a distributed infrastructure that is trans
ssh-keygen -t rsa
Copy the public key to each machine, including the local machine, so that ssh localhost password-free login:
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave1
[hadoop@master ~]$ ssh-co
complete the configuration.
Second, establish the Hadoop running account.
That is, set up a user group and a user for the Hadoop cluster. This part is relatively simple; a reference example follows:
sudo groupadd hadoop    # set up the hadoop user group
sudo useradd -s /bin/bash
Hadoop version: hadoop-2.5.1-x64.tar.gz
The study referenced the two-node Hadoop build process at http://www.powerxing.com/install-hadoop-cluster/. I used VirtualBox to run four Ubuntu (version 15.10) virtual machines and build the four nodes of the
hadoop-env.sh
Add the following environment variable at the beginning; when I left it out, an error said that JAVA_HOME could not be found:
export JAVA_HOME=/home/java/jdk1.7
The file already ships with these lines:
# The java implementation to use.
export JAVA_HOME=${JAVA_HOME}
In principle the environment variable should be picked up from the shell, but I added the explicit path anyway.
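The edit described above amounts to overriding the stock `export JAVA_HOME=${JAVA_HOME}` line with an absolute path. A rehearsal on a scratch copy of the file (the JDK path is the example from the text; the temp file stands in for conf/hadoop-env.sh):

```shell
# Scratch stand-in for conf/hadoop-env.sh with its stock contents.
HADOOP_ENV=$(mktemp)
printf '# The java implementation to use.\nexport JAVA_HOME=${JAVA_HOME}\n' > "$HADOOP_ENV"
# Prepend the explicit path, as the text recommends (GNU sed syntax).
sed -i '1i export JAVA_HOME=/home/java/jdk1.7' "$HADOOP_ENV"
grep '^export JAVA_HOME=' "$HADOOP_ENV"
```

The explicit line wins because Hadoop's scripts source the file top to bottom and the later `${JAVA_HOME}` reference then expands to the value just set.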
Standalone mode; the same steps are executed repeatedly on all machines.
You can use hadoop dfsadmin -report to check whether the DataNodes have started properly.
-connector-java-5.0.8/mysql-connector-java-5.0.8-bin.jar ./lib
To start hive:
$ cd /home/zxm/hadoop/hive-0.8.1
$ ./bin/hive
Test:
$ ./hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/home/zxm/
Configure masters and slaves separately below.
vim /home/sunnie/documents/hadoop-1.2.1/conf/masters
Remove the localhost from the file and replace it with
Master
vim /home/sunnie/documents/hadoop-1.2.1/conf/slaves
Remove the localhost from the file and replace it with
Slave1
Slave2
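The two edits above can be rehearsed on scratch files first; a temporary directory stands in for the conf path from the text so the commands are safe to run anywhere:

```shell
# Temporary stand-in for /home/sunnie/documents/hadoop-1.2.1/conf
CONF_DIR=$(mktemp -d)
echo "Master" > "$CONF_DIR/masters"             # masters: just the master host
printf "Slave1\nSlave2\n" > "$CONF_DIR/slaves"  # slaves: one slave host per line
cat "$CONF_DIR/masters" "$CONF_DIR/slaves"
```

The hostnames must resolve on every node (via /etc/hosts or DNS) for the start scripts to reach them.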
In this way, the
This series of articles describes how to install and configure Hadoop in fully distributed mode, along with some basic operations in that mode. A single host is prepared first, before further nodes are joined. This article covers only how to install and configure that single node.
1. Install Namenode and JobTracker
This is the first and most critical step in full distribution mode. Use VMware virtual Ubu
Fully Distributed Hadoop cluster installation in Ubuntu 14.04
This article aims to show how to configure a fully distributed Hadoop cluster. Besides fully distributed, there are two other deployment types: single-node and pseudo-distributed. Pseudo-distribution requires only one virtual machine and relatively little configuration.