Excerpt from: http://www.powerxing.com/install-hadoop-cluster/
This tutorial describes how to configure a Hadoop cluster. It assumes the reader has already mastered the single-machine pseudo-distributed configuration of Hadoop; otherwise, please work through the single-machine installation tutorial first.
initialization is no longer required; start the daemons on the Master node:
$ sbin/start-dfs.sh
$ sbin/start-yarn.sh
The jps command lets you see the processes started on each node. You can see that the Master node starts the NameNode, SecondaryNameNode, and ResourceManager processes, while the Slave nodes start the DataNode and NodeManager processes. Access the Hadoop management interface via http://master:50070/. Shutting down the cluster is likewise done on the Master node:
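In a standard Hadoop 2.x layout, the stop scripts mirror the start scripts:
$ sbin/stop-yarn.sh   # stop the ResourceManager/NodeManagers first
$ sbin/stop-dfs.sh    # then stop the NameNode/DataNodes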
The changes to the configuration file are as follows:
After editing, run the vi command ":wq" to save and exit.
With the above settings, we have completed the simplest pseudo-distributed configuration.
Next, format the Hadoop NameNode:
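A minimal sketch of the format command; the path depends on the Hadoop version:
$ bin/hdfs namenode -format     # Hadoop 2.x and later
$ bin/hadoop namenode -format   # Hadoop 1.x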
Enter "Y" to complete the formatting process:
Start Hadoop!
Start Hadoop as follows:
Use the jps command that ships with the JDK to query all daemon processes:
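Assuming a Hadoop 1.x-era pseudo-distributed setup (which the conf directory mentioned below suggests), the output looks something like the following; the PIDs are of course arbitrary:
$ jps
2987 NameNode
3085 DataNode
3214 SecondaryNameNode
3398 JobTracker
3506 TaskTracker
3677 Jps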
Copy the input files. The content of the copied "input" folder is as follows; it is the same as the "conf" directory under the Hadoop installation directory.
Now, run the wordcount program in the pseudo-distributed mode we just built:
After the run completes, let's check the output result:
Some of the statistical results are as follows:
At this point, going to the Hadoop web console, we find that we have submitted and run the job.
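A sketch of the corresponding commands, assuming a Hadoop 1.x layout where the examples jar sits in the installation root (the jar name varies by version):
$ bin/hadoop fs -put conf input                               # copy conf into HDFS as input
$ bin/hadoop jar hadoop-examples-*.jar wordcount input output
$ bin/hadoop fs -cat output/*                                 # inspect the word counts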
HDFS: adding and deleting nodes, and performing HDFS balancing
Mode 1: statically add a DataNode, stopping the NameNode
1. Stop the NameNode.
2. Modify the slaves file and push the update to each node.
3. Start the NameNode.
4. Run the Hadoop balancer command, as sketched below. (This balances the cluster and is not required if you are just adding a node.)
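A hedged example of the balancer invocation (bin/ on Hadoop 1.x, sbin/ on 2.x; -threshold is the tolerated spread of per-node disk usage in percent, 10 by default):
$ sbin/start-balancer.sh -threshold 5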
-----------------------------------------
Mode 2: dynamically add a DataNode, keeping the NameNode running
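A minimal sketch of the usual dynamic-add procedure, assuming the stock Hadoop 2.x daemon scripts under sbin/:
# on the new node, bring up the daemons without restarting the NameNode:
$ sbin/hadoop-daemon.sh start datanode
$ sbin/yarn-daemon.sh start nodemanager
# then add the new hostname to the slaves file on the master, so that
# future cluster-wide starts include it, and optionally rebalance as above.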
Wang Jialin's in-depth, case-driven practice of cloud computing and distributed big data with Hadoop, July 6-7 in Shanghai
Wang Jialin's Lecture 4, Hadoop graphic-and-text training course: building a real, hands-on Hadoop distributed cluster environment. The specific troubleshooting steps are as follows:
Step 1: Query the Hadoop logs to see the cause of the error;
Step 2: Stop the cluster;
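A hedged illustration of these two steps; the log file name here is hypothetical and depends on which user and host run the daemon:
$ tail -n 100 $HADOOP_HOME/logs/hadoop-root-namenode-master.log
$ bin/stop-all.sh   # Hadoop 1.x; on 2.x use sbin/stop-yarn.sh and sbin/stop-dfs.sh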
Although I have installed a Cloudera CDH cluster (see http://www.cnblogs.com/pojishou/p/6267616.html for a tutorial), it ate too much memory, and the bundled component versions are not selectable. If you only want to study the technology, on a single machine with little memory, I recommend installing an Apache-native cluster to play with; production, naturally, is Cloudera territory.
Build a Hadoop client - that is, access Hadoop from hosts outside the cluster
1. Add the host mapping (the same as the NameNode mapping):
Add the last line:
[root@localho
DataNode / NodeManager servers: 192.168.1.100, 192.168.1.101, 192.168.1.102
ZooKeeper server cluster (for NameNode high-availability automatic failover): 192.168.1.100, 192.168.1.101
JobHistory server (used to record MapReduce logs): 192.168.1.1
NFS for NameNode HA: 192.168.1.100
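Mapped into /etc/hosts, the layout above would look something like this; the hostnames are hypothetical:
192.168.1.100  node100   # DataNode/NodeManager, ZooKeeper, NFS for NameNode HA
192.168.1.101  node101   # DataNode/NodeManager, ZooKeeper
192.168.1.102  node102   # DataNode/NodeManager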
Environment deployment
1. Add the CDH4 YUM repository. The best way is to put the CDH4 packages in a self-built yum repository. For how to build a self-built yum repository
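A rough sketch of the self-built-repository approach; the web root and URL are hypothetical, and createrepo ships in the createrepo package:
# on the repository host, after copying the CDH4 RPMs into place:
$ createrepo /var/www/html/cdh4
# on each cluster node, point yum at the mirror:
$ cat > /etc/yum.repos.d/cdh4-local.repo <<'EOF'
[cdh4-local]
name=CDH4 local mirror
baseurl=http://yum.example.internal/cdh4
enabled=1
gpgcheck=0
EOF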
change the configuration, it is safe and easy to synchronize to all nodes when you start the cluster daemons.
export HADOOP_SLAVE_SLEEP=0.1
# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
I don't know exactly how this works, but it seems that at startup the master node rests 0.1 seconds between commands to successive slaves.
As a matter of fact, you can easily configure the distributed framework runtime environment by referring to the official Hadoop documentation. However, I will write a little more here and call out some details that otherwise take a long time to discover. Hadoop can run on a single machine, or you can configure a cluster to run on a single m
Hadoop consists of two parts:
Distributed File System (HDFS)
Distributed computing framework (MapReduce)
The Distributed File System (HDFS) is mainly used for distributed storage of large-scale data, while MapReduce is built on top of the distributed file system to perform distributed computing on the data stored in it.
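To make the division of labor concrete, day-to-day interaction with HDFS from the shell looks like this (standard hdfs dfs subcommands; the paths are only examples):
$ bin/hdfs dfs -mkdir -p /user/hadoop           # create a home directory in HDFS
$ bin/hdfs dfs -put localfile.txt /user/hadoop  # upload a local file
$ bin/hdfs dfs -ls /user/hadoop                 # list it
MapReduce jobs then read and write such HDFS paths rather than local files.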
The functions of each node are described in detail below.
NameNode:
1. There is only one NameNode in the cluster.
I built a Hadoop 2.6 cluster with 3 CentOS virtual machines. I wanted to use IDEA on Windows 7 to develop a MapReduce program and then submit it for execution on the remote Hadoop cluster, which, after unremitting Googling, I finally fixed. I started by using Hadoop's Eclipse plug-in to execute the job and succeeded, but later discovered that the MapReduce job was executed locally and was not
A few days ago, I summarized the Hadoop distributed cluster installation process. Building a Hadoop cluster is only one difficult step in learning Hadoop; much more knowledge is needed afterwards. I don't know whether I can stick with it, or how many difficulties will be encountered in the futu
We know that Hadoop clusters are fault-tolerant, distributed, and so on. Why do they have these characteristics? The following is one of the underlying principles.
Distributed clusters typically contain a very large number of machines. Due to the limits of rack slots and switch ports, larger distributed clusters typically span several racks, and the machines on multiple racks form a distributed
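Hadoop learns this rack layout from a user-supplied topology script, configured via the net.topology.script.file.name property (topology.script.file.name on 1.x). A minimal sketch, with hypothetical IPs and rack names:
#!/bin/bash
# Hadoop invokes this with one or more IPs/hostnames as arguments and
# expects one rack path per argument on stdout.
for host in "$@"; do
  case "$host" in
    192.168.1.*) echo "/rack1" ;;
    192.168.2.*) echo "/rack2" ;;
    *)           echo "/default-rack" ;;
  esac
done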
Run scala -version; normal output indicates success.
3. Installing the Hadoop server
Host Name    IP Address       JDK        User
Master       10.116.33.109    1.8.0_65   root
Slave1       10.27.185.72     1.8.0_65   root
Slave2       10.25.203.67     1.8.0_65   root
Download address for Hadoop: http://hadoop.apache.org/
Configure the hosts file
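Given the node table above, the /etc/hosts entries on every machine are:
10.116.33.109  Master
10.27.185.72   Slave1
10.25.203.67   Slave2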