Alibabacloud.com offers a wide variety of articles about Hortonworks Hadoop installation; you can easily find your Hortonworks Hadoop installation information here online.
I. Introduction
After consulting many tutorials on the web, I eventually installed and configured Hadoop on Ubuntu 14.04 successfully. The detailed installation steps are described below. The environment I use: two Ubuntu 14.04 64-bit desktops, with Hadoop 2.7.1.
II. Preparation
2.1 Create a user
To create a user and add root permissions to it, it is ...
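A minimal sketch of that user-creation step, assuming the user is named hadoop and that sudo group membership is what the article means by root permissions:
$ sudo adduser hadoop            # create the hadoop user (name assumed)
$ sudo adduser hadoop sudo       # grant sudo (root) privileges on Ubuntu
$ su - hadoop                    # switch to the new user for the remaining steps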
network segment. However, different transmission channels can be defined within the same network segment.
2. Environment
Platform: Ubuntu 12.04
Hadoop: hadoop-1.0.4
HBase: hbase-0.94.5
Topology:
Figure 2: Hadoop and HBase topology
Software installation: apt-get (see the sketch below)
3. In ...
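A hedged sketch of the apt-get prerequisite step the snippet refers to, assuming the stock Ubuntu 12.04 repositories; the hadoop-1.0.4 and hbase-0.94.5 releases named above are normally unpacked from tarballs rather than installed from apt:
$ sudo apt-get update
$ sudo apt-get install -y openjdk-6-jdk ssh rsync   # package names assumed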
RHive is a package that extends R's computing power with Hive's high-performance queries. It makes it very easy to invoke HQL from the R environment, and it also allows R objects and functions to be used in Hive. In theory, it combines Hive's effectively unlimited data-processing capacity with R's data-mining tools, making it an excellent environment for big-data analysis and mining.
Resource bundle: http://pan.baidu.com/s/1ntwzeTb
Installation
First, the inst ...
authorized_keys of the datanode (the 192.168.1.107 node):
A. Copy the namenode's id_dsa.pub file:
$ scp id_dsa.pub root@192.168.1.108:/home/hadoop/
B. Log on to 192.168.1.108 and run: $ cat id_dsa.pub >> .ssh/authorized_keys
The other datanodes perform the same operation.
Note: If the configuration is complete and the namenode still cannot access a datanode, you can modify the permissions of
authorized_keys: $ chmod 600 authorized_keys.
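A consolidated sketch of the passwordless-SSH steps above; the key type is inferred from the id_dsa file name, and the use of the root account on the datanode follows the snippet:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa                # on the namenode
$ scp ~/.ssh/id_dsa.pub root@192.168.1.108:/home/hadoop/
$ ssh root@192.168.1.108 'cat /home/hadoop/id_dsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
$ ssh root@192.168.1.108 hostname                         # should no longer prompt for a password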
4. Disable the firewall
$ sudo ufw disable
Not
Many new users encounter problems with Hadoop installation, configuration, deployment, and usage the first time. This article is both a test summary and a reference for most beginners (of course, there is also a lot of related information online).
Hardware environment
There are two machines in total: one serves as the master, and the other uses VMs to install two systems (as slaves); all three system ...
. MapReduce is free to select a node that holds a copy of a given split's/block's data.
An input split is a logical division, while an HDFS block is the physical division of the input data. When they coincide, processing is highly efficient. In practice, however, they never align completely: records may cross block boundaries, so a compute node processing a particular split may have to fetch a fragment of a record from another data block. Hado
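A hedged illustration of keeping the logical split aligned with the physical block; the Hadoop 2.x property names and the examples jar below are assumptions, not from the article. FileInputFormat computes the split size as max(minSize, min(maxSize, blockSize)), so pinning min and max to the block size yields one split per HDFS block:
$ hdfs getconf -confKey dfs.blocksize          # e.g. 134217728 (128 MB)
$ hadoop jar hadoop-mapreduce-examples.jar wordcount \
    -D mapreduce.input.fileinputformat.split.minsize=134217728 \
    -D mapreduce.input.fileinputformat.split.maxsize=134217728 \
    /input /output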
Installing the JDK
$ yum install java-1.7.0-openjdk*
Check the installation: java -version
Create the hadoop user and set it up so that it can ssh to localhost without a password:
$ su - hadoop
$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ cd /home/ ...
Ubuntu version: 12.04.3 64-bit
Hadoop runs on a Java virtual machine, so you will need to install the JDK; the JDK installation and configuration method is covered in another blog post on installing JDK 1.7 under Ubuntu 12.04.
Source package preparation:
I downloaded hadoop-1.2.1.tar.gz; this version is relatively stable and is available from the official mirror list at http://www.apache.org/dyn/closer.cgi/
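A hedged sketch of fetching and unpacking that package; the archive URL below is the Apache archive path for this release and the unpack location is an assumption:
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
$ sudo tar -xzf hadoop-1.2.1.tar.gz -C /usr/local
$ ls /usr/local/hadoop-1.2.1                       # unpack location assumed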
There are already a lot of articles about Hadoop installation on the network. I also tried to install it following their methods; well, it did not go smoothly. Whenever a problem appeared, I could only turn to Teacher Gu for each one, and what Teacher Gu provided was messy, but in the end it was installed. I wrote this article based on my machine names; it is for reference only.
Machine name   IP address      Role
Master         10.64.79.153    namenode
I. Hadoop installation and considerations
1. To install the Hadoop environment, you must have a Java environment on your system.
2. SSH must be installed; some systems install it by default, and if not, install it manually. It can be installed with yum install -y ssh or with an ssh RPM package via rpm -ivh.
II. Install and configure the Java environment
Hadoop needs to run in a J ...
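A hedged sketch of checking those two prerequisites on a CentOS-style system; the exact package names are assumptions (the snippet's yum install -y ssh generally corresponds to the openssh packages):
$ java -version                                   # confirm a Java environment exists
$ yum install -y openssh-server openssh-clients   # run as root; package names assumed
$ service sshd start                              # run as root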
libsnappy.la
lrwxrwxrwx 1 root root      7 11:56 libsnappy.so -> libsnappy.so.1.2.1
lrwxrwxrwx 1 root root      7 11:56 libsnappy.so.1 -> libsnappy.so.1.2.1
-rwxr-xr-x 1 root root 147758 7 11:56 libsnappy.so.1.2.1
If no error occurred during the installation and the /usr/local/lib directory contains the files above, the installation was successful.
4. Hadoop-snappy source ...
I. System and software environment
1. Operating system: CentOS release 6.5 (Final), kernel version 2.6.32-431.el6.x86_64
master.fansik.com: 192.168.83.118
node1.fansik.com: 192.168.83.119
node2.fansik.com: 192.168.83.120
2. JDK version: 1.7.0_75
3. Hadoop version: 2.7.2
II. Pre-installation preparation
1. Turn off the firewall and SELinux:
# setenforce 0
# service iptables stop
2. Configure the hosts file:
192.168.83.118 maste ...
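A hedged sketch of that hosts file, completing the truncated entry from the hostnames and addresses listed in the environment section (the short aliases are assumptions):
# /etc/hosts
192.168.83.118 master.fansik.com master
192.168.83.119 node1.fansik.com  node1
192.168.83.120 node2.fansik.com  node2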
Please refer to the original author, Xie, http://m.blog.itpub.net/30089851/viewspace-2121221/
1. Versions
hadoop-2.7.2 + hbase-1.1.5 + hive-2.0.0, kylin-1.5.1
Kylin 1.5 (apache-kylin-1.5.1-hbase1.1.3-bin.tar.gz)
2. Hadoop environment compiled to support the snappy compression library
Recompile the hadoop-2.7.2-src native libraries to support snappy compression/decompression.
3. Environment preparation
hadoop-2.7.2 + zookeeper-3.4.6 ...
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
    at jline.console.ConsoleReader.<init>(ConsoleReader.java: ...)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
    at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
    at org.apa ...
Required before installation
Because Hadoop distributes both file storage and task processing, a Hadoop distributed architecture has the following two types of servers responsible for different functions: the master server and the slave servers. Therefore, this installation manual introduces the two t ...
//usr/opt/scala
Set PATH for Scala in ~/.bashrc
$ sudo vi ~/.bashrc
export SCALA_HOME=/usr/opt/scala
export PATH=$PATH:$SCALA_HOME/bin
Download Spark 1.6 from an Apache mirror
Install Spark
$ mv spark-1.6.0-bin-without-hadoop/ /opt/spark
Set up the environment for Spark
$ sudo vi ~/.bashrc
export SPARK_HOME=/usr/opt/spark
export PATH=$PATH:$SPARK_HOME/bin
Add entries to the configuration
$ cd /opt/spark/conf
$ cp spark-env.sh.template spark-env.sh
$ vi spark-env.sh
HADOOP_
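A hedged sketch of what that spark-env.sh edit typically contains for a -without-hadoop build; SPARK_DIST_CLASSPATH is a standard Spark setting for such builds, but treating it as exactly what this article adds is an assumption:
# spark-env.sh (sketch; assumes Hadoop is already installed and on the PATH)
export SPARK_DIST_CLASSPATH=$(hadoop classpath)    # lets the without-hadoop build find Hadoop's jars
export HADOOP_CONF_DIR=/etc/hadoop/conf            # path assumed; point at your Hadoop configuration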