[Linux] [Hadoop] Running Hadoop on Linux
The preceding installation process needs a supplement: after the Hadoop installation is complete, run the relevant commands to start Hadoop.
Run the following command to start all services:
start-all.sh
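After the services come up, a quick way to confirm that everything started (assuming a classic Hadoop 1.x layout, as in this article) is to list the running Java daemons with jps:

```shell
start-all.sh
jps
# On Hadoop 1.x you would expect to see: NameNode, DataNode,
# SecondaryNameNode, JobTracker, TaskTracker (plus Jps itself)
```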
Installing fully distributed Hadoop on Linux (Ubuntu 12.10)
Hadoop installation is very simple. You can download the latest version from the official website; it is best to use the stable release. In this example, a three-machine cluster is installed.
Web pages for verification:
http://localhost:50030 (MapReduce web page)
http://localhost:50070 (HDFS web page)

Problems encountered:
1. Starting Hadoop always reports that JAVA_HOME is not configured. When I executed bin/start-all.sh in the Hadoop folder using the shell command from the tutorial, it kept reporting that JAVA_HOME was not set, even though I had set JAVA_HOME in the shell and double-checked it. The JAVA_HOME in this message actually has to be set in the configuration file: run vim ~/hadoop-1.2.1/conf/hadoop-env.sh (write the path according to your own installation) and make the change there.
2. -bash: bin/hadoop: Permission denied. If you downloaded directly from the network under Linux, this should not happen; but suppose that, like the author,
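Both fixes above can be scripted. This is a minimal sketch: the install location (~/hadoop-1.2.1) follows the article's example, and the JDK path is an assumption to be replaced with your own.

```shell
#!/bin/sh
# Assumed install location from the article; change to match your setup.
HADOOP_HOME="$HOME/hadoop-1.2.1"
mkdir -p "$HADOOP_HOME/conf" "$HADOOP_HOME/bin"

# Fix 1: JAVA_HOME must be set inside conf/hadoop-env.sh, not only in the
# login shell, because the start scripts spawn fresh shells for the daemons.
# (The JDK path below is an assumption.)
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> "$HADOOP_HOME/conf/hadoop-env.sh"

# Fix 2: "Permission denied" usually means the launcher scripts lost their
# execute bit; restore it on everything under bin/.
chmod +x "$HADOOP_HOME"/bin/* 2>/dev/null || true
```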
People rely on search engines every day to find specific content in the massive amount of data on the Internet. But have you ever wondered how these searches are executed? One approach is Apache Hadoop, a software framework that can process massive data in a distributed manner. One application of Hadoop is indexing Internet web pages in parallel.
Ubuntu:
1. Install Java.
First install Java. Because OpenJDK is installed by default on Ubuntu, you can uninstall it first. Enter this command in a terminal: sudo apt-get purge openjdk*.
1. Download the JDK for Linux from the Sun homepage. I downloaded jdk-6u37-linux-i586.bin.
2. Switch to the root user and to the directory where the JDK will be installed. My installation directory is /usr/lib/jvm/java. Copy the downloaded file there.
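After unpacking the JDK, the environment variables are typically appended to /etc/profile or ~/.bashrc. The exact JDK directory below is an assumption based on the version mentioned above (jdk-6u37) and the /usr/lib/jvm/java install directory:

```shell
# Typical environment setup for a manually installed Sun JDK
# (the jdk1.6.0_37 path is an assumption; adjust to your unpack location)
export JAVA_HOME=/usr/lib/jvm/java/jdk1.6.0_37
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```

Verify with `java -version` in a new shell once the real JDK is in place.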
There are currently two configured versions of the Eclipse plug-in: one recompiled under Windows and one reconfigured on Linux. This walks through the process of compiling and configuring the Eclipse plug-in under Linux.
Environment: Ubuntu 12.04 64-bit, kernel 3.2.0-29-generic, eclipse-jee-luna-r-linux-gtk-x86_64.tar.gz, Apache Ant (TM) version 1.8.2, JDK version 1.
Running Hadoop WordCount on Linux. Ubuntu terminal shortcut key: Ctrl + Alt + T. Hadoop startup command: start-all.sh. Normal execution looks like the following:
hadoop@...:~$ start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /home/hadoop/hadoop
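WordCount simply counts how often each token appears in the input. As a sketch of what the job computes (this is not a Hadoop command, just standard tools on a hypothetical sample file), the same counts can be produced locally:

```shell
# Create a tiny two-line sample input (hypothetical path)
printf 'hello world\nhello hadoop\n' > /tmp/wc_input.txt

# Emulate WordCount: split into one token per line, then count each
# unique token; sort -rn puts the most frequent token first
tr -s ' \t' '\n' < /tmp/wc_input.txt | sort | uniq -c | sort -rn
# "hello" appears twice, "world" and "hadoop" once each
```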
What is Impala?
Cloudera released Impala, an open-source real-time query project. According to various benchmarks, its SQL queries run 3 to 90 times faster than the original MapReduce-based Hive. Impala is modeled on Google's Dremel, but surpasses its model in SQL functionality.
1. Install JDK
The code is as follows
$ sudo yum install jdk-6u41-linux-amd64.rpm
2. Pseudo-distributed mode
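For reference, the conventional Hadoop 1.x pseudo-distributed setup (single node, one replica per block) is a pair of small XML files. The property values below are the commonly used defaults, not taken from this article:

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml: single node, so keep one replica per block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```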
Starting with Hadoop: by convention, first compile the source code and import it into Eclipse, so that later, when trying to understand this or that piece, you can go straight to the source. Compiling hadoop-2.4.1 requires protoc 2.5.0, so download protoc as well; I downloaded protobuf-2.5.0.tar.bz2. Before compiling protoc, install a few dependent packages: gcc, gcc-c++, and make (skip any that are already installed).
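The protoc build follows the standard autotools flow. A sketch, assuming the tarball named above has already been downloaded into the current directory:

```shell
# Build dependencies (skip any already installed):
#   yum install gcc gcc-c++ make

tar -xjf protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0
./configure
make
sudo make install
sudo ldconfig          # refresh the linker cache so libprotoc is found
protoc --version       # should report libprotoc 2.5.0
```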
This article aims to provide the most basic Hadoop/HDFS distributed environment setup usable in a production environment. It is both a personal summary and a convenience for newcomers. Installation and configuration of the base environment: JDK. It is not easy to find JDK 7 installation packages directly on Oracle's official website (http://www.oracle.com/), as JDK 8 is now the officially recommended version.
Steps for setting up a Hadoop cluster environment under Ubuntu 12.04.
I. Preparation before setting up the environment:
My local Ubuntu 12.04 32-bit machine serves as the master; it is the same machine used for the single-node Hadoop environment (http://www.linuxidc.com/Linux/2013-01/78112.htm). I also created 4 virtual machines in KVM, named: son-1 (Ubuntu 12.04 32-bit
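For a master plus KVM guests layout like this, each machine's /etc/hosts usually maps the node names to their IPs. The addresses below are placeholders, and the names son-2 through son-4 are a guess that the remaining guests follow the son-N pattern:

```
# /etc/hosts entries on every node (IP addresses are placeholders)
192.168.122.1    master
192.168.122.11   son-1
192.168.122.12   son-2
192.168.122.13   son-3
192.168.122.14   son-4
```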
Apache Hadoop ecosystem installation packages: http://archive.apache.org/dist/
Software installation directory: ~/app
JDK: jdk-7u45-linux-x64.rpm
Hadoop: hadoop-2.5.1-src.tar.gz
Maven: apache-maven-3.0.5-bin.zip
Protobuf: protobuf-2.5.0.tar.gz
1. Download Hadoop: wget http://... (URL truncated), then tar -zxvf hadoop-2.5.1-src.tar.gz. There is a BUILDING.txt file under the extracted
I. Create a new hadoop group and user (password: hadoop)
[root@... home]# groupadd hadoop
[root@... home]# useradd -g hadoop hadoop
[root@... home]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it a
Installing Hadoop on Linux (pseudo-distributed mode). Before we begin: when installing Hadoop on Linux, pay attention to permission issues and grant Hadoop permissions to a non-root user. This article does not cover how to create a new user.
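Granting the non-root user those permissions typically comes down to making it the owner of the Hadoop directories. A one-line sketch, with an assumed install path:

```shell
# Assumed install path; adjust to wherever Hadoop was unpacked
sudo chown -R hadoop:hadoop /usr/local/hadoop
```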
The content of this page is sourced from the Internet and does not represent Alibaba Cloud's opinion; products and services mentioned on this page have no relationship with Alibaba Cloud. If the content of the page confuses you, please write us an email and we will handle the problem within 5 days of receiving it.
If you find any instances of plagiarism from the community, please send an email to info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.