/download/lzo-2.04.tar.gz
tar -zxvf lzo-2.04.tar.gz
./configure --enable-shared
make
make install
The library files are installed in the /usr/local/lib directory by default.
One of the following operations is then required:
A. Copy the LZO libraries from the /usr/local/lib directory to /usr/lib (or /usr/lib64, depending on the system).
B. Create an lzo.conf file under the /etc/ld.so.conf.d/ directory, write the library path into it, and run ldconfig so the path enters the dynamic linker cache, as sketched below.
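A minimal sketch of option B (assuming the default install prefix /usr/local/lib mentioned above):
$ echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/lzo.conf   # record the LZO library path
$ sudo ldconfig                                                 # rebuild the dynamic linker cache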
Hadoop introductory articles. The first article is "Hadoop cluster practice (0) complete architecture design". In the previous series of articles, I also explained some Hadoop concepts, mainly addressing questions I had run into myself. At the same time, in the previous series of articles, I also listed some
Hadoop version: hadoop-2.5.1-x64.tar.gz
The setup referenced the two-node Hadoop build process at http://www.powerxing.com/install-hadoop-cluster/; I used VirtualBox to run four Ubuntu (version 15.10) virtual machines
After building a Hadoop cluster in part (ii), we have been able to run our own WordCount program smoothly. Now learn how to create your own Java applications, run them on a Hadoop cluster, and debug them. How many debugging methods are there? How is Hadoop debugged in Eclipse? In general, th
update-alternatives --config javac
4. Modify the machine name (this step can be omitted). When Ubuntu is installed, the default machine name is ubuntu. However, to make it easy to distinguish the servers in the cluster, you need to give each machine a different name. The machine name is determined by the /etc/hostname file.
1. Open the /etc/hostname file:
$ sudo gedit /etc/hostname
2. Change ubuntu in the /etc/hostname file to the corresponding machine name, s
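The same change can also be made without a graphical editor; a minimal sketch, where the node name slave1 is only a placeholder:
$ echo "slave1" | sudo tee /etc/hostname    # write the new machine name
$ sudo hostname slave1                      # apply it to the running system (persists across reboots via /etc/hostname)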
view the status.
PS:
Why is there no final content? During the operation I accidentally SSH'd into slave1, formatted the namenode there, and started it, and everything just collapsed!! There is actually a solution in this situation:
delete all four folders and recreate them. Alas, let's not talk about it.
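For reference, a hedged sketch of that recovery for a Hadoop 2.x layout; the directories below are placeholders for whatever hadoop.tmp.dir, dfs.name.dir, and dfs.data.dir point to on your nodes:
$ sbin/stop-dfs.sh                                                        # stop HDFS across the cluster
$ rm -rf /home/hadoop/tmp /home/hadoop/dfs/name /home/hadoop/dfs/data     # remove the broken folders on every node (placeholder paths)
$ mkdir -p /home/hadoop/tmp /home/hadoop/dfs/name /home/hadoop/dfs/data   # recreate them empty
$ bin/hdfs namenode -format                                               # reformat the namenode, on the master only
$ sbin/start-dfs.sh                                                       # bring HDFS back up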
You may also like the following articles about Hadoop:
Tutorial on standalone/pseudo-distributed installation and configuration of Hadoop 2.4.1 under Ubuntu 14.04
Hadoop, commonly known for distributed computing, started as an open-source project that originally grew out of two Google papers. However, just like Linux a decade ago, although Hadoop was initially very simple, the rise of big data in recent years has given it a stage on which to show its full value. This is exactly why Hadoop is widely used
distributed programs without knowing the underlying details of the distribution, taking advantage of the power of the cluster for high-speed computation and storage. The core design of the Hadoop framework is HDFS and MapReduce: HDFS provides storage for massive amounts of data, and MapReduce provides computation over massive amounts of data. To build a cluster
environment when I first learned about Hadoop, how to compile a Hadoop Eclipse plugin myself, and how to build a Hadoop programming environment in Eclipse. If you need them, you can click the links to the three articles I listed in the preface. The purpose this time is to use VMware to build a Hadoop
Building a fully distributed Hadoop cluster on virtual machines, in detail (1)
Building a fully distributed Hadoop cluster on virtual machines, in detail (2)
Building a fully distributed Hadoop cluster on virtual machines, in detail (3)
In the above three b
install ganglia-monitor.
# sudo apt-get install ganglia-webfrontend ganglia-monitor
Link the ganglia web directory to Apache's default directory:
# sudo ln -s /usr/share/ganglia-webfrontend /var/www/ganglia
ganglia-webfrontend is equivalent to the gmetad and ganglia-web mentioned above. It also automatically installs apache2 and rrdtool for you, which is very convenient.
3.3 Ganglia configuration
You must configure /etc/gmond.conf on each node. The configuratio
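As a hedged sketch of that per-node step, using the file path from the text and assuming the stock gmond.conf still has name = "unspecified" in its cluster block (the cluster name hadoop-cluster is a placeholder):
$ sudo sed -i 's/name = "unspecified"/name = "hadoop-cluster"/' /etc/gmond.conf   # give every node the same cluster name
$ sudo service ganglia-monitor restart                                            # restart gmond so it picks up the change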
Build a Hadoop distributed cluster (environment: Linux virtual machines)
1. Preparations: (Plan the host names, IP addresses, and roles. Set up three hosts first, and add the fourth host dynamically later. In the role column, namenode, secondaryNamenode, and jobTracker can also be deployed separately, depending on actual needs; the layout is not unique.)
Host name    Machine IP       Role
cloud01      192.168.1.101    namenode/secondaryNamenode/j
the version is too old, use the following command to ensure that all three machines have the SSH service):
[email protected]:~# sudo apt-get install ssh
Generate the master's public key:
[email protected]:~# cd ~/.ssh
[email protected]:~# ssh-keygen -t rsa    # keep pressing ENTER to save the generated key as .ssh/id_rsa
The master node needs to be able to SSH to itself without a password; this step is performed on the master node:
[email protected]:~# cat ~/.ssh/id_rsa.pub >> ~/
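Putting those fragments together, a minimal sketch of the passwordless-SSH setup on the master node (the -P "" flag simply stands in for pressing ENTER at every prompt, and authorized_keys is the standard target file assumed here):
$ sudo apt-get install ssh                           # make sure the SSH service is installed
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa           # generate a key pair without a passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorize the key for logins to this node
$ ssh localhost                                      # verify that the master can SSH to itself without a password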
I originally thought that setting up a local environment for programming and testing Hadoop programs would be very simple, but it unexpectedly turned out to be a lot of trouble. Here I share the steps and the problems I ran into, and I hope everything goes smoothly for everyone.
I. To connect to a Hadoop cluster and be able to write code against it, the following preparation is required:
1. Remote
Configuration requirements
Host memory: 4 GB.
Disk: more than GB.
The host machine runs a common Linux distribution.
Linux Containers (LXD): take a host running Ubuntu 16.04 as an example.
Install LXD:
$ sudo apt install lxd
$ sudo lxd init
To view the available image sources (if you use the default image, you can skip the next two steps and go directly to the launch step):
$ lxc remote list
Selec
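If the default image source is used, a minimal launch sketch looks like this; the container name hadoop-master is only a placeholder:
$ lxc launch ubuntu:16.04 hadoop-master   # create and start an Ubuntu 16.04 container
$ lxc list                                # confirm the container is running and note its IP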
Hadoop cluster construction
I. Purpose
This article describes how to install, configure, and manage a Hadoop cluster of practical significance. The scale of a Hadoop cluster can range from a small
there are additional machines in the cluster. Finally, the last generated authorized_keys is copied to the .ssh directory of each computer in the cluster, overwriting the previous authorized_keys.
10. After completing the ninth step, you can log in over password-free SSH to any other computer in the cluster from any computer in the cluster.
2.6 Time synchronization
In the networked
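A minimal sketch of one common way to keep the cluster clocks in sync on Ubuntu nodes (ntp.ubuntu.com is only an example server, not one named in the text):
$ sudo apt-get install ntpdate    # install the one-shot NTP client
$ sudo ntpdate ntp.ubuntu.com     # synchronize this node's clock against an NTP server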
This article assumes that the reader has a basic understanding of Docker, a grasp of basic Linux commands, and an understanding of Hadoop's general installation and simple configuration.
Experimental environment: Windows 10 + VMware Workstation 11 + Linux 14.04 Server + Docker 1.7
Windows 10 is the physical host operating system, on the 10.41.0.0/24 network segment; the virtual machine uses a NAT network with subnet 192.168.92.0/24 and gateway 192.168.92.2; Linux 14.04 is the virtual system and acts as the container host, with IP 192.168.92.12
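As a quick sanity check of that environment, a hedged sketch; the ubuntu:14.04 tag and the container name hadoop-node are assumptions chosen to match the host described above:
$ docker --version                                       # confirm Docker 1.7 is installed on the container host
$ docker pull ubuntu:14.04                               # fetch a base image for the Hadoop containers
$ docker run -it --name hadoop-node ubuntu:14.04 bash    # start an interactive container to install Hadoop into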
Remote connection
Xshell
Hadoop ecosystem
hadoop-2.6.0-cdh5.4.5.tar.gz
hbase-1.0.0-cdh5.4.4.tar.gz
hive-1.1.0-cdh5.4.5.tar.gz
flume-ng-1.5.0-cdh5.4.5.tar.gz
sqoop-1.4.5-cdh5.4.5.tar.gz
zookeeper-3.4.5-cdh5.4.5.tar.gz
This article builds a CDH5 cluster environment; the software listed above can be downloaded from this website
thi
/home/large.zip testfile.zip
Copy the local file large.zip to the HDFS home directory /user/hadoop/ under the name testfile.zip, then view the existing files:
[hadoop@hadoop1 hadoop]$ bin/hadoop dfs -ls
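A hedged sketch of the full copy-and-list sequence described above, assuming the hadoop dfs -put form (the leading part of the original command is cut off):
$ bin/hadoop dfs -put /home/large.zip testfile.zip    # copy the local file into /user/hadoop/ as testfile.zip
$ bin/hadoop dfs -ls                                  # list the files now present in the HDFS home directory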
9. Updating Hadoop nodes online:
Add nodes:
1).