"Source" self-learning from zero Hadoop (02): Environment Preparation

Contents
    • Background
    • Virtual Machines
    • Linux
    • System Installation
    • Series Index

This article is copyrighted by Mephisto and the Blog Park. You are welcome to reprint it, but you must retain this statement and give a link to the original article. Thank you for your cooperation.

The article was written by Mephisto. Sourcelink

Background

By now we have a rudimentary understanding of Hadoop: there is a NameNode and there are DataNodes, and although the NameNode and a DataNode can run on the same machine, that does not work well. Since my machine has only 8 GB of memory, I will create four virtual machines here: one dedicated to Ambari, one for the NameNode, and the other two for DataNodes.

Let's take the first step.

Virtual Machines

Here we use VMware Workstation 10 as our virtualization platform.

There is not much to say about installing VMware; guides can be found in many places.

I keep the VMware services disabled and only turn them on when I use them, so I wrote a batch script that, when run as administrator, starts the VMware services.

Script Address: GitHub
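
In case the script above is not handy, here is a minimal sketch of what such a start script can look like. The service names below are the usual display names of VMware Workstation's Windows services; they are an assumption and may differ on your installation, so confirm them in services.msc first.

    @echo off
    rem Start the VMware services on demand; run this as administrator.
    rem Service names are assumptions; verify them in services.msc.
    net start "VMware Authorization Service"
    net start "VMware DHCP Service"
    net start "VMware NAT Service"
    net start "VMware USB Arbitration Service"

A matching stop script is the same list with "net stop" in place of "net start".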

Linux

Here we choose 64-bit CentOS 6.4 as our operating system.

Below we install the four virtual machines one by one. We deliberately do not use cloning to create them quickly, because with a clone we would have to go back and modify things like the IP address, MAC address and UUID to prevent the problems that can arise. Once you are proficient, you can install by cloning instead; a sketch of the usual cleanup follows.
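
For reference, if you do clone a CentOS 6 guest later, these are the two places that typically need fixing before the clone's network works; a minimal sketch (the paths are the CentOS 6 defaults):

    # The old MAC address is usually pinned here; remove the stale rule
    # (or the whole file) and reboot so udev regenerates it for the new NIC.
    rm -f /etc/udev/rules.d/70-persistent-net.rules

    # Update the NIC config to the clone's new MAC address and IP, and
    # delete the copied UUID line so it does not clash with the original VM.
    vi /etc/sysconfig/network-scripts/ifcfg-eth0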

System Installation
1: Get the virtual machine software ready.
2: File -> New Virtual Machine, and select Custom.
3: Select Workstation 10 compatibility.
4: Choose to install the operating system later.
5: Select Linux -> CentOS 64-bit.
6: Enter the virtual machine name.

My naming convention here is H plus the last part of the IP address, so the machine at 192.168.1.30 is named H30.

7: Select the number of processors and cores.

Something modest is fine here. My host is quite ordinary, so each VM gets one core. Giving them more is pointless, since they would just compete with the host OS for CPU.

8: Select the memory size.

Because Ambari gets a server to itself, that VM can have a bit less memory, while the NameNode and DataNodes can be given a bit more. In fact I assigned only 1 GB each; this is just a test environment.

9: Select the network type.

If you are not sure what to pick, choose bridged networking here; it avoids some extra configuration. Once you have mastered the basics, the other modes are a story for another time.

10: Keep clicking Next through the following screens.

When you reach the disk size screen, I recommend 30 GB or more for every machine except the Ambari one. Then click Next until the wizard finishes.

11: Right-click H30 (and likewise the others), then Settings -> CD/DVD -> Use ISO image, and point it at the CentOS ISO.
12: Start H30.

The CentOS installation now begins. Start H30; if it was already powered on, restart it first.

13: Select the first option, "Install or upgrade an existing system".
14: Skip the media test.
15: Select English.

English is fine to use as-is. Leave the keyboard at the default, U.S. English.

16: Select "Basic Storage Devices".
17: When asked about the disk, discard all data.

We are installing a fresh system, so naturally there is no existing data to keep.

18: Set the hostname.
19: Configure the network: click Edit, tick "Connect automatically", and under IPv4 Settings choose Manual, then Add:
        ip: 192.168.1.30
        mask: 255.255.255.0
        gateway: 192.168.1.1
        dns: 192.168.1.1
    "Connect automatically" must be checked; otherwise the network is down by default after you log in, ping fails, and you have to fix the configuration by hand. The gateway and DNS must also be set, or the machine cannot reach the external network. When done, click Apply. (A sketch of the resulting config file follows this list.)
20: Set the root password. If it is too simple the installer will warn you; you can ignore that here, but in a production environment you should still set a complex one.
21: Accept the defaults on the next screen and select "Write changes to disk".
22: Select Minimal installation and tick the option to customize packages now. There is no need to choose Desktop or anything similar here: Desktop pulls in a lot of software and wastes memory. If you want a desktop, you might as well just use Windows.
23: Customize the repositories. After many experiments, I personally think that selecting Base under Base System is enough: things like vim, yum and ssh are all included, so you basically never have to mount the image again to install them. You can add other groups, but this choice is the most convenient for beginners like us. If you really know what you are doing, you can select only the common tools such as vim, yum and ssh.
24: The packages install.
25: Installation complete; reboot.
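
For reference, the values from step 19 end up in the NIC configuration file of the installed system. A minimal sketch of what /etc/sysconfig/network-scripts/ifcfg-eth0 should roughly contain on CentOS 6 with these settings (machine-specific lines such as HWADDR and UUID are omitted and will vary):

    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes              # "Connect automatically"
    BOOTPROTO=none          # manual (static) addressing
    IPADDR=192.168.1.30
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    DNS1=192.168.1.1

The hostname from step 18 is stored in /etc/sysconfig/network. After the reboot in step 25, a quick check that everything took effect is to log in and run "ifconfig" and "ping 192.168.1.1".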
Series Index

"Source" Self-Learning Hadoop series index from zero

