After the Cloudera Manager server and Agent are started, you can configure the CDH5 installation.
You can then access port 7180 on the master node in a browser (the CM server takes some time to start, so the page may not be reachable right away). The default user name and password are both admin.
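If you want to verify from the command line first that the console is up, a quick probe of the port is enough (a minimal sketch; hadoop1 is assumed to be the master node's hostname, as used later in this guide):

curl -I http://hadoop1:7180
# or, on the master node itself, confirm that something is listening on 7180
netstat -lnpt | grep 7180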
Making a Local Source
First, download CDH from http://archive-primary.cloudera.com/cdh5/parcels/5.3.4/ to the local machine.
There are three files to download: the parcel package that matches your system version, its .sha1 file, and the manifest.json file:
CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel, CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha1, manifest.json
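For example, on a machine with internet access the three files can be fetched with wget (a sketch; the parcel file names should match what is actually listed in the 5.3.4 directory on the archive):

wget http://archive-primary.cloudera.com/cdh5/parcels/5.3.4/CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel
wget http://archive-primary.cloudera.com/cdh5/parcels/5.3.4/CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha1
wget http://archive-primary.cloudera.com/cdh5/parcels/5.3.4/manifest.json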
When the download is complete, place these files in /opt/cloudera/parcel-repo on the master node (this directory is created when Cloudera Manager 5 is installed). Note that the directory name must be spelled exactly, without a single character out of place.
[root@hadoop1 parcel-repo]# pwd
/opt/cloudera/parcel-repo
[root@hadoop1 parcel-repo]# ll
total 1533188
-rw-r-----. 1 root root 1569930781 ... CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel
-rw-r--r--. 1 root root        ... CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha
-rw-r--r--. 1 root root      42475 ... manifest.json
Next, open the manifest.json file, which is a JSON-format configuration. What we need is the hash that corresponds to our system version; since we are using CentOS 6.5, find the entry for el6.
At the bottom of that curly-brace block, find the value of the "hash" field.
Copy the value of "hash", then rename the CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha1 file to CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha and replace its contents with the copied hash value. Save the file, and the local source is ready.
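The rename and hash replacement can also be scripted. A minimal sketch, run in /opt/cloudera/parcel-repo; it assumes each entry in manifest.json carries a parcelName field alongside the hash described above, and the file names follow the listing shown earlier (adjust them to match your download):

cd /opt/cloudera/parcel-repo

# Cloudera Manager expects a .sha file, so rename the downloaded .sha1 file
mv CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha1 CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha

# Pull the hash for the el6 parcel out of manifest.json and write it into the .sha file
python -c '
import json
data = json.load(open("manifest.json"))
for p in data["parcels"]:
    if p["parcelName"].endswith("el6.parcel"):
        print(p["hash"])
' > CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha

# The .sha file should now contain the single hash value
cat CDH-5.3.4-1.cdh5.3.4.p0.4-el6.parcel.sha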
From here on, the remaining steps are done in the console, following the installation wizard.
Installing CDH
Open http://hadoop1:7180 and log in to the console; the default account and password are both admin. Select the free edition to install. CM5's support for Chinese is quite good, so just follow the prompts. If there is any problem with the system configuration during installation, you will be prompted; follow those prompts to fix the system and install the components.
Login Interface
Choose an installation version
Specify the installation host
Select Local Parcel Package
Next, the package name shown below should appear, which indicates that the local parcel package is configured correctly; simply click Continue.
Cluster Installation
Check Host correctness
Next comes the host inspection, where you may run into the following issue:
Cloudera recommends setting /proc/sys/vm/swappiness to 0; the current setting is 60. Use the sysctl command to change the setting at runtime and edit /etc/sysctl.conf so that the setting is preserved after a reboot. You may continue with the installation, but you may run into problems with Cloudera Manager reporting that your hosts are unhealthy because they are swapping. The following hosts are affected: ...
This can be resolved by running the following command on each affected host:
echo 0 > /proc/sys/vm/swappiness
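The echo above only changes the value for the running system; as the warning notes, the setting also needs to go into /etc/sysctl.conf to survive a reboot. A minimal sketch, run as root on each affected host (vm.swappiness is the standard sysctl key for this kernel parameter):

echo 0 > /proc/sys/vm/swappiness
# persist the setting so it is still 0 after a reboot
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p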
Select Installation Services
Cluster Role Assignment
In general, the defaults can be kept (Cloudera Manager assigns roles automatically based on each machine's configuration; if you need special adjustments, you can set them yourself).
Cluster Database Settings
Cluster Review Changes
If you have no other requirements, keep the default configuration.
Finally we reach the step where the individual services are installed.
Note that an error may occur when installing Hive, because we use MySQL as Hive's metadata store and Hive does not ship with a MySQL driver by default; copy one in with the following command:
cp /opt/cm-5.3.4/share/cmf/lib/mysql-connector-java-5.1.25-bin.jar /opt/cloudera/parcels/CDH-5.3.4-1.cdh5.3.4.p0.12/lib/hive/lib/
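To confirm the driver is in place before retrying the Hive step (same paths as in the copy command above; adjust the parcel directory to match your installation):

ls /opt/cloudera/parcels/CDH-5.3.4-1.cdh5.3.4.p0.12/lib/hive/lib/ | grep mysql-connector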
Continue the installation after that and you should not run into any further problems.
After a long wait, the installation of the service is complete:
Once the installation is complete, you can go to the cluster page to see the current state of the cluster.
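Besides the web UI, a quick command-line check gives a similar overview of HDFS (run on any cluster node; this assumes the standard hdfs system user created by a CDH install):

sudo -u hdfs hdfs dfsadmin -report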
Test
[root@hadoop1 /]# su hdfs
[hdfs@hadoop1 /]$ yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi <number of maps> <samples per map>
Number of Maps  = ...
Samples per Map = ...
Wrote input for Map #0
Wrote input for Map #1
...
INFO mapreduce.Job:  map 100% reduce 0%
INFO mapreduce.Job:  map 100% reduce 100%
INFO mapreduce.Job: Job job_1435378145639_0001 completed successfully
INFO mapreduce.Job: Counters: ...
    (Map-Reduce Framework, Shuffle Errors, and File Input/Output Format counters omitted)
Job Finished in 50.543 seconds
Estimated value of Pi is 3.14120000000000000000
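If you also want to confirm that HDFS itself accepts reads and writes, a simple put/cat round trip is enough (a sketch; /tmp/cm_smoke_test is just an arbitrary test path):

echo "hello cdh" > /tmp/hello.txt
sudo -u hdfs hdfs dfs -mkdir -p /tmp/cm_smoke_test
sudo -u hdfs hdfs dfs -put /tmp/hello.txt /tmp/cm_smoke_test/
sudo -u hdfs hdfs dfs -cat /tmp/cm_smoke_test/hello.txt
sudo -u hdfs hdfs dfs -rm -r /tmp/cm_smoke_test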
View MapReduce Jobs
Check Hue
The first time you open Hue, it asks you to set an initial user name and password. Once that is set and you log in to the back end, Hue runs a configuration check and reports if everything is normal.
At this point, our cluster is ready to use.