1. What is CDH?
Hadoop is an open-source Apache project, so many companies commercialize this foundation, and Cloudera has made its own changes to Hadoop. Cloudera's release of Hadoop is what we call CDH (Cloudera's Distribution including Apache Hadoop). It provides the core capabilities of Hadoop:
– Scalable storage
– Distributed computing
– A web-based user interface
Advantages of CDH:
– Clear versioning
A problem encountered: by default CDH writes its logs to the system /var/log directory. Because this was a virtual instance with a system disk of only 50 GB, Cloudera Manager kept raising alerts that the log directory did not have enough space. Deleting logs periodically with a script solves the immediate problem, but it is not a good approach. The other option is to modify the configuration directly, manually changing every /var/log/* path to /home/var/log/*.
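As a stopgap for the first approach, a minimal cleanup sketch might look like the following; the path patterns and the 7-day retention are assumptions, not values from the original post:

    # prune CDH service logs older than 7 days (paths and retention are assumptions)
    find /var/log/cloudera-scm-* /var/log/hadoop-* -name '*.log*' -mtime +7 -delete 2>/dev/null

Scheduling this from cron only buys time; relocating the log directories in the service configuration remains the cleaner fix.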
1. If CDH cannot be installed successfully and you can only start over, here is an artifact to share: a script that should uninstall almost everything cleanly so you can reinstall. Written as a script, the content is as follows, a life-saving artifact:
#!/bin/bash
sudo /usr/share/cmf/uninstall-cloudera-manager.sh
sudo service cloudera-scm-server stop
sudo service cloudera-scm-server-db stop
sudo service cloudera-scm-agent stop
sudo yum re
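The snippet is cut off at the yum step. A plausible completion, assuming the stock Cloudera Manager package names and default state directories (these are assumptions, not part of the original script):

    # remove the Cloudera Manager packages (package glob is an assumption)
    sudo yum remove -y 'cloudera-manager-*'
    sudo yum clean all
    # wipe leftover state so a fresh install does not pick up stale data
    sudo rm -rf /var/lib/cloudera* /var/log/cloudera* /var/run/cloudera*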
Tags: CDH, MySQL 5.7 binary
CDH 5.8.3 offline installation – MySQL 5.7 binary deployment
1. Check whether the system already has MySQL installed; it needs to be uninstalled cleanly:
# rpm -qa | grep -i mysql
mysql-server-5.1.71-1.el6.x86_64
mysql-5.1.71-1.el6.x86_64
mysql-devel-5.1.71-1.el6.x86_64
qt-mysql-4.6.2-26.el6_4.x86_64
mysql-libs-5.1.71-1.el6.x86_64
perl-DBD-MySQL-4.013-3.el6.x86_64
# rpm -e mysql-server-5.1.71-1.el6.x86_64 --nodeps
# rpm -e mysql-5.1.71-1.el6.x86_64 --nodeps
# rpm -e my
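Rather than removing each package by hand, the same cleanup can be done in one line; a sketch, assuming every match really should go:

    # remove every installed package whose name contains 'mysql'
    rpm -qa | grep -i mysql | xargs -r rpm -e --nodeps

The --nodeps flag mirrors the manual commands above; it skips dependency checks, so use it only when replacing MySQL outright.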
CDH: full name Cloudera's Distribution including Apache Hadoop.
CDH versions: Hadoop is an open-source project, so many companies commercialize this foundation, and Cloudera has made corresponding changes to Hadoop. Cloudera's release is what we call CDH (Cloudera's Distribution including Apache Hadoop). So far there have been 5 versions of CDH, of which the f
I. Risks are classified into internal and external.
First, internal: during deployment of a CDH big data cluster, user accounts named after the services are created automatically. The fields in /etc/passwd are: username (login_name) : password placeholder (passwd) : user ID (UID) : group ID (GID) : comment (users) : home directory : login shell.
cat /etc/shadow
The second column in the shadow file is the encrypted password. For these service accounts this column is "!!", that is ":!!:",
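A quick way to inspect these auto-created accounts; the service-name list is a typical CDH default and is an assumption here:

    # list CDH service accounts (name list is an assumption)
    getent passwd | egrep '^(hdfs|yarn|hive|hue|oozie|zookeeper|impala|solr|flume|sqoop):'
    # show accounts whose shadow password field is the locked marker '!!'
    sudo awk -F: '$2 == "!!" {print $1}' /etc/shadow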
A recent project needed Oozie to schedule Hive SQL in a workflow, and we found that query statements could not be executed; see https://community.cloudera.com/t5/Batch-Processing-and-Workflow/oozie-hive-action-failed-with-wrong-tmp-path/td-p/37443. The culprit is a CDH bug, so the version needs to be upgraded.
Upgrade steps:
1. Query the services on a single node:
service --status-all
Only cloudera-scm-agent was found, no cloudera-scm-server, indicating that this is not the primary
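A narrower version of the same check, as a sketch:

    # show only the Cloudera Manager daemons on this node
    service --status-all 2>&1 | grep -i cloudera-scm
    # a node running cloudera-scm-server is the Cloudera Manager host;
    # agent-only nodes are managed members of the cluster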
We know that a NameNode single-machine failure is painful, and CDH offers a high-availability option. The operation is as follows (the original walkthrough relied on CM screenshots; only the steps survive):
1. Click "HDFS".
2. Select the NameNode.
3. Click "Action" and select the high-availability option.
4. Set your own name and click "Continue".
5. Click "Continue" again, keeping the defaults, and continue past the prompt.
6. Go back, enter a value, and continue.
7. CM indicates that the operation is being processed, then: started successfully!
Go back and look at the Overview interface: you can see that the Seconda
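Once the wizard finishes, the new state can also be confirmed from the shell; a sketch, where nn1 and nn2 are hypothetical NameNode IDs (CM assigns the real ones):

    # one NameNode should report 'active', the other 'standby'
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2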
Recently the CDH cluster alarmed frequently because some hosts were swapping heavily, which greatly affected cluster performance. It turned out that a setting (/proc/sys/vm/swappiness) needed to be modified; its default value is 60.
Setting the vm.swappiness Linux kernel parameter: vm.swappiness is a Linux kernel parameter that controls how aggressively memory pages are swapped to disk. It can be set to a value between 0 and 100; the higher the value, the more aggressively the kernel swaps.
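A minimal sketch of the change; the target value of 1 is an assumption (Cloudera has recommended low values such as 0 or 1, depending on kernel version):

    # check the current value
    cat /proc/sys/vm/swappiness
    # apply at runtime
    sudo sysctl -w vm.swappiness=1
    # persist across reboots
    echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf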
Error when executing a Hive count query:
Error: Java heap space
The solution is to run set io.sort.mb=10; in the Hive session.
The same Java heap space problem appears when executing the Hadoop examples.
Diagnostic messages for this task:
Error: Java heap space
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Hive execution of HQL prompts the error: Error: Java heap space
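In session form, the workaround looks like this; my_table is a hypothetical table name:

    hive> set io.sort.mb=10;
    hive> select count(*) from my_table;

Lowering io.sort.mb shrinks the in-memory sort buffer each map task allocates, so the job fits in a small heap; raising the task heap size instead would be the more durable fix.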
After installing CDH, adding the Hive service through CM produces an error message. When adding the service, Hive is configured as follows (screenshot omitted). Error message and error log:
exec /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/bin/hadoop jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/lib/hive-cli-1.1.0-cdh5.4.7.jar org.apache.hive.beeline.HiveSchemaTool -verbose -dbType my
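The failing command is CM invoking Hive's schema tool against the metastore database. It can be run by hand to see the full error; a sketch, assuming a MySQL metastore and the parcel path above:

    # -info prints the schema state instead of modifying it
    /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hive/bin/schematool -dbType mysql -info -verbose

A common cause of this step failing is a missing MySQL JDBC driver on Hive's classpath, though the original post is cut off before naming one.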
1. Create the lib121 directory under the Hive 0.13.1 installation
cd /opt/cloudera/parcels/CDH/lib/hive; mkdir lib121
2. Download Hive 1.2.1 and copy all the files from that version's lib directory into lib121
3. Modify the HIVE_LIB variable in /opt/cloudera/parcels/CDH/lib/hive/bin/hive
HIVE_LIB=${HIVE_HOME}/lib121
4. Update the JLine jar package under Hadoop and remove the old JLine jar package:
rm -rf jline-0.9.94.jar
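The "update" half of step 4 is implied but not shown. A sketch of the likely copy, assuming jline-2.12.jar (the version Hive 1.2.1 ships) and the parcel's Hadoop lib path; both are assumptions:

    # copy the newer JLine that Hive 1.2.1 depends on into Hadoop's lib directory
    cp /opt/cloudera/parcels/CDH/lib/hive/lib121/jline-2.12.jar /opt/cloudera/parcels/CDH/lib/hadoop/lib/

Without a matching JLine on Hadoop's classpath, the Hive 1.2.1 CLI typically fails at startup with a jline.Terminal incompatibility error.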
Configuring CDH and managing services: tuning HDFS before shutting down a DataNode.
Role requirements: Configurator, Cluster Administrator, or Full Administrator.
When a DataNode is shut down, the NameNode ensures that every block that was on that DataNode is still available across the cluster according to the replication factor. This process involves small batches of block replication between DataNodes. In this case, a DataNode has thousands of blocks, and
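The re-replication progress can be watched from the shell while a DataNode is being taken out of service; a sketch:

    # count blocks that are still below their target replication factor
    hdfs fsck / 2>/dev/null | grep -i 'under-replicated'
    # cluster-wide view, including decommissioning datanodes
    hdfs dfsadmin -report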
I. Installing protobuf (Ubuntu system)
1. Create a file libprotobuf.conf in the /etc/ld.so.conf.d/ directory with the content /usr/local/lib; otherwise this error is reported: error while loading shared libraries: libprotoc.so.8: cannot open shared obj
2. ./configure; make; make install
3. Verify that the installation is complete:
protoc --version
libprotoc 2.5.0
II. Installing the Snappy native library
http://www.filewatcher.com/m/snappy-1.1.1.tar.gz.1777992-0.html
Download snappy-1.1.1.tar.gz, unzip it, then ./configure; make; make in
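End to end, the protobuf half might look like this; the tarball name protobuf-2.5.0.tar.gz is an assumption inferred from the libprotoc 2.5.0 version check above:

    tar -xzf protobuf-2.5.0.tar.gz && cd protobuf-2.5.0
    ./configure && make && sudo make install
    # register /usr/local/lib with the dynamic linker, per step 1
    echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/libprotobuf.conf
    sudo ldconfig
    protoc --version   # expect: libprotoc 2.5.0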
How to do the integration is actually quite simple; there are tutorials online. See http://blog.csdn.net/fighting_one_piece/article/details/40667035. I used the first integration approach. When you actually do it, all kinds of problems come up. It took from about 5:00 on the morning of 2014.12.17 until 18:30 that evening. Summing up, it is actually very simple, but it took so long! With this kind of thing, every fall makes you wiser. Question 1: various packages need to be referenced, and these packages to bre
1. Create a collection
Connect via SSH to the CDH node that has Solr installed.
Run the solrctl instancedir --generate /solr/test/gx_sh_tl_tgryxx_2015 command to generate the default configuration for the gx_sh_tl_tgryxx_2015 collection.
Enter the /solr/test/gx_sh_tl_tgryxx_2015/conf directory and first edit schema.xml to configure the field information; specifics are easy to find online (see the sketch after this list for the remaining solrctl steps).
The solrconfig.xml file in the other
The following
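The post is cut off before the upload and create steps. A sketch of how they usually proceed with solrctl under CDH; the shard count of 1 is an assumption:

    # upload the edited instance directory to ZooKeeper
    solrctl instancedir --create gx_sh_tl_tgryxx_2015 /solr/test/gx_sh_tl_tgryxx_2015
    # create the collection from that configuration
    solrctl collection --create gx_sh_tl_tgryxx_2015 -s 1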