Installing HBase under Windows 10: a tutorial

Source: Internet
Author: User

For work I have started big-data development. By following the configuration steps below you can deploy a Hadoop + HBase stack on Windows 10, which is convenient for single-machine testing, debugging, and development.

Preparation downloads:

1. hadoop-2.7.2:
   https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/stable/

2. hadoop-common-2.2.0-bin-master:
   https://github.com/srccodes/hadoop-common-2.2.0-bin/archive/master.zip

3. hbase-1.2.3:
   http://apache.fayea.com/hbase/stable/

4. jdk1.8:
   http://dl-t1.wmzhe.com/30/30118/jdk_1.8.0.0_64.exe

If WinRAR fails to extract any of the above archives, install Cygwin and run the following at its command prompt: tar -zxvf hadoop-2.7.2.tar.gz

Unzip the three downloaded archives separately to:

D:\HBase\hadoop-2.7.2

D:\HBase\hadoop-common-2.2.0-bin-master

D:\HBase\hbase-1.2.3

Copy the following 7 files (note: copy only these 7) from D:\HBase\hadoop-common-2.2.0-bin-master\bin to D:\HBase\hadoop-2.7.2\bin:

hadoop.dll, hadoop.exp, hadoop.lib, hadoop.pdb, libwinutils.lib, winutils.exe, winutils.pdb
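Assuming the paths used in this tutorial, the copy step above can be done in one line from a Windows command prompt (inside a .cmd batch file, write %%f instead of %f):

for %f in (hadoop.dll hadoop.exp hadoop.lib hadoop.pdb libwinutils.lib winutils.exe winutils.pdb) do copy "D:\HBase\hadoop-common-2.2.0-bin-master\bin\%f" "D:\HBase\hadoop-2.7.2\bin\"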

Configure Hadoop:

D:\HBase\hadoop-2.7.2\etc\hadoop

hadoop-env.cmd content is as follows:

@echo off
@rem Licensed to the Apache Software Foundation (ASF) under one or more
@rem contributor license agreements.  See the NOTICE file distributed with
@rem this work for additional information regarding copyright ownership.
@rem The ASF licenses this file to you under the Apache License, Version 2.0
@rem (the "License"); you may not use this file except in compliance with
@rem the License.  You may obtain a copy of the License at
@rem
@rem     http://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.

@rem Set Hadoop-specific environment variables here.

@rem The only required environment variable is JAVA_HOME.  All others are
@rem optional.  When running a distributed configuration it is best to
@rem set JAVA_HOME in this file, so that it is correctly defined on
@rem remote nodes.

@rem The java implementation to use.  Required.
set JAVA_HOME=%JAVA_HOME%

@rem The jsvc implementation to use. Jsvc is required to run secure datanodes.
@rem set JSVC_HOME=%JSVC_HOME%

@rem set HADOOP_CONF_DIR=

@rem Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
if exist %HADOOP_HOME%\contrib\capacity-scheduler (
  if not defined HADOOP_CLASSPATH (
    set HADOOP_CLASSPATH=%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
  ) else (
    set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
  )
)

@rem The maximum amount of heap to use, in MB. Default is 1000.
@rem set HADOOP_HEAPSIZE=
@rem set HADOOP_NAMENODE_INIT_HEAPSIZE=""

@rem Extra Java runtime options.  Empty by default.
@rem set HADOOP_OPTS=%HADOOP_OPTS% -Djava.net.preferIPv4Stack=true

@rem Command specific options appended to HADOOP_OPTS when specified
if not defined HADOOP_SECURITY_LOGGER (
  set HADOOP_SECURITY_LOGGER=INFO,RFAS
)
if not defined HDFS_AUDIT_LOGGER (
  set HDFS_AUDIT_LOGGER=INFO,NullAppender
)

set HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_NAMENODE_OPTS%
set HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS %HADOOP_DATANODE_OPTS%
set HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_SECONDARYNAMENODE_OPTS%

@rem The following applies to multiple commands (fs, dfs, fsck, distcp etc)
set HADOOP_CLIENT_OPTS=-Xmx512m %HADOOP_CLIENT_OPTS%
@rem set HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData %HADOOP_JAVA_PLATFORM_OPTS%"

@rem On secure datanodes, user to run the datanode as after dropping privileges
set HADOOP_SECURE_DN_USER=%HADOOP_SECURE_DN_USER%

@rem Where log files are stored.  %HADOOP_HOME%\logs by default.
@rem set HADOOP_LOG_DIR=%HADOOP_LOG_DIR%\%USERNAME%

@rem Where log files are stored in the secure data environment.
set HADOOP_SECURE_DN_LOG_DIR=%HADOOP_LOG_DIR%\%HADOOP_HDFS_USER%

@rem The directory where pid files are stored. /tmp by default.
@rem NOTE: this should be set to a directory that can only be written to by
@rem       the user that will run the hadoop daemons.  Otherwise there is the
@rem       potential for a symlink attack.
set HADOOP_PID_DIR=%HADOOP_PID_DIR%
set HADOOP_SECURE_DN_PID_DIR=%HADOOP_PID_DIR%

@rem A string representing this instance of hadoop. %USERNAME% by default.
set HADOOP_IDENT_STRING=%USERNAME%

set JAVA_HOME=D:\Java\jdk1.8.0_31
set HADOOP_HOME=D:\HBase\hadoop-2.7.2
set HADOOP_PREFIX=D:\HBase\hadoop-2.7.2
set HADOOP_CONF_DIR=%HADOOP_PREFIX%\etc\hadoop
set YARN_CONF_DIR=%HADOOP_CONF_DIR%
set PATH=%PATH%;%HADOOP_PREFIX%\bin

  

core-site.xml content is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:19000</value>
  </property>
</configuration>
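Once the environment variables from hadoop-env.cmd are in effect, the configured value can be read back at a command prompt with the hdfs getconf tool that ships with Hadoop (the output should match the value set above; the exact result depends on your configuration):

hdfs getconf -confKey fs.default.name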

hdfs-site.xml content is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:19000</value>
  </property>
</configuration>

Create mapred-site.xml with the content below; change the Chinese name (rendered here as "Feng Minggang") to your current Windows account name:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.job.user.name</name>
    <value>Feng Minggang</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.apps.stagingDir</name>
    <value>/user/Feng Minggang/staging</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>local</value>
  </property>
</configuration>

  

Yarn-site.xml content is as follows:
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.server.resourcemanager.address</name>
    <value>0.0.0.0:8020</value>
  </property>
  <property>
    <name>yarn.server.resourcemanager.application.expiry.interval</name>
    <value>60000</value>
  </property>
  <property>
    <name>yarn.server.nodemanager.address</name>
    <value>0.0.0.0:45454</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.server.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/dep/logs/userlogs</value>
  </property>
  <property>
    <name>yarn.server.mapreduce-appmanager.attempt-listener.bindAddress</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>yarn.server.mapreduce-appmanager.client-service.bindAddress</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>%HADOOP_CONF_DIR%,%HADOOP_COMMON_HOME%/share/hadoop/common/*,%HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,%HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,%HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,%HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*,%HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_YARN_HOME%/share/hadoop/yarn/*,%HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>
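With the four configuration files above in place, the Hadoop side can be smoke-tested before moving on to HBase. A minimal sketch using the scripts shipped in the Hadoop 2.7.2 distribution; run hadoop-env.cmd first so the environment variables are set, and note that formatting the namenode erases any existing HDFS data:

cd /d D:\HBase\hadoop-2.7.2
etc\hadoop\hadoop-env.cmd
bin\hdfs namenode -format
sbin\start-dfs.cmd
sbin\start-yarn.cmd

The matching shutdown scripts are sbin\stop-yarn.cmd and sbin\stop-dfs.cmd.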

  

To be continued: the remaining HBase configuration and how to start and shut down the services.

