A. Common Hadoop commands
1. The fs command for Hadoop
# List all of Hadoop's fs commands
hadoop fs
# Upload a file (both put and copyFromLocal are upload commands)
hadoop fs -put jdk-7u55-linux-i586.tar.gz hdfs://hucc01:9000/jdk
hadoop fs -copyFromLocal jdk-7u55-linux-i586.tar.gz hdfs://hucc01:9000/jdk
Fsck commands in Hadoop
The fsck command in Hadoop checks files in HDFS for block corruption or data loss and generates an overall health report of the HDFS file system. The report includes: Total blocks, Average block replication, Corrupt blocks, number of missing blocks, and so on.
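The report the text describes is produced by `hadoop fsck /` on a live cluster. Since that needs a running NameNode, the sketch below only shows how the counters in such a report might be pulled out with awk; the sample report text is illustrative, not real cluster output.

```shell
# Illustrative fsck-style report text (values are made up).
report=' Total blocks (validated):      120
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing blocks:                0'

# Pull the corrupt-block counter out of the report.
corrupt=$(printf '%s\n' "$report" | awk -F: '/Corrupt blocks/ {gsub(/ /, "", $2); print $2}')
echo "corrupt blocks: $corrupt"   # prints corrupt blocks: 0
```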
1. Test the speed of Hadoop writes
Write data to the HDFS file system: 10 files, 10 MB per file, stored under /benchmarks/TestDFSIO/io_data:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 10MB
2. Test the speed of Hadoop reads
Read the same 10 files back:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 10MB
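A quick sanity check on the numbers above: with -nrFiles 10 and -fileSize 10MB, the write benchmark should move about 100 MB in total. A minimal sketch:

```shell
# TestDFSIO parameters taken from the command above.
nrfiles=10
filesize_mb=10

# Total data volume the write benchmark produces.
total_mb=$((nrfiles * filesize_mb))
echo "expected data written: ${total_mb} MB"   # prints expected data written: 100 MB
```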
Preface: Well, it's certainly more comfortable when there's no code to write, but we can't slack off. The files for Hive operations need to be loaded from here. The commands are similar to Linux commands; the command line begins with hadoop fs followed by a -(dash):
hadoop fs -ls /                    list a file or directory
hadoop fs -cat ./hello.txt         view a file (e.g. /opt/old/htt/hello.txt)
hadoop fs -cat '/ci_cuser_20141231141853691/*' > ci_cusere_20141231141853691.csv
echo $?
~/.bash_profile: each user can use this file to set shell information dedicated to their own use; when the user logs in, the file is executed only once! By default, it sets some environment variables and then executes the user's .bashrc file.
hadoop fs -cat '$1$2/*' > $3.csv
mv $3.csv /home/ocdc/coc
String command = "cd " + ciftpinfo.getftppath() + " " + hadooppath + "hadoop fs -cat '/user
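The ~/.bash_profile behaviour described above (sourced once at login, exporting environment variables) can be demonstrated with a throwaway file instead of the real profile; the HADOOP-style variable name and path here are just examples:

```shell
# Create a throwaway profile file (so the real ~/.bash_profile is untouched).
profile=$(mktemp)
cat > "$profile" <<'EOF'
# Environment variables a login shell would pick up once, at login.
export DEMO_HADOOP_HOME=/opt/hadoop
export PATH="$DEMO_HADOOP_HOME/bin:$PATH"
EOF

# Source it the way a login shell sources ~/.bash_profile.
. "$profile"
echo "$DEMO_HADOOP_HOME"   # prints /opt/hadoop
rm -f "$profile"
```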
I put this together by referring to someone's blog post online:
Start Hadoop: go to the HADOOP_HOME directory and run bin/start-all.sh
Stop Hadoop: go to the HADOOP_HOME directory and run bin/stop-all.sh
1. View the contents of a specified directory
hadoop dfs -ls [file directory]
Eg: hadoop dfs -ls /user/wangkai.pt
2. Open an existing file
hadoop dfs -cat [file_path]
Eg: hadoop dfs -cat /user/wangkai.pt/data.txt
3. Store a local file in HDFS
Log in to MySQL: mysql -h172.16.77.15 -uroot -p123   (mysql -h<host address> -u<username> -p<password>)
View character sets:
show variables like '%char%';
To modify the character set: edit /etc/my.cnf and add default-character-set=utf8 under [client].
Set up passwordless sudo login
To give the aboutyun user passwordless sudo permissions:
chmod u+w /etc/sudoers
add the line: aboutyun ALL=(root) NOPASSWD:ALL
chmod u-w /etc/sudoers
Test: sudo ifconfig
Ubuntu, view the service list:
sudo service --status-all
sudo initctl list
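The character-set change above edits /etc/my.cnf; a minimal sketch of just the relevant fragment, assuming utf8 as in the text (a config fragment, not a full my.cnf):

```ini
[client]
default-character-set=utf8
```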
1. Hadoop
View a directory on HDFS: hadoop fs -ls /
Create a directory on HDFS: hadoop fs -mkdir /jiatest
Upload a file to a specified HDFS directory: hadoop fs -put test.txt /jiatest
Upload a jar package to Hadoop and run it: hadoop jar maven_test-1.0-snapshot.jar org.jiahong.test.WordCount /jiatest /jiatest/output
View the result: hadoop fs -cat /jiatest/output/part-r-00000
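The jar above runs the classic WordCount job. As a local sketch of what that job computes (made-up input, no cluster needed), the same map/shuffle/reduce shape in plain shell:

```shell
# Word count over made-up input, mirroring WordCount's phases:
# map (split into words), shuffle (sort), reduce (count per word).
counts=$(printf 'hello world\nhello hadoop\n' |
  tr ' ' '\n' |              # map: one word per line
  sort |                     # shuffle: group identical words
  uniq -c |                  # reduce: count each group
  awk '{print $2 "=" $1}')   # format as word=count
echo "$counts"
```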
hadoop namenode -format    format the distributed file system
start-all.sh               start all Hadoop daemons
stop-all.sh                stop all Hadoop daemons
start-mapred.sh            start the Map/Reduce daemon
stop-mapred.sh             stop the Map/Reduce daemon
start-dfs.sh               start the HDFS daemon
stop-dfs.sh                stop the HDFS daemon
start-balancer.sh          HDFS data block load balancing
fs in the following commands can also be written as dfs.
1.
1) vim /etc/udev/rules.d/70-persistent-net.rules
vi /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
UUID=57d4c2c9-9e9c-48f8-a654-8e5bdbadafb8
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=xx:0c:xx:xx:e6:ec
IPADDR=172.16.53.100
PREFIX=
GATEWAY=172.16.53.2
LAST_CONNECT=1415175123
DNS1=172.16.53.2
The virtual machine's network card uses the virtual network adapter. Save and exit with :x or :wq.
2) vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=
sudo addgroup hadoop                 # add a hadoop group
sudo usermod -a -G hadoop larry      # add the current user to the hadoop group
sudo gedit /etc/sudoers              # add the hadoop group to sudoers
Add the line hadoop ALL=(ALL) ALL after root ALL=(ALL) ALL
Modify the permissions for the Hadoop directory
From: http://www.2cto.com/database/201303/198460.html
Hadoop HDFS common commands:
hadoop fs                  view all commands supported by Hadoop HDFS
hadoop fs -ls              list directory and file information
hadoop fs -lsr             recursively list directories, subdirectories, and file information
hadoop fs -put test.txt /user/sunlightcs    copy test.txt from the local file system to the HDFS directory /user/sunlightcs
, so HDFS has a high degree of fault tolerance.
3. High data throughput: HDFS uses a simple "write once, read many" data-consistency model. In HDFS, once a file has been created, written, and closed, it generally does not need to be modified; such a simple consistency model improves throughput.
4. Streaming data access: HDFS handles data at a large scale. Applications need to access large amounts of data at a time, and they are generally batch jobs rather than interactive use.
Create a directory:
hadoop dfs -mkdir /home
Upload a file or directory to HDFS:
hadoop dfs -put hello /
hadoop dfs -put hellodir/ /
View a directory:
hadoop dfs -ls /
Create an empty file:
hadoop dfs -touchz /361way
Delete a file:
hadoop dfs -rm
Summary of common Hadoop and Ceph commands
A summary of the commonly used Hadoop and Ceph commands; very practical.
Hadoop
Check whether the NM is alive: bin/yarn node -list
Delete a directory: hadoop dfs -rm -r /directory
hadoop classpath lets you view the paths of all classes.
To leave safe mode: hadoop dfsadmin -safemode leave
hadoop fs -mkdir /tmp/input                      create a new folder on HDFS
hadoop fs -put input1.txt /tmp/input             upload the local file input1.txt to the /tmp/input directory on HDFS
hadoop fs -get /tmp/input/input1.txt input1.txt  pull an HDFS file to the local machine
hadoop fs -ls /tmp/output                        list an HDFS directory
hadoop fs -cat /tmp/output/output1.txt           view a file on HDFS
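The -put / -get pair above is a round trip between the local file system and HDFS. Since that needs a cluster, this sketch mirrors only the shape of the round trip locally, using a temporary directory to stand in for the HDFS namespace (all paths are made up):

```shell
# A local stand-in for the HDFS namespace.
work=$(mktemp -d)
mkdir -p "$work/hdfs/tmp/input"

# Local source file.
echo "sample data" > "$work/input1.txt"

# Analogue of: hadoop fs -put input1.txt /tmp/input
cp "$work/input1.txt" "$work/hdfs/tmp/input/"

# Analogue of: hadoop fs -get /tmp/input/input1.txt input1.txt
cp "$work/hdfs/tmp/input/input1.txt" "$work/back.txt"

cat "$work/back.txt"   # prints sample data
```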