Fsck commands in Hadoop
The fsck command in Hadoop checks files in HDFS for block corruption or data loss and generates an overall health report of the HDFS file system. The report includes: Total blocks (total number of blocks), Average block replication (average number of copies), corrupt blocks, number of missing blocks, and so on.
---
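For example, a quick health check from the shell (a sketch using standard fsck options; the paths are illustrative):

hdfs fsck / -files -blocks -locations      # per-file and per-block detail for the whole namespace
hdfs fsck /user -list-corruptfileblocks    # list only files with missing or corrupt blocks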
1. Test the speed of Hadoop writes. Write data to the HDFS file system: 10 files, 10 MB per file, stored under /benchmarks/TestDFSIO/io_data:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 10MB
2. Test the speed of Hadoop reads. Read the same 10 files back from HDFS.
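The matching read and cleanup runs use the same test jar (a sketch; -clean removes the /benchmarks/TestDFSIO data when you are done):

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 10MB
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -clean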
Preface: Well, it is a little more comfortable not having to write code, but we can't slack off: the files that Hive operates on need to be loaded from HDFS. The commands are similar to Linux commands; every command line begins with hadoop fs followed by a dash and the operation:
ls: hadoop fs -ls /  lists a file or directory
cat: hadoop fs -cat ./hello.txt (or an absolute path such as /opt/old/htt/hello.txt)  views a file; the output can be dumped to a directory or ...
Log in with: mysql -h172.16.77.15 -uroot -p123 (mysql -h host-address -u user-name -p password).
View character sets: show variables like '%char%';
To modify the character set: vi /etc/my.cnf and add default-character-set=utf8 under [client].
Passwordless sudo: to give the aboutyun user sudo permission without a password: chmod u+w /etc/sudoers, add the line aboutyun ALL=(root) NOPASSWD:ALL, then chmod u-w /etc/sudoers. Test with: sudo ifconfig.
Ubuntu, view the service list: sudo service --status-all or sudo initctl list.
To view the file s...
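Put together, the two edits look like this (a sketch; the aboutyun user and utf8 charset come from the snippet above):

# /etc/my.cnf
[client]
default-character-set=utf8

# /etc/sudoers: unlock, append, re-lock
chmod u+w /etc/sudoers
echo 'aboutyun ALL=(root) NOPASSWD:ALL' >> /etc/sudoers
chmod u-w /etc/sudoers
sudo ifconfig   # should now run without prompting for a password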
... '/ci_cuser_20141231141853691/*' > ci_cusere_20141231141853691.csv; echo $? checks the exit status.
~/.bash_profile: each user can use this file to set shell information dedicated to their own login; when the user logs in, the file is executed only once! By default it sets some environment variables and then executes the user's .bashrc file.
hadoop fs -cat "$1$2/*" > "$3".csv
mv "$3".csv /home/ocdc/coc
String command = "cd " + ciftpinfo.getFtpPath() + " " + hadoopPath + " hadoop fs -cat '/user...
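Read as a script, the fragment concatenates an HDFS directory into one local CSV and moves it into place. A minimal sketch with hypothetical argument names ($1 = base path, $2 = table directory, $3 = output name are assumptions):

#!/bin/bash
# export_hdfs_csv.sh <base_path> <table_dir> <out_name>   (names are assumptions)
hadoop fs -cat "$1$2/*" > "$3.csv"    # stream every part file into one local CSV
echo $?                               # print the exit status of the hadoop command
mv "$3.csv" /home/ocdc/coc            # move the CSV into the working directory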
1. hadoop
View a directory on HDFS: hadoop fs -ls /
Create a directory on HDFS: hadoop fs -mkdir /jiatest
Upload a file to a specified HDFS directory: hadoop fs -put test.txt /jiatest
Upload the jar package to Hadoop and run it: hadoop jar maven_test-1.0-SNAPSHOT.jar org.jiahong.test.WordCount /jiatest /jiatest/output
View the result: hadoop fs -cat /jiatest/output/part-r-00000 (see the note on re-running after this list)
2. linux
U...
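A note on the hadoop steps above: MapReduce refuses to start if the output directory already exists, so clear it before re-running the job (a sketch; -rm -r is the Hadoop 2.x form):

hadoop fs -rm -r /jiatest/output   # remove the previous output before re-running the job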
hadoop namenode -format   formats a new distributed file system
start-all.sh      starts all the Hadoop daemons
stop-all.sh       stops all the Hadoop daemons
start-mapred.sh   starts the Map/Reduce daemons
stop-mapred.sh    stops the Map/Reduce daemons
start-dfs.sh      starts the HDFS daemons
stop-dfs.sh       stops the HDFS daemons
start-balancer.sh runs HDFS data block load balancing
fs in the commands that follow can also be written as dfs
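On a cluster of this vintage (Hadoop 1.x-style scripts), a typical cold-start sequence is, as a sketch:

hadoop namenode -format   # only once, on a brand-new cluster; this erases existing HDFS metadata
start-dfs.sh              # bring up the NameNode and DataNodes first
start-mapred.sh           # then bring up the JobTracker and TaskTrackers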
1.
1) vim /etc/udev/rules.d/70-persistent-net.rules
vi /etc/sysconfig/network-scripts/ifcfg-eth0:
TYPE=Ethernet
UUID=57d4c2c9-9e9c-48f8-a654-8e5bdbadafb8
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=xx:0c:...:e6:ec
IPADDR=172.16.53.100
PREFIX=...
GATEWAY=172.16.53.2
LAST_CONNECT=1415175123
DNS1=172.16.53.2
The virtual machine's network card uses the virtual network adapter. Save and exit with :x or :wq.
2) vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAM...
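After saving the files, restart networking so the static address takes effect; a sketch for the CentOS-style layout these paths imply:

service network restart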
From: http://www.2cto.com/database/201303/198460.html
Hadoop HDFS common commands:
hadoop fs        views all commands supported by Hadoop HDFS
hadoop fs -ls    lists directory and file information
hadoop fs -lsr   recursively lists directories, subdirectories, and file information
hadoop fs -put test.txt /user/sunlightcs   copies test.txt from the local file system to the /user/sunlightcs directory of the HDFS file system
..., so HDFS has a high degree of fault tolerance.
3. High data throughput: HDFS uses a "write once, read many" simple data consistency model. In HDFS, once a file has been created, written, and closed, it generally does not need to be modified; this simple consistency model improves throughput.
4. Streaming data access: HDFS handles data processing at a large scale; applications need to access large amounts of data at a time, and they are generally batch jobs rather than interactive ones.
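A quick shell illustration of the write-once model (the path is illustrative; -put -f is the Hadoop 2.x overwrite flag):

hadoop fs -put data.txt /tmp/data.txt      # create, write, close
hadoop fs -put data.txt /tmp/data.txt      # fails: the file already exists
hadoop fs -put -f data.txt /tmp/data.txt   # an explicit overwrite replaces the file wholesale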
Modify the permissions, owner, or group of a file or directory:
chmod: chmod [-R] mode file/dir (modify the permissions of a file or directory)
chown: chown [options] user[.group] file/dir (modify the owner of a file)
chgrp: chgrp [-R] group dir/file (modify the owning group of a file)
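For instance (the user, group, and path here are made up):

chown hadoop /data/logs            # make hadoop the owner
chown -R hadoop.hadoop /data/logs  # owner and group at once, recursively, using the user.group form above
chgrp -R hadoop /data/logs         # change only the owning group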
IV. Systems and Networks
Command          Meaning
passwd xxx       change a password
df -ah           view disk space
ps -ef | grep    view processes
kill -9          kill a process
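Concretely (the user name and PID are made up):

passwd tom            # change tom's password; prompts for the new password twice
df -ah                # show all file systems with human-readable sizes
ps -ef | grep java    # list processes whose command line mentions java
kill -9 12345         # force-kill process 12345 with SIGKILL (no chance to clean up)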
Example 1: Export the database mydb to the e:\MySQL\mydb.sql file.
c:\> mysqldump -h localhost -u root -p mydb > e:\mysql\mydb.sql
Then enter the password and wait for the export to finish; check the target file to confirm it succeeded.
Example 2: Export the table mytable from database mydb to the e:\MySQL\mytable.sql file.
c:\> mysqldump -h localhost -u root -p mydb mytable > e:\mysql\mytable.sql
Example 3: Export the structure of the database mydb to the e:\MySQL\mydb_stru.sql file.
c:\> mysqldump -h localhost -u root -p mydb --add-drop-table > e:\mysql\mydb_stru.sql
Note: -h localhost can be omitted; it is generally needed on a virtual host.
3) Export the data structure only. Format: mysqldump -u [database user name] -p -d [database name] > [target file]
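Restoring works in the opposite direction; a sketch that reloads example 1's dump into the same database:

c:\> mysql -h localhost -u root -p mydb < e:\mysql\mydb.sql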