HDFS Java API Access Example Code

Source: Internet
Author: User


This article covers accessing HDFS through its Java API. The complete code follows, with detailed comments.

Things have been moving a little fast lately; I will encapsulate this properly when I have time.
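Until then, here is a minimal sketch of what such encapsulation could look like. The HdfsClient class name and its method selection are illustrative assumptions, not part of the original code. It relies on the fact that FileSystem implements java.io.Closeable, so try-with-resources can replace the manual fs.close() calls repeated in the methods further below.

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical wrapper class; a sketch, not the article's original code.
public class HdfsClient {
    private final Configuration conf = new Configuration();
    private final URI uri;

    public HdfsClient(String namenodeUri) throws URISyntaxException {
        this.uri = new URI(namenodeUri);
    }

    // Create a directory; the FileSystem handle is closed automatically.
    public boolean mkdir(String dir) throws IOException {
        try (FileSystem fs = FileSystem.get(uri, conf)) {
            return fs.mkdirs(new Path(dir));
        }
    }

    // Delete a file or directory recursively.
    public boolean delete(String target) throws IOException {
        try (FileSystem fs = FileSystem.get(uri, conf)) {
            return fs.delete(new Path(target), true);
        }
    }
}

One caveat with this pattern: FileSystem.get() may return a cached instance shared across callers, so closing the handle after every call suits short examples better than long-running services.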

Imports required by the code:
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
The implementation methods:
/**
 * Get the HDFS file system.
 * @return the FileSystem instance
 * @throws IOException
 * @throws URISyntaxException
 */
public static FileSystem getFileSystem() throws IOException, URISyntaxException {
    // Read the configuration files
    Configuration conf = new Configuration();
    // To return the default file system:
    // if the code runs inside a Hadoop cluster, the default file system
    // can be obtained directly:
    // FileSystem fs = FileSystem.get(conf);
    // The address of the target file system
    URI uri = new URI("hdfs://hy:9000");
    // Return the specified file system.
    // When testing locally, use this method to obtain the file system.
    FileSystem fs = FileSystem.get(uri, conf);
    return fs;
}

/**
 * Create a directory.
 * @throws Exception
 */
public static void mkdir() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // Create the directory
    fs.mkdirs(new Path("hdfs://hy:9000/hy/weibo"));
    // Release the resource
    fs.close();
}

/**
 * Delete a file or directory.
 * @throws Exception
 */
public static void rmdir() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // Delete the file or directory (true = recursive)
    fs.delete(new Path("hdfs://hy:9000/hy/weibo"), true);
    // Release the resource
    fs.close();
}

/**
 * List all files in a directory.
 * @throws Exception
 */
public static void listAllFile() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // List the directory contents
    FileStatus[] status = fs.listStatus(new Path("hdfs://hy:9000/hy/"));
    // Obtain the Path of every file in the directory
    Path[] listedPaths = FileUtil.stat2Paths(status);
    // Print each path
    for (Path path : listedPaths) {
        System.out.println(path);
    }
    // Release the resource
    fs.close();
}

/**
 * Upload a file to HDFS.
 * @throws Exception
 */
public static void copyToHDFS() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // The source file path in Linux
    Path srcPath = new Path("/home/hadoop/temp.jar");
    // When testing on Windows, change it to a Windows path instead, e.g.:
    // Path srcPath = new Path("E://temp.jar");
    // Destination path
    Path dstPath = new Path("hdfs://hy:9000/hy/weibo");
    // Upload the file
    fs.copyFromLocalFile(srcPath, dstPath);
    // Release the resource
    fs.close();
}

/**
 * Download a file from HDFS.
 * @throws Exception
 */
public static void getFile() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // Source file path on HDFS
    Path srcPath = new Path("hdfs://hy:9000/hy/weibo/temp.jar");
    // Destination path; by default a Linux path. When testing on Windows,
    // change it to a Windows path, such as C://User/andy/Desktop/
    Path dstPath = new Path("D://");
    // Download the file from HDFS
    fs.copyToLocalFile(srcPath, dstPath);
    // Release the resource
    fs.close();
}

/**
 * Get information about the nodes in the HDFS cluster.
 * @throws Exception
 */
public static void getHDFSNodes() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // Cast to the distributed file system
    DistributedFileSystem hdfs = (DistributedFileSystem) fs;
    // Retrieve all data nodes
    DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
    // Iterate over the nodes and print each host name
    for (int i = 0; i < dataNodeStats.length; i++) {
        System.out.println("DataNode_" + i + "_Name: " + dataNodeStats[i].getHostName());
    }
    // Release the resource
    fs.close();
}

/**
 * Find the location of a file in the HDFS cluster.
 * @throws Exception
 */
public static void getFileLocal() throws Exception {
    // Obtain the file system
    FileSystem fs = getFileSystem();
    // File path on HDFS
    Path path = new Path("hdfs://hy:9000/hy/weibo/temp.jar");
    // Obtain the file's status
    FileStatus fileStatus = fs.getFileStatus(path);
    // Obtain the list of block locations for the file
    BlockLocation[] blockLocations = fs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
    // Iterate over the blocks and print each block's location
    for (int i = 0; i < blockLocations.length; i++) {
        String[] hosts = blockLocations[i].getHosts();
        System.out.println("block_" + i + "_location: " + hosts[0]);
    }
    // Release the resource
    fs.close();
}
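A short usage sketch, assuming the methods above are collected in a single class (the class name HdfsApiDemo and the order of calls are illustrative assumptions) and that a NameNode is reachable at hy:9000:

public class HdfsApiDemo {
    // The static methods defined above are assumed to live in this same class.
    public static void main(String[] args) throws Exception {
        mkdir();        // create hdfs://hy:9000/hy/weibo
        copyToHDFS();   // upload the local jar into that directory
        listAllFile();  // print the contents of hdfs://hy:9000/hy/
        getFile();      // download the jar back to the local disk
        getFileLocal(); // print the block locations of the uploaded jar
        getHDFSNodes(); // print the DataNode host names
        rmdir();        // remove hdfs://hy:9000/hy/weibo again
    }
}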
Summary

That is all of the HDFS Java API example code in this article; I hope it helps. If you are interested, you can continue with other related topics on this site. If anything is missing, please leave a message. Thank you for your support!
