HDFS Java Access Interface

I. Building the Hadoop Development Environment

The code we write at work runs on servers, and HDFS operation code is no exception. During the development phase, we use Eclipse on Windows as the development environment to access HDFS running in a virtual machine. That is, Java code in local Eclipse accesses HDFS on remote Linux.
To access HDFS on the remote server from the host using Java code, you need to ensure the following:
(1) The host and the server can reach each other over the network.
(2) The firewalls on both the host and the server are turned off. Many ports need to be open, and turning the firewalls off avoids extensive firewall configuration.
(3) The JDK version on the host matches the one used on the server. If the server uses JDK 6 and the host uses JDK 7, the code fails with an unsupported-version error.
(4) The host's login username is the same as the server's. For example, since we use root on Linux, we also use a root user on Windows; otherwise a permission exception is reported.
In addition, override the checkReturnValue method of the org.apache.hadoop.fs.FileUtil class in the Hadoop project in Eclipse, as shown in Figure 1.1. The purpose is to avoid permission errors.

Figure 1.1

If the reader runs into permission problems during development, check the environment against the points in this section.
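If renaming the Windows login user is not practical, one workaround (not from the original article; it applies to Hadoop's simple, non-Kerberos authentication mode in Hadoop 2.x and later) is to tell the client which remote user to act as via the HADOOP_USER_NAME setting, before the first FileSystem.get call. A minimal sketch:

```java
public class HdfsUserSetup {
    public static void main(String[] args) {
        // Assumption: under simple (non-Kerberos) authentication, Hadoop
        // clients honor the HADOOP_USER_NAME environment variable or system
        // property and issue requests as that user.
        System.setProperty("HADOOP_USER_NAME", "root");
        // Any FileSystem.get(...) call made after this point would then run
        // as "root" rather than as the local Windows login name.
        System.out.println(System.getProperty("HADOOP_USER_NAME"));
    }
}
```

This avoids both the renamed Windows user and the FileUtil override, at the cost of relying on simple authentication being enabled on the cluster.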

II. Using the FileSystem API to Read and Write Data

When operating on HDFS in Hadoop there is one very important API, org.apache.hadoop.fs.FileSystem. This class is the direct entry point for user code that operates on HDFS, and it contains methods for all the common HDFS operations, much as the Connection class is the direct entry point for operating a database in JDBC.

So how do we obtain a FileSystem object?

------------------------------------------------------------------------------------------------------

final String uri = "hdfs://10.1.14.24:9000/";
final Configuration conf = new Configuration();
final FileSystem fs = FileSystem.get(new URI(uri), conf);

------------------------------------------------------------------------------------------------------

Note that we call FileSystem's static method get, passing it two arguments. The first is the HDFS address: the protocol is hdfs, the IP is 10.1.14.24, and the port is 9000. The complete information for this address is specified in the configuration file core-site.xml, and readers should use the settings from their own environment's configuration file. The second argument is a Configuration object.

1. Create a folder

Use the HDFS shell to look at the files in the root directory, as shown in Figure 2.1.

Figure 2.1
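The listing in Figure 2.1 can be produced with the standard HDFS shell; the exact output depends on your cluster's contents:

```shell
# List the contents of the HDFS root directory (output varies by cluster).
hadoop fs -ls /
```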

We create a folder in the HDFS root directory with the following code:

------------------------------------------------------------------------------------------------------


final String pathString = "/d1";
final boolean exists = fs.exists(new Path(pathString));
if (!exists) {
    final boolean result = fs.mkdirs(new Path(pathString));
    System.out.println(result);
}

------------------------------------------------------------------------------------------------------


The above code is placed in the main method.

The first line defines the full path of the folder to create, /d1. The call to exists then determines whether the folder already exists, and the create operation is performed only if it does not. Finally, mkdirs creates the folder; its return value is a boolean, where true indicates the creation succeeded and false indicates it failed.

Now check whether it succeeded. As shown in Figures 3.2 and 3.3, the creation succeeded.


Figure 3.2

Figure 3.3

2. Write File


We can write a file to HDFS with the following code:

-----------------------------------------------------------------------------------------------------


final String pathString = "/d1/f1";
final FSDataOutputStream fsDataOutputStream = fs.create(new Path(pathString));
IOUtils.copyBytes(new ByteArrayInputStream("My name is Sunddenly".getBytes()),
        fsDataOutputStream, conf, true);

------------------------------------------------------------------------------------------------------


The first line of code indicates that the file to create is f1 under the d1 folder created above. The second line calls the create method to build an output stream into HDFS. The third line sends a string to that output stream by calling copyBytes, a static method of the Hadoop utility class IOUtils.

The static method has four parameters: the first is the input stream, the second is the output stream, the third is the Configuration object, and the fourth is a boolean which, if true, indicates that the streams are closed after the data transfer completes.

Now check whether the creation succeeded, as shown in Figure 3.4.

Figure 3.4

3. Read File

Now we read back the file /d1/f1 just written to HDFS. The code is as follows:

------------------------------------------------------------------------------------------------------
final String pathString = "/d1/f1";
final FSDataInputStream fsDataInputStream = fs.open(new Path(pathString)); // read in
IOUtils.copyBytes(fsDataInputStream, System.out, conf, true);

-------------------------------------------------------------------------------------------------------

The first line specifies the path of the file to read. The second line calls the open method to open the specified file; the return value is an input stream from that file. The third line again calls IOUtils.copyBytes, this time with the console as the output destination.

See Figure 3.5


Figure 3.5

4. View directory listings and file details

We can display all the files and directories in the root directory with the following code:

--------------------------------------------------------------------------------------------------------
final String pathString = "/";
final FileStatus[] listStatus = fs.listStatus(new Path(pathString));
for (FileStatus fileStatus : listStatus) {
    final String type = fileStatus.isDir() ? "directory" : "file";
    final short replication = fileStatus.getReplication();
    final String permission = fileStatus.getPermission().toString();
    final long len = fileStatus.getLen();
    final Path path = fileStatus.getPath();
    System.out.println(type + "\t" + permission + "\t" + replication + "\t" + len + "\t" + path);
}

-----------------------------------------------------------------------------------------------------------
Calling the listStatus method gets all the files and folders under the specified path, each represented by a FileStatus object, and we use a for loop to print each one. A FileStatus object carries the details of a file, including its type, replication factor, permissions, length, path, and so on. The result is shown in Figure 3.6.


Figure 3.6

5. Delete a file or directory

We can delete a file or a directory with the following code:

-----------------------------------------------------------------------------------------------------
final String pathString = "/d1/f1";
fs.delete(new Path("/d1"), true);
fs.deleteOnExit(new Path(pathString));

-----------------------------------------------------------------------------------------------------
The second line of code recursively deletes the directory /d1 and everything under it, while the third line marks the file /d1/f1 for deletion when the FileSystem is closed (deleteOnExit). Beyond the methods shown above, FileSystem offers many more; readers can consult the API documentation themselves.
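As one further illustration (not from the original article; it assumes the same cluster address used earlier and a pre-existing file /d1/f1), other FileSystem methods such as rename and getFileStatus follow the same pattern:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRenameSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: the cluster from the article is reachable at this URI.
        final FileSystem fs = FileSystem.get(
                new URI("hdfs://10.1.14.24:9000/"), new Configuration());
        // rename returns true on success, false otherwise.
        final boolean renamed = fs.rename(new Path("/d1/f1"), new Path("/d1/f2"));
        // getFileStatus returns the FileStatus of a single path.
        final FileStatus status = fs.getFileStatus(new Path("/d1/f2"));
        System.out.println(renamed + "\t" + status.getLen());
        fs.close();
    }
}
```

This is a sketch only; running it requires the Hadoop client libraries on the classpath and a reachable cluster.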

Original link: http://www.cnblogs.com/sunddenly/p/3983090.html
