Accessing HDFS through the C language API in Hadoop


Hadoop provides a C language API for accessing HDFS, which is briefly described below.

Environment: Ubuntu 14.04, Hadoop 1.0.1, JDK 1.7.0_51

The functions for accessing HDFS are primarily declared in the hdfs.h header file, which is located in the hadoop-1.0.1/src/c++/libhdfs/ folder; the corresponding library file, libhdfs.so, is located in the hadoop-1.0.1/c++/linux-amd64-64/lib/ directory. In addition, accessing HDFS depends on the JDK's related API: the header file directories are jdk1.7.0_51/include/ and jdk1.7.0_51/include/linux/, and the library file, libjvm.so, is in the jdk1.7.0_51/jre/lib/amd64/server/ directory. These libraries and their include directories are given to the compiler when compiling and linking. The following is a simple source program, main.c:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "hdfs.h"

int main(int argc, char **argv)
{
    /*
     * Connect to HDFS.
     */
    hdfsFS fs = hdfsConnect("127.0.0.1", 9000);

    if (!fs)
    {
        fprintf(stderr, "Failed to connect to hdfs.\n");
        exit(-1);
    }

    /*
     * Create and open a file in HDFS.
     */
    const char *writePath = "/user/root/output/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);

    if (!writeFile)
    {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }

    /*
     * Write data to the file.
     */
    const char *buffer = "Hello, world!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void *)buffer, strlen(buffer) + 1);

    /*
     * Flush buffer.
     */
    if (hdfsFlush(fs, writeFile))
    {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }

    /*
     * Close the file.
     */
    hdfsCloseFile(fs, writeFile);

    unsigned bufferSize = 1024;
    const char *readPath = "/user/root/output/testfile.txt";
    hdfsFile readFile = hdfsOpenFile(fs, readPath, O_RDONLY, bufferSize, 0, 0);

    if (!readFile) {
        fprintf(stderr, "couldn't open file %s for reading\n", readPath);
        exit(-2);
    }

    /* Buffer the data will be read into. */
    char *rbuffer = (char *)malloc(sizeof(char) * (bufferSize + 1));

    if (rbuffer == NULL) {
        return -2;
    }

    /* Read from the file. */
    tSize curSize = bufferSize;

    for (; curSize == bufferSize;) {
        curSize = hdfsRead(fs, readFile, (void *)rbuffer, curSize);
        rbuffer[curSize] = '\0';
        fprintf(stdout, "read '%s' from file!\n", rbuffer);
    }

    free(rbuffer);
    hdfsCloseFile(fs, readFile);

    /*
     * Disconnect from HDFS.
     */
    hdfsDisconnect(fs);

    return 0;
}

The program is fairly simple and the important places are commented, so it is not explained line by line. What it does is create a new file called testfile.txt in the /user/root/output/ directory of HDFS, write "Hello, world!" to it, then read "Hello, world!" back from the file and print it. If the /user/root/output/ directory does not exist in HDFS, you will need to create it first (e.g. hadoop fs -mkdir /user/root/output) or change the path to an existing directory.

The following is the compile-and-link command on my system:

g++ main.c -I/root/hadoop-1.0.1/src/c++/libhdfs/ -I/usr/java/jdk1.7.0_51/include/ -I/usr/java/jdk1.7.0_51/include/linux/ -L/root/hadoop-1.0.1/c++/linux-amd64-64/lib/ -lhdfs -L/usr/java/jdk1.7.0_51/jre/lib/amd64/server/ -ljvm -o hdfs-test

A note on the g++ options: -I adds a header search directory, -L adds a library search directory, and -lhdfs and -ljvm link against libhdfs.so and libjvm.so. These options only take effect at compile and link time; for the program to start, the dynamic linker must also be able to find libhdfs.so.0 and libjvm.so at run time. Add their directories to the /etc/ld.so.conf file and then execute the ldconfig command, which registers the libraries with the system; otherwise the runtime loader will not be able to find them.
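As a concrete sketch of that last step, the two library directories from the compile command can be appended to /etc/ld.so.conf (the paths below match this article's setup and are assumptions; adjust them to your own installation):

```
# lines appended to /etc/ld.so.conf
/root/hadoop-1.0.1/c++/linux-amd64-64/lib
/usr/java/jdk1.7.0_51/jre/lib/amd64/server
```

After saving the file, run ldconfig as root to rebuild the loader cache; ./hdfs-test should then start without "cannot open shared object file" errors.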
