The source code for the C/C++ library is located in:
hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs
Here we provide a Makefile that compiles these source files directly into a static library named libhdfs.a. The Makefile content is:
CC       = gcc
CXX      = g++
DEFINES  = -DG_ARCH_X86_64
CFLAGS   += -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT $(DEFINES)
CXXFLAGS += -pipe -O3 -D_REENTRANT $(DEFINES) -rdynamic
AR       = ar cqs
LFLAGS   = -rdynamic
OBJECTS  = exception.o expect.o hdfs.o jni_helper.o native_mini_dfs.o
TARGET   = libhdfs.a

# commands, don't change
CHK_DIR_EXISTS = test -d
DEL_FILE       = rm -f
first: all
####### Implicit rules
.SUFFIXES: .o .c .cpp .cc .cxx .C .cu
.cpp.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.cc.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.cxx.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.C.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.c.o:
	$(CC) -c $(CFLAGS) $(INCPATH) -o "$@" "$<"
####### Build Rules
all: $(TARGET)

$(TARGET): $(OBJECTS)
	$(AR) $(TARGET) $(OBJECTS)

clean:
	-$(DEL_FILE) $(OBJECTS) $(TARGET)
After saving the Makefile, run make directly. The compile output is as follows:
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64 -o "exception.o" "exception.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64 -o "expect.o" "expect.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64 -o "hdfs.o" "hdfs.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64 -o "jni_helper.o" "jni_helper.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64 -o "native_mini_dfs.o" "native_mini_dfs.c"
ar cqs libhdfs.a exception.o expect.o hdfs.o jni_helper.o native_mini_dfs.o
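For reference, the static library just built implements the C API declared in hdfs.h in the same directory. A few of the core declarations look like the following (abridged from the Hadoop 2.x header; see hdfs.h itself for the complete, documented list):

/* Abridged declarations from libhdfs/hdfs.h (Hadoop 2.x). */
hdfsFS   hdfsConnect(const char *nn, tPort port);       /* connect to a NameNode            */
int      hdfsDisconnect(hdfsFS fs);                     /* release the filesystem handle    */
hdfsFile hdfsOpenFile(hdfsFS fs, const char *path, int flags,
                      int bufferSize, short replication, tSize blocksize);
int      hdfsCloseFile(hdfsFS fs, hdfsFile file);
tSize    hdfsRead(hdfsFS fs, hdfsFile file, void *buffer, tSize length);
tSize    hdfsWrite(hdfsFS fs, hdfsFile file, const void *buffer, tSize length);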
Next, test whether the library can actually be used. Enter the following directory:
hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test
which holds the test source code. To compile all of the test programs in this folder, here is a simple Makefile:
LIBS    = -L$(JAVA_HOME)/jre/lib/amd64/server/ -ljvm -L../ -lhdfs
INCPATH = -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux -I. -I..

all:
	gcc -o hdfs_ops      test_libhdfs_ops.c      $(INCPATH) $(LIBS)
	gcc -o hdfs_read     test_libhdfs_read.c     $(INCPATH) $(LIBS)
	gcc -o hdfs_write    test_libhdfs_write.c    $(INCPATH) $(LIBS)
	gcc -o hdfs_zerocopy test_libhdfs_zerocopy.c $(INCPATH) $(LIBS)
Run make directly; the compile output is as follows:
gcc -o hdfs_ops test_libhdfs_ops.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm -L../ -lhdfs
gcc -o hdfs_read test_libhdfs_read.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm -L../ -lhdfs
gcc -o hdfs_write test_libhdfs_write.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm -L../ -lhdfs
gcc -o hdfs_zerocopy test_libhdfs_zerocopy.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm -L../ -lhdfs
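The hdfs_write binary built above comes from test_libhdfs_write.c. As a rough illustration only (not the article's code), the write-side call sequence such a program exercises looks roughly like the sketch below; the target path /data/hello.txt is a made-up example, and it assumes CLASSPATH already contains the Hadoop jars so that the JVM embedded by libhdfs can locate them.

/* write_sketch.c - a minimal, hypothetical sketch of writing through libhdfs;
 * not taken from test_libhdfs_write.c. Assumes CLASSPATH holds the Hadoop jars. */
#include "hdfs.h"
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    hdfsFS fs = hdfsConnect("default", 0);            /* use fs.defaultFS from the cluster config */
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    const char *path = "/data/hello.txt";             /* hypothetical target path */
    hdfsFile out = hdfsOpenFile(fs, path, O_WRONLY, 0, 0, 0);
    if (!out) { fprintf(stderr, "hdfsOpenFile failed\n"); hdfsDisconnect(fs); return 1; }

    const char *msg = "hello, libhdfs\n";
    hdfsWrite(fs, out, msg, (tSize)strlen(msg));       /* write a small buffer */
    hdfsFlush(fs, out);                                /* flush before closing */
    hdfsCloseFile(fs, out);
    hdfsDisconnect(fs);
    return 0;
}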
We generate a file containing the ten numbers 1 through 10 and load it into the HDFS file system:
seq 1 10 > tmpfile
hadoop fs -mkdir /data
hadoop fs -put tmpfile /data
hadoop fs -cat /data/tmpfile
1
2
3
4
5
6
7
8
9
10
OK. Now run the generated hdfs_read program to test the 64-bit HDFS C/C++ interface:
./hdfs_read /data/tmpfile 21 32
The run output is as follows:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
1
2
3
4
5
6
7
8
9
10
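For comparison, the logic inside hdfs_read boils down to the usual libhdfs read loop. The following is a simplified sketch of that pattern, not the original test_libhdfs_read.c (judging from the invocation above, the original also takes the file size, 21 here, as an extra argument); it assumes CLASSPATH contains the Hadoop jars and that LD_LIBRARY_PATH can locate libjvm.so, since libhdfs starts a JVM internally.

/* read_sketch.c - a simplified, hypothetical version of the hdfs_read loop. */
#include "hdfs.h"
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <hdfs-path> <buffer-size>\n", argv[0]);
        return 1;
    }
    int bufSize = atoi(argv[2]);

    hdfsFS fs = hdfsConnect("default", 0);                     /* connect via the default config */
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    hdfsFile in = hdfsOpenFile(fs, argv[1], O_RDONLY, bufSize, 0, 0);
    if (!in) { fprintf(stderr, "cannot open %s\n", argv[1]); hdfsDisconnect(fs); return 1; }

    char *buf = malloc((size_t)bufSize);
    tSize n;
    while ((n = hdfsRead(fs, in, buf, (tSize)bufSize)) > 0) {  /* read until EOF */
        fwrite(buf, 1, (size_t)n, stdout);
    }

    free(buf);
    hdfsCloseFile(fs, in);
    hdfsDisconnect(fs);
    return 0;
}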