"Big Data series" uses APIs to modify the number of replicas and block sizes of Hadoop

Source: Internet
Author: User
Tags: deprecated, hdfs, dfs

package com.slp.hdfs;

import org.apache.commons.io.output.ByteArrayOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.junit.Test;

import java.io.IOException;

/**
 * @author sanglp
 * @create 2017-12-08 11:26
 * @desc HDFS test
 */
public class TestHdfs {

    /**
     * Normal output:
     *   i am a girl
     *   i want to be a super man
     *   but i can not still now
     *
     * If core-site.xml on the classpath refers to the host s201 and there is no
     * local host mapping, an unknown-host error is reported. If the file to read
     * does not exist, a "file does not exist" error is reported.
     */
    @Test
    public void testSave() {
        /*
         * How the resources are loaded (static initializer of Configuration):
         *
         * static {
         *     deprecationContext = new AtomicReference(
         *             new Configuration.DeprecationContext((Configuration.DeprecationContext) null, defaultDeprecations));
         *     ClassLoader cl = Thread.currentThread().getContextClassLoader();
         *     if (cl == null) {
         *         cl = Configuration.class.getClassLoader();
         *     }
         *     if (cl.getResource("hadoop-site.xml") != null) {
         *         LOG.warn("DEPRECATED: hadoop-site.xml found in the classpath. "
         *                 + "Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, "
         *                 + "mapred-site.xml and hdfs-site.xml to override properties of "
         *                 + "core-default.xml, mapred-default.xml and hdfs-default.xml respectively");
         *     }
         *     addDefaultResource("core-default.xml");
         *     addDefaultResource("core-site.xml");
         * }
         */
        Configuration configuration = new Configuration(); // loads the config files found on the classpath
        try {
            FileSystem fs = FileSystem.get(configuration);
            // If s201 cannot be resolved locally:
            //   java.lang.IllegalArgumentException: java.net.UnknownHostException: s201
            // If the file does not exist:
            //   java.io.FileNotFoundException: File does not exist: /user/sanglp/hadoop/hello.txt
            Path path = new Path("hdfs://192.168.181.201/user/sanglp/hadoop/hello.txt");
            FSDataInputStream fis = fs.open(path);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            IOUtils.copyBytes(fis, baos, 1024);
            fis.close();
            System.out.print(new String(baos.toByteArray()));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Permission configuration. Writing as the wrong user fails with:
     * org.apache.hadoop.security.AccessControlException: Permission denied:
     *   user=hadoop, access=WRITE, inode="/user/sanglp/hadoop":sanglp:supergroup:drwxr-xr-x
     * Fix: hdfs dfs -chmod o+w /user/sanglp/hadoop
     */
    @Test
    public void testWrite() {
        Configuration configuration = new Configuration();
        try {
            FileSystem fs = FileSystem.get(configuration);
            FSDataOutputStream out = fs.create(new Path("/user/sanglp/hadoop/a.txt"));
            out.write("how are you".getBytes());
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Custom replica count and block size.
     *
     * Setting the block size too small fails with:
     * org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block size
     *   is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 5 < 1048576
     *
     * To allow it, lower the minimum in hdfs-site.xml:
     * <property>
     *     <name>dfs.namenode.fs-limits.min-block-size</name>
     *     <value>5</value>
     * </property>
     */
    @Test
    public void testWrite2() {
        Configuration configuration = new Configuration();
        try {
            FileSystem fs = FileSystem.get(configuration);
            // Overload used:
            // public FSDataOutputStream create(Path f, boolean overwrite, int bufferSize, short replication, long blockSize)
            FSDataOutputStream out = fs.create(new Path("/user/sanglp/hadoop/a.txt"), true, 1024, (short) 2, 5);
            out.write("how are you".getBytes());
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
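The create() overload above only sets the replica count and block size for a file that is being written. For a file that already exists, the replication factor can also be changed through the same FileSystem API. A minimal sketch, assuming the same cluster configuration and reusing the /user/sanglp/hadoop/a.txt path from the tests above (the class name SetReplicationSketch is ours, not from the original article):

package com.slp.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

/** Sketch: change the replica count of an existing HDFS file and read the result back. */
public class SetReplicationSketch {
    public static void main(String[] args) throws IOException {
        Configuration configuration = new Configuration(); // picks up core-site.xml / hdfs-site.xml on the classpath
        FileSystem fs = FileSystem.get(configuration);

        Path path = new Path("/user/sanglp/hadoop/a.txt"); // file created by the tests above

        // setReplication only records the new target; the NameNode adds or
        // removes block copies asynchronously in the background.
        boolean accepted = fs.setReplication(path, (short) 3);
        System.out.println("replication change accepted: " + accepted);

        // Read the per-file replication and block size back from the NameNode.
        FileStatus status = fs.getFileStatus(path);
        System.out.println("replication = " + status.getReplication());
        System.out.println("block size  = " + status.getBlockSize());

        fs.close();
    }
}

The block size, by contrast, is fixed once a file has been written; changing it means rewriting the file with a different blockSize argument to create(). The actual replication and block layout can be verified from the shell with hdfs fsck /user/sanglp/hadoop/a.txt -files -blocks.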

  

"Big Data series" uses APIs to modify the number of replicas and block sizes of Hadoop

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.