26 Preliminary Use of a Cluster

Design Ideas of HDFS
Divide and conquer: large files and large batches of files are distributed across a large number of servers, so that massive data can be analyzed with a divide-and-conquer approach.
Role in big data systems: provides data storage services for various distributed computing frameworks (such as MapReduce, Spark, Tez, ...).
Key concepts: file splitting into blocks, replica storage, metadata.

26.1 HDFS Use
1. Vie
"), also add our standard Spark classpath, built using compute-classpath.sh.
Classpath= ' $FWDIR/bin/compute-classpath.sh '
Classdata-path= "$SPARK _qiutest_jar: $CLASSPATH"
# find Java Binary
If [-N "${java_home}"]; Then
Runner= "${java_home}/bin/java"
Else
If [' command-v Java ']; Then
Runner= "Java"
Else
echo "Java_home is not set" >2
Exit 1
Fi
Fi
If ["$SPARK _print_launch_command" = = "1"]; Then
Echo-n "Spark Command:"
echo "$RUNNER"-CP "$CLASSPATH" "$@"
echo "=============================
At this point, the NameNode commits the file creation operation into persistent storage. If the NameNode dies before the file is closed, the file is lost.
The above approach was adopted after careful consideration of the characteristics of the target applications that run on HDFS: they need streaming writes to files. If a client wrote directly to a remote file without any client-side caching, network speed and network congestion would considerably reduce throughput. This approach is not without precedent; earlier distributed file systems, such as AFS, used client-side caching to improve performance.
I. Overview
In recent years, big data technology has been in full swing, and how to store huge amounts of data has become one of today's hot and difficult problems. The Hadoop Distributed File System (HDFS), as the distributed storage foundation of the Hadoop project, also provides data persistence for HBase, and it is very widely used in big data projects. HDFS is designed to run on commodity hardware.
Introduction
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems, yet it is also very different from them. HDFS is highly fault tolerant and intended to be deployed on low-cost hardware, and it provides high-throughput access to application data, making it suitable for applications with large data sets.
Java Operation of HDFS: Development Environment Construction
We previously described how to build an HDFS pseudo-distributed environment on Linux, and also introduced some common HDFS commands. But how do you do the same things at the code level? That is what this section covers:
1. First use IDEA to create a Maven project:
Maven defaults to a repository that
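Whichever repository Maven resolves against, the project needs the hadoop-client artifact on its classpath to talk to HDFS. A minimal dependency sketch, assuming the version should match the cluster (2.8.1, per the installation paths quoted later in this article):

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <!-- assumption: pick the version that matches your Hadoop installation -->
  <version>2.8.1</version>
</dependency>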
Now let's take a closer look at Hadoop's FileSystem class, which is used to interact with a Hadoop file system. Although we are mainly targeting HDFS here, our code should depend only on the abstract FileSystem class, so that it can interact with any Hadoop file system: when writing test code we can test against the local file system, and when deploying we switch to HDFS through configuration alone, with no code changes.
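A minimal sketch of that idea (the class name FsDemo is illustrative, not from the original): the concrete implementation returned by FileSystem.get() is chosen by the fs.defaultFS setting in the Configuration, not by the code itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsDemo { // hypothetical class name
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With fs.defaultFS=file:/// this returns a LocalFileSystem;
        // with fs.defaultFS=hdfs://localhost:9000 it returns a DistributedFileSystem.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/tmp"))); // same call either way
    }
}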
Multiple interfaces are available for accessing HDFS. The command-line interface is the simplest and the most familiar one for programmers.
In this example, HDFS in pseudo-distributed mode is used to simulate a distributed file system. For more information about how to configure pseudo-distributed mode, see the configuration sketched below; it makes the default file system of Hadoop HDFS.
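The configuration in question lives in core-site.xml; a typical pseudo-distributed entry looks like the following (host and port are assumptions, localhost:9000 being the conventional choice):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- assumption: the conventional pseudo-distributed NameNode address -->
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>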
02 Notes: Distributed File System HDFS principles and operation, HDFS API programming; new features in HDFS 2.x: high availability, federation, snapshots.
HDFS basic features:
/home/henry/app/hadoop-2.8.1/tmp/dfs/name/current (on the NameNode)
cat ./VERSION
namespaceID (namespace identification number, similar to a cluster identification number)
/home/henry/app/hadoop-2.8.1/tmp/dfs
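For reference, the VERSION file read by the cat command above contains fields like the following; every value shown here is a made-up placeholder:

namespaceID=1234567890
clusterID=CID-00000000-0000-0000-0000-000000000000
cTime=0
storageType=NAME_NODE
blockpoolID=BP-0000000000-127.0.0.1-0000000000000
layoutVersion=-63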
Replica Mechanism
1. Replica placement policy
The first replica is placed on the DataNode from which the file is uploaded; if the upload is submitted from outside the cluster, a node whose disk is not too slow and whose CPU is not too busy is selected at random. The second replica is placed on a node in a different rack from the first. The third replica is placed on a different node in the same rack as the second. Any additional replicas are placed on randomly chosen nodes. (The actual placements can be inspected with fsck, as shown below.)
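To see where the replicas of a particular file actually landed, fsck can report block locations (the path here is illustrative):

hdfs fsck /user/henry/data.txt -files -blocks -locations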
2. Replication factor
1) Whe
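The replication factor (the "copy coefficient" above) can also be set per file through the FileSystem API; a minimal sketch, with class name and path being illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo { // hypothetical class name
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/user/henry/data.txt"); // hypothetical path
        fs.setReplication(p, (short) 2); // request 2 replicas for this file
        System.out.println(fs.getFileStatus(p).getReplication());
    }
}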
1. Blocks. A hard disk has blocks, which represent the smallest unit of data that can be read or written, usually 512 bytes. A file system built on a single hard disk also has the concept of a block: it generally combines a group of disk blocks into one file-system block, usually a few KB in size. These details are transparent to users of the file system; users only know that they have written files of a certain size to the disk, or read files of a certain size from it.
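HDFS blocks are much larger (128 MB by default in Hadoop 2.x, via dfs.blocksize); a minimal sketch of inspecting a file's block size through the API (the class name is illustrative, the path comes from the command line):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo { // hypothetical class name
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path(args[0]));
        // Prints 134217728 (128 MB) on a default Hadoop 2.x HDFS.
        System.out.println("block size: " + status.getBlockSize());
    }
}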
Let's complete the unfinished part of the previous section and then analyze the internal principles of HDFS file reads and writes.
Enumerating files
The listStatus() method of FileSystem (org.apache.hadoop.fs.FileSystem) can list the contents of a directory:
public FileStatus[] listStatus(Path f) throws FileNotFoundException, IOException;
public FileStatus[] listStatus(Path[] files) throws FileNotFoundException, IOException;
public FileStatus[] listStatus(Path f, PathFilter filter) throws FileNotFoundException, IOException;
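A minimal runnable sketch of listStatus(), assuming the URI to list is passed on the command line (the class name is illustrative):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListFiles { // hypothetical class name
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(args[0]), conf);
        // List the directory and print each entry's path.
        for (FileStatus status : fs.listStatus(new Path(args[0]))) {
            System.out.println(status.getPath());
        }
    }
}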
The main class used for file operations in Hadoop is located in the org.apache.hadoop.fs package. Basic file operations include open, read, write, and close. In fact, the Hadoop file API is generic and can be used with file systems other than HDFS.
The starting point of the Hadoop file API is the FileSystem class, an abstract class for interacting with a file system. Different implementation subclasses exist to handle different concrete file systems, for example DistributedFileSystem for HDFS and LocalFileSystem for the local disk.
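The examples below demonstrate reading, deleting, and copying; for completeness, here is a minimal sketch of the fourth basic operation, writing, using the standard create() call (class name and file content are illustrative):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileWrite { // hypothetical class name
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(args[0]), conf);
        // create() returns an output stream; close() flushes the final block.
        FSDataOutputStream out = fs.create(new Path(args[0]));
        out.writeUTF("hello hdfs"); // illustrative content
        out.close();
    }
}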
Hadoop Distributed File System (HDFS)
A distributed file system is a file system that allows files to be shared across multiple hosts over a network, letting multiple users on multiple machines share files and storage space. HDFS is just one such system. It is suited to write-once, read-many scenarios: concurrent writes are not supported, and it is not appropriate for large numbers of small files.
2. HDFS
Use the command bin/hadoop fs -cat to print the contents of a file on HDFS to the console.
You can also use the HDFS API to read data, as follows:
import java.net.URI;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FileCat {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(args[0]), conf);
        // Open the file named by args[0] and copy its bytes to stdout;
        // 'true' closes the stream when the copy finishes.
        InputStream in = fs.open(new Path(args[0]));
        IOUtils.copyBytes(in, System.out, 4096, true);
    }
}
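Assuming the class has been compiled onto the classpath, it can be run with the hadoop launcher (the URI is illustrative):

hadoop FileCat hdfs://localhost:9000/user/henry/data.txt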
You can use the command line bin/hadoop fs -rm (or fs -rmr for recursive deletion) to delete files (or folders) on HDFS.
You can also use the HDFS API, as follows:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileDelete {
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.out.println("Usage: FileDelete <target>");
            System.exit(1);
        }
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(args[0]), conf);
        // Delete the path; 'true' deletes directories recursively.
        fs.delete(new Path(args[0]), true);
    }
}
Not much to say, let's go straight to the code.

package zhouls.bigdata.myWholeHadoop.HDFS.hdfs5;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * @author
 * @function Copying from the local file system to HDFS
 */
public class Copyinglocalfiletohdfs {
    /**
     * @function main() method
     * @param args
     * @throws IOException
     * @throws URISyntaxException
     */
    public static void main(String[] args) throws IOException, URISyntaxException {
        // Copy the local file args[0] to the HDFS destination args[1].
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI(args[1]), conf);
        fs.copyFromLocalFile(new Path(args[0]), new Path(args[1]));
        fs.close();
    }
}