A collection of Hadoop interview questions

Source: Internet
Author: User
Tags: shuffle, sort

Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?

The following are the most common InputFormats defined in Hadoop (TextInputFormat is the default):

- TextInputFormat

- KeyValueTextInputFormat

- SequenceFileInputFormat

Q2. What is the difference between the TextInputFormat and KeyValueTextInputFormat classes?

TextInputFormat: reads lines of text files and provides the byte offset of the line as the key to the Mapper and the line itself as the value.

KeyValueTextInputFormat: reads a text file and parses each line into a key/value pair. Everything up to the first tab character is sent as the key to the Mapper, and the remainder of the line is sent as the value.
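A minimal sketch of selecting between the two formats, assuming the newer org.apache.hadoop.mapreduce API; the input path and job name are placeholders, not from the original text:

```java
// Sketch: configuring the input format on a job (new MapReduce API).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class InputFormatSetup {
    public static Job configure(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "input-format-demo");
        FileInputFormat.addInputPath(job, new Path("/data/in"));   // hypothetical path

        // TextInputFormat (the default): key = byte offset, value = the whole line.
        job.setInputFormatClass(TextInputFormat.class);

        // KeyValueTextInputFormat: key = text before the first tab, value = the rest.
        // job.setInputFormatClass(KeyValueTextInputFormat.class);
        return job;
    }
}
```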

Q3. What is an InputSplit in Hadoop?

When a Hadoop job is run, the framework splits the input files into chunks and assigns each split to a mapper to process. Each such chunk is called an InputSplit.

Q4. How is the splitting of a file invoked in the Hadoop framework?

It is invoked by the Hadoop framework, which calls the getSplits() method of the InputFormat class configured by the user (for example, FileInputFormat).

Q5. Consider this scenario: in an M/R system,

- HDFS block size is 64 MB

- Input format is FileInputFormat

- We have 3 files of size 64 KB, 65 MB and 127 MB

How many input splits would be made by the Hadoop framework?

Hadoop would make 5 splits, as follows:

- 1 split for the 64 KB file

- 2 splits for the 65 MB file (64 MB + 1 MB)

- 2 splits for the 127 MB file (64 MB + 63 MB)

Q6. What is the purpose of the RecordReader in Hadoop?

The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.

Q7. After the map phase finishes, the Hadoop framework performs "partitioning, shuffle and sort". Explain what happens in this phase.

-Partitioning

Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.

-Shuffle

After the first map tasks have completed, the nodes may still be performing several more map tasks, but they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

-Sort

Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before it is presented to the reducer.

Q9. If no custom partitioner is defined in Hadoop, how is the data partitioned before it is sent to the reducer?

The default partitioner computes a hash value for the key and assigns the partition based on this result.
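A sketch of the idea, essentially what Hadoop's default HashPartitioner does; the class shown here is illustrative rather than the shipped implementation:

```java
// Sketch: hash-based partitioning. Masking off the sign bit and taking the modulo
// of the number of reduce tasks guarantees that the same key always lands in the
// same partition, regardless of which mapper produced it.
import org.apache.hadoop.mapreduce.Partitioner;

public class HashLikePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```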

Q10. What is a combiner

The combiner is a "mini-reduce" process which operates only on data generated by one mapper. The combiner receives as input all data emitted by the Mapper instances on a given node. The output from the combiner is then sent to the reducers, instead of the output from the mappers.

Q11. Give an example scenario where a combiner can be used and where it cannot be used.

There can be several examples; the following are the most common ones.

- Scenario where you can use a combiner:

Getting a list of distinct words in a file

- Scenario where you cannot use a combiner:

Calculating the mean of a list of numbers
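As an illustration of when reusing the reducer as a combiner is safe: a word-count style sum reducer works, because addition is associative and commutative, whereas a reducer computing a mean could not be reused this way (the mean of partial means is not the overall mean). The class and names below are illustrative, not from the original text:

```java
// Sketch: a sum reducer that can also safely serve as a combiner.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();              // partial sums combine into the same total
        }
        ctx.write(key, new IntWritable(sum));
    }
}
```

It would be wired in with job.setCombinerClass(SumReducer.class) in addition to job.setReducerClass(SumReducer.class).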

Q12. What is Job Tracker

The Job Tracker is the service within Hadoop that runs MapReduce jobs on the cluster.

Q13. What are some typical functions of Job Tracker

The following are some typical tasks of Job Tracker

-Accepts jobs from clients

-It talks to the Namenode to determine the location of the data

-It locates Tasktracker nodes with available slots at or near the data

-It submits the work to the chosen Task Tracker nodes and monitors the progress of each task by receiving heartbeat signals from the Task Trackers

Q14. What is the Task Tracker?

The Task Tracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a Job Tracker.

Q15. What is the relationship between jobs and tasks in Hadoop?

One job is broken into one or many tasks in Hadoop.

Q16. Suppose Hadoop spawned tasks for a job and one of the tasks failed. What will Hadoop do?

It will restart the task on some other Task Tracker, and only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.

Q17. Hadoop achieves parallelism by dividing the tasks across many nodes; it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?

Speculative Execution

Q18. How does speculative execution work in Hadoop?

The Job Tracker makes different Task Trackers process the same input. When tasks complete, they announce this fact to the Job Tracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the Task Trackers to abandon those tasks and discard their outputs. The reducers then receive their inputs from whichever mapper completed successfully first.

Q19. Using the command line on Linux, how would you

- List all jobs running in the Hadoop cluster:  hadoop job -list

- Kill a job:  hadoop job -kill <job-id>

Q20. What is Hadoop streaming

Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

Q21. What is the characteristic of the streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk etc.?

Hadoop streaming allows the use of arbitrary programs for the Mapper and Reducer phases of a MapReduce job by having both mappers and reducers receive their input on stdin and emit output (key, value) pairs on stdout.

Q22. What is the Distributed Cache in Hadoop?

The Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars, and so on) needed by applications during execution of the job. The framework copies the necessary files to each slave node before any tasks for the job are executed on that node.
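A minimal sketch of registering a cache file, assuming the Hadoop 2.x Job API; the path and symlink name are hypothetical:

```java
// Sketch: registering a file with the distributed cache so every task node
// gets a local copy before tasks start.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CacheSetup {
    public static Job configure(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "distributed-cache-demo");
        // The part after '#' is the symlink name the file appears under
        // in each task's working directory.
        job.addCacheFile(new URI("/apps/lookup/countries.txt#countries.txt"));
        return job;
    }
}
```

Inside a task, the cached file can then be opened as an ordinary local file under the symlink name, or listed via context.getCacheFiles().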

Q23. What is the benefit of the Distributed Cache? Why can't we just keep the file in HDFS and have the application read it?

This is because the Distributed Cache is much faster. It copies the file to all Task Trackers at the start of the job. If a Task Tracker then runs 10 or 100 mappers or reducers, they all use the same local copy of the cached file. On the other hand, if you put code in your MapReduce job to read the file from HDFS, every mapper will try to access it from HDFS; if a Task Tracker runs 100 map tasks, it will try to read this file from HDFS 100 times. HDFS is also not very efficient when used in this way.

Q24. What mechanism does the Hadoop framework provide to synchronize changes made to the Distributed Cache during application execution?

This is a trick question. There is no such mechanism. The Distributed Cache is read-only during job execution.

Q25. Have you ever used counters in Hadoop? Give us an example scenario.

Anybody who claims to have worked on a Hadoop project is expected to have used counters.

Q26. Is it possible to provide multiple inputs to Hadoop? If yes, then how can you give multiple directories as input to a Hadoop job?

Yes, the input format class provides methods to add multiple directories as input to a Hadoop job
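A sketch of two common approaches, assuming the newer API; all paths are hypothetical:

```java
// Sketch: feeding several directories into one job.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class MultiInputSetup {
    public static void configure(Job job) throws Exception {
        // Option 1: simply add several input paths.
        FileInputFormat.addInputPath(job, new Path("/data/2023"));
        FileInputFormat.addInputPath(job, new Path("/data/2024"));

        // Option 2: MultipleInputs allows a different input format
        // (and optionally a different mapper) per directory.
        MultipleInputs.addInputPath(job, new Path("/logs/raw"), TextInputFormat.class);
    }
}
```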

Q27. Is it possible to have a Hadoop job write its output to multiple directories? If yes, then how?

Yes, by using the MultipleOutputs class.
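A sketch of declaring named outputs with the MultipleOutputs class; the output names and key/value types are illustrative:

```java
// Sketch: routing output to differently named outputs with MultipleOutputs.
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MultiOutputSetup {
    public static void configure(Job job) {
        // Declare two named outputs. In the reducer, a MultipleOutputs instance
        // created from the context writes to them, e.g. mos.write("errors", k, v)
        // or mos.write("stats", k, v, "stats/part"), where the last argument is a
        // path relative to the job's output directory.
        MultipleOutputs.addNamedOutput(job, "errors", TextOutputFormat.class,
                Text.class, Text.class);
        MultipleOutputs.addNamedOutput(job, "stats", TextOutputFormat.class,
                Text.class, IntWritable.class);
    }
}
```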

Q28. What will a Hadoop job do if you try to run it and the output directory is already present? Will it

-Overwrite It

-Warn you and continue

-Throw an exception and exit

The Hadoop job would throw an exception and exit.

Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?

This is a trick question. You cannot set it directly; the number of mappers is determined by the number of input splits.

Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?

You can either do it programmatically by using the setNumReduceTasks() method of the JobConf class, or set it as a configuration setting.
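A sketch of both approaches, shown here with the newer Job API (the older JobConf class exposes the same setNumReduceTasks() method); the value 10 is arbitrary:

```java
// Sketch: fixing the number of reducers (the mapper count, by contrast,
// is derived from the number of input splits and cannot be set directly).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCount {
    public static Job configure(Configuration conf) throws Exception {
        // Option 1: as a configuration setting before the job is created
        // ("mapred.reduce.tasks" in the old property naming).
        conf.setInt("mapreduce.job.reduces", 10);

        Job job = Job.getInstance(conf, "reducer-count-demo");

        // Option 2: programmatically on the job itself.
        job.setNumReduceTasks(10);
        return job;
    }
}
```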

32. Design a system that can extract data in a specified format from a growing number of different data sources.
Requirements: 1. From the results of a run it should be possible to roughly judge the extraction quality, and the extraction method should be open to continuous improvement;
2. Given the diversity of data sources, provide a flexibly configurable program framework;
3. The data sources may include MySQL, SQL Server, and so on;
4. The system should be capable of continuous mining, i.e. it can repeatedly extract more information.

33. A classic question:

There are 100 million evenly distributed integers; find the optimal algorithm to obtain the largest 1K of them.
(Ignore memory limits and the cost of reading and writing storage; the algorithm with the lowest time complexity is the best one.)

Let me first say what I think: chunking. For example, split the data into 10,000 blocks of 10,000 numbers each and find the maximum of each block. From those 10,000 maxima, take the largest 1K; the blocks whose maxima fall in the remaining 9K can be thrown away, and the largest 1K values are then sought within the blocks that remain. The original problem is thereby scaled down to 1/10.

Problems:
1. What is the optimal time complexity of this chunking method?
2. How should the blocking be chosen to be optimal? For example, one could instead use 100,000 blocks of 1,000 numbers each, shrinking the problem to 1/100 of its original size, but in fact the complexity does not decrease.
3. Is there a better way to solve this problem?
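For problem 3, a common improvement over chunking is a bounded min-heap of size K, which finds the top K in a single pass in O(N log K) time with O(K) extra space. A minimal in-memory sketch in plain Java (no MapReduce):

```java
// Sketch: keep the K largest values seen so far in a min-heap of size K.
import java.util.PriorityQueue;

public class TopK {
    public static PriorityQueue<Integer> topK(int[] data, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(k);   // min-heap
        for (int x : data) {
            if (heap.size() < k) {
                heap.offer(x);
            } else if (x > heap.peek()) {
                heap.poll();    // evict the smallest of the current top-K
                heap.offer(x);
            }
        }
        return heap;            // contains the K largest values
    }
}
```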

34. Describe the approximate process of MapReduce.

35. What are the roles of the Combiner and the Partitioner?

36. Implement the SQL statement SELECT COUNT(x) FROM a GROUP BY b with MapReduce.
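A hedged sketch of one way to express this query, assuming a tab-separated input where column 0 holds b and column 1 holds x (COUNT(x) counts only non-NULL x):

```java
// Sketch: SELECT COUNT(x) FROM a GROUP BY b as a MapReduce job.
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class GroupByCount {
    public static class GroupMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] cols = line.toString().split("\t", -1);
            if (cols.length > 1 && !cols[1].isEmpty()) {       // x is not NULL
                ctx.write(new Text(cols[0]), ONE);             // key = b, value = 1
            }
        }
    }

    public static class CountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text b, Iterable<LongWritable> ones, Context ctx)
                throws IOException, InterruptedException {
            long count = 0;
            for (LongWritable one : ones) {
                count += one.get();
            }
            ctx.write(b, new LongWritable(count));             // (b, COUNT(x))
        }
    }
}
```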

37. How can a join of two tables be implemented with MapReduce, and what methods are there?

1. In which stages of MapReduce does sorting occur? Can these sorts be avoided, and why?

A: A MapReduce job consists of a map stage and a reduce stage, both of which sort the data; in this sense, the MapReduce framework is essentially a distributed sort. In the map stage, each map task writes to local disk a file sorted by key (a quick sort is used internally; several intermediate files may be produced, but they are eventually merged into one). In the reduce stage, each reduce task sorts the data it receives, so that the data is divided into groups by key, which are then passed to reduce() group by group. Many people mistakenly believe that if you do not use a Combiner in the map stage there will be no sorting; this is wrong. Regardless of whether a Combiner is used, the map task sorts the data it produces (unless there are no reduce tasks, in which case it does not sort). In fact, the map-side sort exists to reduce the sorting load on the reduce side. Since these sorts are performed automatically by MapReduce and are out of the user's control, they cannot be avoided in Hadoop 1.x and cannot be turned off, but in Hadoop 2.x they can be turned off.

2. When writing a MapReduce job, how do you arrange for the reduce stage to sort first by key and then by value?

A: This problem is commonly called "secondary sort". The most common method is to fold the value into the key to form a composite key, and then customize the key's sort order (by implementing a WritableComparable for the key).
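A sketch of such a composite key; the field names are illustrative. A custom Partitioner and grouping comparator that look only at the natural key would complete the secondary sort, but are omitted here:

```java
// Sketch: natural key + value packed into one WritableComparable whose compareTo()
// sorts first by the natural key, then by the value.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class CompositeKey implements WritableComparable<CompositeKey> {
    private String naturalKey;
    private long value;

    public CompositeKey() { }                       // required no-arg constructor

    public CompositeKey(String naturalKey, long value) {
        this.naturalKey = naturalKey;
        this.value = value;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(naturalKey);
        out.writeLong(value);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        naturalKey = in.readUTF();
        value = in.readLong();
    }

    @Override
    public int compareTo(CompositeKey other) {
        int cmp = naturalKey.compareTo(other.naturalKey);            // primary: key
        return cmp != 0 ? cmp : Long.compare(value, other.value);    // secondary: value
    }
}
```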

3. How can a join of two tables be implemented with MapReduce? Consider several situations: (1) one table is large and the other is small (it fits in memory); (2) both tables are large.

A: The first situation is simpler: just put the small table into the DistributedCache. For the second, the commonly used methods are: map-side join (requires the input data to be sorted, typically used with data tables in HBase), reduce-side join, and semi join (semi-connection); details can be found online.
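A sketch of case (1), the map-side join via the distributed cache; the file name, column layout, and tab delimiter are assumptions:

```java
// Sketch: the small table has been shipped to every node via the distributed cache
// (symlink name "small.txt" is hypothetical) and is loaded into memory in setup();
// each record of the large table is then joined against it in map().
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Map<String, String> smallTable = new HashMap<>();

    @Override
    protected void setup(Context ctx) throws IOException {
        // "small.txt" is the symlink name given when the file was added to the cache.
        try (BufferedReader in = new BufferedReader(new FileReader("small.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split("\t", 2);
                if (cols.length == 2) {
                    smallTable.put(cols[0], cols[1]);      // join key -> payload
                }
            }
        }
    }

    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        String[] cols = line.toString().split("\t", 2);
        if (cols.length < 2) {
            return;
        }
        String match = smallTable.get(cols[0]);
        if (match != null) {                               // inner join on column 0
            ctx.write(new Text(cols[0]), new Text(cols[1] + "\t" + match));
        }
    }
}
```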
