Cloudera Hadoop Administrator (CCAH) and Developer (CCA-175) Exam Outline

Source: Internet
Author: User
Tags: sqoop

Cloudera Certified Administrator for Apache Hadoop (CCA-500)
Number of Questions:
Time Limit: minutes
Passing Score: 70%
Languages: Chinese, Japanese

Exam Sections and Blueprint

1. HDFS (17%)

  • Describe the function of HDFS daemons

  • Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing

  • Identify current features of computing systems that motivate a system like Apache Hadoop

  • Classify major goals of HDFS design

  • Given a scenario, identify appropriate use case for HDFS Federation

  • Identify components and daemons of an HDFS HA-Quorum cluster

  • Analyze the role of HDFS Security (Kerberos)

  • Determine the best data serialization choice for a given scenario

  • Describe file read and write paths

  • Identify the commands to manipulate files in the Hadoop File System Shell (example commands follow this list)
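
To make that last point concrete, a few representative File System Shell commands (the paths are illustrative):

    # List a directory and check space usage
    hdfs dfs -ls /user/alice
    hdfs dfs -du -h /user/alice

    # Copy a local file into HDFS and back out again
    hdfs dfs -put data.txt /user/alice/data.txt
    hdfs dfs -get /user/alice/data.txt ./data-copy.txt

    # Display a file, then remove a file and a directory
    hdfs dfs -cat /user/alice/data.txt
    hdfs dfs -rm /user/alice/data.txt
    hdfs dfs -rm -r /user/alice/old-data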

2. YARN and MapReduce Version 2 (MRv2) (17%)

  • Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 affects cluster settings

  • Understand how to deploy MapReduce v2 (MRv2/YARN), including all YARN daemons

  • Understand the basic design strategy for MapReduce v2 (MRv2)

  • Determine how YARN handles resource allocations

  • Identify the workflow of MapReduce job running on YARN

  • Determine which files you must change, and how, in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN (a configuration sketch follows this list)
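
The exact files involved depend on the distribution, but a minimal sketch of the change looks like this: mapred-site.xml points the framework at YARN, and yarn-site.xml enables the shuffle auxiliary service (standard Hadoop 2 property names):

    <!-- mapred-site.xml: run MapReduce jobs on YARN rather than MRv1 -->
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>

    <!-- yarn-site.xml: enable the MapReduce shuffle service on NodeManagers -->
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>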

3. Hadoop Cluster Planning (16%)

  • Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster

  • Analyze the choices in selecting an OS

  • Understand kernel tuning and disk swapping (see the tuning sketch after this list)

  • Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario

  • Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA

  • Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, and disk I/O

  • Disk sizing and configuration, including JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster

  • Network topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario
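
To illustrate the kernel-tuning point above, two settings commonly recommended for Hadoop worker nodes (typical community recommendations, not values prescribed by the exam):

    # Discourage the kernel from swapping Hadoop daemon memory to disk
    sysctl -w vm.swappiness=1
    echo 'vm.swappiness=1' >> /etc/sysctl.conf

    # Disable transparent huge page defragmentation, a known source of
    # latency spikes on Hadoop nodes (the path varies by Linux distribution)
    echo never > /sys/kernel/mm/transparent_hugepage/defrag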

4. Hadoop Cluster Installation and Administration (25%)

  • Given a scenario, identify how the cluster would handle disk and machine failures

  • Analyze a logging configuration and logging configuration file format

  • Understand the basics of Hadoop metrics and cluster health monitoring

  • Identify the function and purpose of available tools for cluster monitoring

  • Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, and Pig

  • Identify the function and purpose of available tools for managing the Apache Hadoop file system (example commands follow this list)
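
For the file-system management point, the stock command-line tools include (illustrative invocations):

    # Summarize capacity, DataNode status, and under-replicated blocks
    hdfs dfsadmin -report

    # Check file-system health, listing files, blocks, and any corruption
    hdfs fsck / -files -blocks

    # Enter and leave NameNode safe mode around maintenance windows
    hdfs dfsadmin -safemode enter
    hdfs dfsadmin -safemode leave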

5. Resource Management (10%)

    • Understand the overall design goals of each of the Hadoop schedulers

    • Given a scenario, determine how the FIFO Scheduler allocates cluster resources

    • Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN

    • Given a scenario, determine how the Capacity Scheduler allocates cluster resources
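
As an example of how scheduler behavior is expressed in configuration, a minimal Fair Scheduler allocation file (queue names, weights, and minimums are invented for the sketch):

    <!-- fair-scheduler.xml: two queues sharing the cluster 2:1 by weight -->
    <allocations>
      <queue name="production">
        <weight>2.0</weight>
        <minResources>10000 mb,10 vcores</minResources>
      </queue>
      <queue name="adhoc">
        <weight>1.0</weight>
      </queue>
    </allocations>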

6. Monitoring and Logging (15%)

    • Understand the functions and features of Hadoop's metric collection abilities

    • Analyze the NameNode and JobTracker Web UIs

    • Understand how to monitor cluster daemons

    • Identify and monitor CPU usage on master nodes

    • Describe how to monitor swap and memory allocation on all nodes

    • Identify how to view and manage Hadoop's log files (example commands follow this list)

    • Interpret a log file
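
A few commands behind these points (log locations follow the CDH packaging defaults; the application ID is a placeholder):

    # Follow the NameNode log as it is written
    tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log

    # Check memory and swap usage on a node
    free -m
    vmstat 5

    # Retrieve the aggregated logs of a finished YARN application
    yarn logs -applicationId application_1400000000000_0001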

CCA Spark and Hadoop Developer Exam (CCA175)

Number of Questions: 10–12 performance-based (hands-on) tasks on a CDH 5 cluster. See below for the full cluster configuration.

Time Limit: 120 minutes

Passing Score: 70%

Language: English, Japanese (forthcoming)

Required Skills

Data Ingest

The skills to transfer data between external systems and your cluster. This includes the following:

    • Import data from a MySQL database into HDFS using Sqoop (see the sketch after this list)

    • Export data to a MySQL database from HDFS using Sqoop

    • Change the delimiter and file format of data during import using Sqoop

    • Ingest real-time and near-real-time (NRT) streaming data into HDFS using Flume

    • Load data into and out of HDFS using the Hadoop File System (FS) commands
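
A sketch of the Sqoop tasks above (host, credentials, table names, and paths are placeholders):

    # Import a MySQL table into HDFS, tab-delimited; add
    # --as-avrodatafile or --as-parquetfile to change the file format
    sqoop import \
      --connect jdbc:mysql://dbhost/retail_db \
      --username dbuser --password dbpass \
      --table orders \
      --target-dir /user/cloudera/orders \
      --fields-terminated-by '\t'

    # Export results from HDFS back into a MySQL table
    sqoop export \
      --connect jdbc:mysql://dbhost/retail_db \
      --username dbuser --password dbpass \
      --table order_totals \
      --export-dir /user/cloudera/order_totals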

Transform, Stage, Store

Convert a set of data values in a given format stored in HDFS into new data values and/or a new data format, and write them into HDFS. This includes writing Spark applications in both Scala and Python:

    • Load data from HDFS and store results back to HDFS using Spark

    • Join disparate datasets together using Spark

    • Calculate aggregate statistics (e.g., average or sum) using Spark

    • Filter data into a smaller dataset using Spark

    • Write a query that produces ranked or sorted data using Spark
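
A minimal PySpark sketch touching each of the tasks above (paths, field layouts, and the filter threshold are invented for the example):

    from pyspark import SparkContext

    sc = SparkContext(appName="TransformExample")

    # Load two tab-delimited datasets from HDFS as (key, value) pairs
    orders = sc.textFile("/user/cloudera/orders") \
               .map(lambda line: line.split("\t")) \
               .map(lambda f: (f[0], float(f[1])))       # (customer_id, amount)
    customers = sc.textFile("/user/cloudera/customers") \
                  .map(lambda line: line.split("\t")) \
                  .map(lambda f: (f[0], f[1]))           # (customer_id, name)

    # Aggregate: total spend per customer, then join in the customer name
    totals = orders.reduceByKey(lambda a, b: a + b)
    joined = totals.join(customers)                      # (id, (total, name))

    # Filter to a smaller dataset and rank by total, descending
    ranked = joined.filter(lambda kv: kv[1][0] > 1000.0) \
                   .sortBy(lambda kv: kv[1][0], ascending=False)

    # Store the results back into HDFS
    ranked.map(lambda kv: "%s\t%s\t%.2f" % (kv[0], kv[1][1], kv[1][0])) \
          .saveAsTextFile("/user/cloudera/top_customers")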

Data Analysis

Use Data Definition Language (DDL) to create tables in the Hive Metastore for use by Hive and Impala.

    • Read and/or create a table in the Hive Metastore in a given schema

    • Extract an Avro schema from a set of data files using avro-tools (see the sketch after this list)

    • Create a table in the Hive Metastore using the Avro file format and an external schema file

    • Improve query performance by creating partitioned tables in the Hive Metastore

    • Evolve an Avro schema by changing JSON files
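
A sketch of the Avro workflow above (file names and locations are placeholders; STORED AS AVRO requires a reasonably recent Hive, and older releases spell out the Avro SerDe classes instead):

    # Extract the schema embedded in an Avro data file, then publish it
    avro-tools getschema part-m-00000.avro > orders.avsc
    hdfs dfs -put orders.avsc /user/cloudera/schemas/

    # Create a Hive table over the Avro files using the external schema
    hive -e "
      CREATE EXTERNAL TABLE orders_avro
      STORED AS AVRO
      LOCATION '/user/cloudera/orders'
      TBLPROPERTIES ('avro.schema.url'='hdfs:///user/cloudera/schemas/orders.avsc');
    "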
