HCIA-Big Data H13-711 training material

  Edina  08-20-2019

To help you pass the H13-711 HCIA-Big Data V2.0 exam, Passquestion has newly updated its HCIA-Big Data H13-711 training material, which closely mirrors the real exam. Passquestion promises that with it you can pass the Huawei certification H13-711 exam. The accuracy rate of the HCIA-Big Data H13-711 training material provided by Passquestion is very high, and Passquestion guarantees that you will pass the exam on your first attempt.

H13-711 HCIA-Big Data Exam Contents

The HCIA-Big Data certification is based on big data technologies and focuses on assessing and certifying the basic technical principles and operation practices of common, essential big data components, as well as the functions and features of the FusionInsight HD solution.

Passing the HCIA-Big Data exam certifies that the holder is familiar with typical application scenarios of big data, has mastered the technical principles and architectures of common and essential big data components, and knows how to use Huawei FusionInsight HD to import and export massive volumes of data, how to perform basic operations on HDFS and on the HBase client and HBase tables, and how to run common HQL statements on Hive. In short, the holder has acquired the knowledge and skills required for positions such as big data pre-sales and after-sales technical support, big data project management, big data O&M, big data development, and big data analysis.
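
To illustrate the last point, below is a minimal, hedged sketch of running a common HQL statement against Hive from Java over JDBC. The HiveServer2 host, port, user, and table name are placeholders, the hive-jdbc dependency is assumed to be on the classpath, and a secured FusionInsight HD cluster would additionally require Kerberos authentication settings.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveHqlExample {
        public static void main(String[] args) throws Exception {
            // Placeholder HiveServer2 address; adjust host, port, and database as needed.
            String url = "jdbc:hive2://hiveserver2-host:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "hiveuser", "");
                 Statement stmt = conn.createStatement();
                 // A common HQL statement: count the rows of a hypothetical table.
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM demo_table")) {
                while (rs.next()) {
                    System.out.println("Row count: " + rs.getLong(1));
                }
            }
        }
    }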

H13-711 HCIA-Big Data Exam Knowledge Points

Chapter 01 Big Data Industry and Technological Trends
Chapter 02 HDFS – Distributed File System Technology
Chapter 03 MapReduce – Distributed Offline Batch Processing and Yarn – Resource Coordination
Chapter 04 Spark2x – In-memory Distributed Computing Engine
Chapter 05 HBase – Distributed NoSQL Database
Chapter 06 Hive – Distributed Data Warehouse
Chapter 07 Streaming – Distributed Stream Computing Engine
Chapter 08 Flink – Stream Processing and Batch Processing Platform
Chapter 09 Loader – Data Conversion
Chapter 10 Flume – Massive Log Aggregation
Chapter 11 Kafka – Distributed Message Subscription System
Chapter 12 ZooKeeper – Distributed Coordination Service
Chapter 13 FusionInsight HD Solution Overview

Download HCIA-Big Data H13-711 Training Material:

1. Which of the following descriptions about storing a large number of small files are correct? (Multiple Choice)
A. Storing a large number of small files in HDFS puts a lot of pressure on the NameNode.
B. HBase stores a large number of small files, and Compaction will waste IO resources.
C. Huawei HFS is suitable for storing large files and can selectively store files in HDFS or MOB.
D. All of the above statements are wrong.
Answer: ABC
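
Option C refers to Huawei's HFS, which can place file content either in HDFS or in HBase MOB (medium object) storage. As a rough, hedged illustration of the MOB side only, the sketch below uses the standard HBase 2.x Java admin API to create a table whose column family stores values above a size threshold as MOB files, keeping them out of the normal compaction path (the concern in option B); the table and family names are invented for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MobTableExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Values larger than 100 KB in family "f" are written as MOB files
                // instead of ordinary cells, reducing compaction I/O for file-like data.
                admin.createTable(
                    TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_files"))
                        .setColumnFamily(
                            ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                                .setMobEnabled(true)
                                .setMobThreshold(100 * 1024L)
                                .build())
                        .build());
            }
        }
    }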

2. Flink is a unified computing framework that combines batch processing and stream processing. Its core is a stream data processing engine for data distribution and parallel computing.
A. True
B. False
Answer: A
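
For reference, here is a minimal, hedged Flink sketch in Java using the DataStream API; it only illustrates that a finite (batch-like) input is handled by the same stream-processing engine. The element values are invented, and the Flink streaming and client dependencies are assumed to be on the classpath.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FlinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            // A tiny bounded stream; Flink treats batch input as a finite stream.
            env.fromElements(1, 2, 3, 4, 5)
               .filter(x -> x % 2 == 1)   // keep odd numbers
               .print();
            env.execute("flink-sketch");
        }
    }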

3. When using Loader for data import and export, you must go through the Reduce phase for data processing.
A. True
B. False
Answer: B

4. What is the default Block Size of HDFS in the FusionInsight HD system?
A. 32MB
B. 64MB
C. 128MB
D. 256MB
Answer: C
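
As a quick way to confirm this on a running cluster, the hedged Java sketch below reads the effective default block size through the standard HDFS client API. It assumes the cluster's client configuration files (core-site.xml and hdfs-site.xml, where dfs.blocksize is defined) are on the classpath; the path used is arbitrary.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up *-site.xml from the classpath
            try (FileSystem fs = FileSystem.get(conf)) {
                long blockSize = fs.getDefaultBlockSize(new Path("/"));
                // On a default FusionInsight HD setup this prints 134217728 (128 MB).
                System.out.println("Default HDFS block size: " + blockSize + " bytes");
            }
        }
    }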

5. Which of the following descriptions about Hive log collection on the FusionInsight Manager interface is incorrect?
A. You can specify an instance for log collection, for example, collecting only MetaStore logs.
B. You can specify the time period for log collection, for example, collecting only logs from 2016-1-1 to 2016-1-10.
C. You can specify the node IP address for log collection, for example, downloading only the logs of one IP address.
D. You can specify a particular user for log collection, for example, downloading only the logs generated by user UserA.
Answer: D
