Exam Number: 1Z0-449
Exam Title: Oracle Big Data 2017 Implementation Essentials
Associated Certification Paths
Passing this exam is required to earn the following certification:
Oracle Big Data 2017 Certification Implementation Specialist
Duration: 120 minutes
Number of Questions: 72
Passing Score: 67%
Validated Against:
The exam has been validated against Oracle Big Data Appliance X4-2.
Format: Multiple Choice
Complete Recommended Training
The following training is recommended, but not required, preparation for the exam:
For Partners Only
Oracle Big Data 2016 Implementation Specialist
Oracle Big Data 2016 Implementation Boot Camp
OU Training
Oracle Big Data Fundamentals
Oracle NoSQL Database for Administrators
Additional Preparation and Information
A combination of Oracle training and hands-on experience (attained via labs and/or field experience) provides the best preparation for passing the exam.
Oracle Documentation
Oracle Big Data Documentation
Oracle NoSQL Documentation
Product tutorials
Big Data Learning Library
Datasheets and white papers
Oracle Big Data Resources and Whitepapers
Oracle NoSQL Enterprise Edition
Apache Flume User Guide
Big Data Technical Overview
Describe the architectural components of the Big Data Appliance
Describe how Big Data Appliance integrates with Exadata and Exalytics
Identify and architect the services that run on each node of the Big Data Appliance as it expands from a single node to multiple nodes
Describe the Big Data Discovery and Big Data Spatial and Graph solutions
Explain the business drivers behind Big Data and NoSQL versus Hadoop
Core Hadoop
Explain the Hadoop Ecosystem
Implement the Hadoop Distributed File System
Identify the benefits of the Hadoop Distributed File System (HDFS)
Describe the architectural components of MapReduce
Describe the differences between MapReduce and YARN
Describe Hadoop High Availability
Describe the importance of Namenode, Datanode, JobTracker, TaskTracker in Hadoop
Use Flume in the Hadoop Distributed File System
Implement the data flow mechanism used in Flume
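To illustrate the Flume objectives above: the source-channel-sink data flow is wired together in an agent properties file. A minimal sketch, with the agent name, spool directory, and NameNode host all made up for the example:

# agent1: feed files from a spool directory into HDFS through an in-memory channel
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: read files dropped into a local spool directory
agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /var/log/incoming
agent1.sources.src1.channels = ch1

# Channel: buffer events in memory between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000

# Sink: write events into HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.channel = ch1

The agent would then be started with something like: flume-ng agent --conf conf --conf-file flume.conf --name agent1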
Oracle NoSQL Database
Use an Oracle NoSQL database
Describe the architectural components (Shard, Replica, Master) of the Oracle NoSQL database
Set up the KVStore
Use KVLite to test NoSQL applications
Integrate an Oracle NoSQL database with an Oracle database and Hadoop
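As a quick reference for the KVLite objective above, a single-node KVLite instance can be started and verified roughly as follows (the root directory, store name, and port are illustrative):

# Start a lightweight single-node store for application testing
java -jar $KVHOME/lib/kvstore.jar kvlite -root ./kvroot -store kvstore -host localhost -port 5000

# Verify the store is reachable
java -jar $KVHOME/lib/kvstore.jar ping -host localhost -port 5000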
Cloudera Enterprise Hadoop Distribution
Describe the Hive architecture
Set up Hive with formatters and SerDe
Implement the Oracle Table Access for Hadoop connector
Describe the Impala real-time query and explain how it differs from Hive
Create a database and table from a Hadoop Distributed File System file in Hive
Use Pig Latin to query data in HDFS
Execute a Hive query
Move data from a database to a Hadoop Distributed File System using Sqoop
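As a sketch of the Hive objectives above (the database, table, and HDFS path names are invented for the example), creating a database and an external table over a tab-delimited HDFS file and then querying it might look like:

CREATE DATABASE IF NOT EXISTS moviedemo;

CREATE EXTERNAL TABLE moviedemo.movieapp_log (
custid INT,
movieid INT,
activity INT,
rating INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/oracle/moviework';

SELECT movieid, COUNT(*) FROM moviedemo.movieapp_log GROUP BY movieid;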
Programming with R
Describe the Oracle R Advanced Analytics for Hadoop connector
Use the Oracle R Advanced Analytics for Hadoop connector
Describe the architectural components of Oracle R Advanced Analytics for Hadoop
Implement an Oracle Database connection with Oracle R Enterprise
Oracle Loader for Hadoop
Explain the Oracle Loader for Hadoop
Configure the online and offline options for the Oracle Loader for Hadoop
Load Hadoop Distributed File System Data into an Oracle database
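Oracle Loader for Hadoop runs as a MapReduce job. A hedged sketch of the invocation (the configuration file name is hypothetical; the online JDBC/OCI direct-path versus offline Data Pump/delimited-text behavior is selected by the output format named inside that configuration file):

hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
-conf MyOLHConfig.xml \
-libjars $OLH_HOME/jlib/oraloader.jar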
Oracle SQL Connector for Hadoop Distributed File System (HDFS)
Configure an external table for HDFS using the Oracle SQL Connector for HDFS
Install the Oracle SQL Connector for HDFS
Describe the Oracle SQL Connector for HDFS
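The connector ships with a command-line ExternalTable tool that generates and executes the external table DDL. A sketch of the -createTable invocation (the configuration file name is hypothetical):

hadoop jar $OSCH_HOME/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable \
-conf MyOSCHConfig.xml \
-createTable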
Oracle Data Integrator (ODI) and Hadoop
Use ODI to transform data from Hive to Hive
Use ODI to move data from Hive to Oracle
Use ODI to move data from an Oracle database to a Hadoop Distributed File System using Sqoop
Configure the Oracle Data Integrator with Application Adaptor for Hadoop to interact with Hadoop
Big Data SQL
Explain how Big Data SQL is used in a Big Data Appliance/Exadata architecture
Set up and configure Oracle Big Data SQL
Demonstrate Big Data SQL syntax used in create table statements
Access NoSQL and Hadoop data using a Big Data SQL query
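A representative CREATE TABLE statement using the ORACLE_HIVE access driver (the Oracle table name and Hive source table are illustrative; ORACLE_HDFS is used instead when reading HDFS files directly rather than through a Hive table):

CREATE TABLE movielog (click VARCHAR2(4000))
ORGANIZATION EXTERNAL (
TYPE ORACLE_HIVE
DEFAULT DIRECTORY DEFAULT_DIR
ACCESS PARAMETERS (com.oracle.bigdata.tablename=default.movielog))
REJECT LIMIT UNLIMITED;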
XQuery for Hadoop Connector
Set up the Oracle XQuery for Hadoop connector
Perform a simple XQuery using Oracle XQuery for Hadoop
Use Oracle XQuery for Hadoop with Hive to map an XML file into a Hive table
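A minimal sketch of an Oracle XQuery for Hadoop query using the built-in text adapter (the file pattern and output directory are invented; treat the exact submission syntax as an assumption to verify against the OXH documentation):

import module "oxh:text";

for $line in text:collection("mydata/*.txt")
return text:put(fn:upper-case($line))

which would be submitted along the lines of:

hadoop jar $OXH_HOME/lib/oxh.jar myquery.xq -output ./myoutput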
Securing Hadoop
Describe Oracle Big Data Appliance security and encryption features
Set up Kerberos security in Hadoop
Set up the Hadoop Distributed File System to use Access Control Lists
Set up Hive and Impala access security using Apache Sentry
Use LDAP and Active Directory for Hadoop access control
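For the HDFS ACL objective, a short sketch (the user, group, and path are illustrative; dfs.namenode.acls.enabled must first be set to true in hdfs-site.xml):

# Grant a user read/execute and a group read-only access beyond the POSIX bits
hdfs dfs -setfacl -m user:analyst1:r-x /data/clinical
hdfs dfs -setfacl -m group:auditors:r-- /data/clinical

# Inspect the resulting ACL
hdfs dfs -getfacl /data/clinical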
QUESTION 1
You need to place the results of a Pig Latin script into an HDFS output directory.
What is the correct syntax in Apache Pig?
A. update hdfs set D as './output';
B. store D into './output';
C. place D into './output';
D. write D as './output';
E. hdfsstore D into './output';
Answer: B
Explanation:
Use the STORE operator to run (execute) Pig Latin statements and save (persist) results to the file system. Use STORE
for production scripts and batch mode processing.
Syntax: STORE alias INTO 'directory' [USING function];
Example: In this example data is stored using PigStorage and the asterisk character (*) as the field delimiter.
A = LOAD 'data' AS (a1:int,a2:int,a3:int);
DUMP A;
(1,2,3)
(4,2,1)
(8,3,4)
(4,3,3)
(7,2,5)
(8,4,3)
STORE A INTO 'myoutput' USING PigStorage('*');
cat myoutput;
1*2*3
4*2*1
8*3*4
4*3*3
7*2*5
8*4*3
QUESTION 2
The Hadoop NameNode is running on port #3001, the DataNode on port #4001, the KVStore agent on port #5001, and
the replication node on port #6001. All the services are running on localhost.
What is the valid syntax to create an external table in Hive and query data from the NoSQL Database?
A. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvscore", "oracle.kv.hosts"="localhost:3001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");
B. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:5001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");
C. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:4001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");
D. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:6001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");
Answer: C
Explanation:
The following is the basic syntax of a Hive CREATE TABLE statement for a Hive external table over an Oracle NoSQL table:
CREATE EXTERNAL TABLE tablename (colname coltype[, colname coltype,...])
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES (
"oracle.kv.kvstore" = "database",
"oracle.kv.hosts" = "nosql_node1:port[, nosql_node2:port...]",
"oracle.kv.hadoop.hosts" = "hadoop_node1[,hadoop_node2...]",
"oracle.kv.tableName" = "table_name");
Here oracle.kv.hosts is a comma-delimited list of host names and port numbers in the Oracle NoSQL Database cluster; each string has the format hostname:port. Enter multiple names to provide redundancy in the event that a host fails.
QUESTION 3
You need to create an architecture for your customer’s Oracle NoSQL KVStore. The customer needs to store clinical and non-clinical data together but only the clinical data is mission critical.
How can both types of data exist in the same KVStore?
A. Store the clinical data on the master node and the non-clinical data on the replica nodes.
B. Store the two types of data in separate partitions on highly available storage.
C. Store the two types of data in two separate KVStore units and create database aliases to mimic one KVStore.
D. Store the two types of data with differing consistency policies.
Answer: B
Explanation:
The KVStore is a collection of Storage Nodes which host a set of Replication Nodes. Data is spread across the Replication Nodes.
Each shard contains one or more partitions. Key-value pairs in the store are organized according to the key. Keys, in turn,
are assigned to a partition. Once a key is placed in a partition, it cannot be moved to a different partition. Oracle NoSQL
Database automatically assigns keys evenly across all the available partitions.
Note: At a very high level, a Replication Node can be thought of as a single database which contains key-value pairs.
Replication Nodes are organized into shards. A shard contains a single Replication Node which is responsible for
performing database writes, and which copies those writes to the other Replication Nodes in the shard.
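A minimal Java sketch of both data types coexisting in one store, assuming a store named kvstore reachable on localhost:5000 (the key paths and values are made up). The major key path determines the partition, and therefore the shard, that each record is hashed to:

import java.util.Arrays;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class MixedDataDemo {
    public static void main(String[] args) {
        // Connect to the (hypothetical) store
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        // Clinical and non-clinical records share the same store;
        // the major key path routes each record to its partition
        Key clinical = Key.createKey(Arrays.asList("clinical", "patient42"));
        Key billing = Key.createKey(Arrays.asList("billing", "invoice7"));

        store.put(clinical, Value.createValue("bp=120/80".getBytes()));
        store.put(billing, Value.createValue("amount=150".getBytes()));

        ValueVersion vv = store.get(clinical);
        System.out.println(new String(vv.getValue().getValue()));
        store.close();
    }
}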
QUESTION 4
Your customer is spending a lot of money on archiving data to comply with government regulations to retain data for 10 years.
How should you reduce your customer’s archival costs?
A. Denormalize the data.
B. Offload the data into Hadoop.
C. Use Oracle Data Integrator to improve performance.
D. Move the data into the warehousing database.
Answer: B
Explanation:
Extend Information Lifecycle Management to Hadoop
For many years, Oracle Database has provided rich support for Information Lifecycle Management (ILM). Numerous capabilities are available for data tiering, that is, storing data in different media based on access requirements and storage cost considerations. These tiers may scale from:
1) in-memory for real-time data analysis,
2) Database Flash for frequently accessed data,
3) Database Storage and Exadata Cells for queries of operational data, and
4) Hadoop for infrequently accessed raw and archive data.
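For example, aged rows could be offloaded from the database into HDFS with Sqoop (the connection string, credentials, table, and target directory are hypothetical):

sqoop import \
--connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
--username archive_user -P \
--table SALES_HISTORY \
--target-dir /archive/sales_history \
--as-parquetfile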
QUESTION 5
Your customer keeps getting an error when writing a key/value pair to a NoSQL replica.
What is causing the error?
A. The master may be in read-only mode and, as a result, writes to replicas are not being allowed.
B. The replica may be out of sync with the master and is not able to maintain consistency.
C. The writes must be done to the master.
D. The replica is in read-only mode.
E. The data file for the replica is corrupt.
Answer: C
Explanation:
Replication Nodes are organized into shards. A shard contains a single Replication Node which is responsible for
performing database writes, and which copies those writes to the other Replication Nodes in the shard. This is called the
master node. All other Replication Nodes in the shard are used to service read-only operations.
Note: Oracle NoSQL Database provides multi-terabyte distributed key/value pair storage that offers scalable throughput
and performance. That is, it services network requests to store and retrieve data which is organized into key-value pairs.
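A short Java sketch of this division of labor, again assuming a store named kvstore on localhost:5000: the put is always routed to the shard's master, while the get may be served by a replica when the consistency policy allows it:

import java.util.concurrent.TimeUnit;
import oracle.kv.Consistency;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class MasterWriteDemo {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        // Writes always go to the shard's master node
        Key key = Key.createKey("greeting");
        store.put(key, Value.createValue("hello".getBytes()));

        // Reads may be served by a replica when absolute
        // consistency is not required
        ValueVersion vv = store.get(
            key, Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
        System.out.println(new String(vv.getValue().getValue()));
        store.close();
    }
}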