Dumpszone Cloudera CCA175 Dumps | CCA175 VCE

[April 2018] Dumpszone Cloudera CCA175 Dumps | CCA175 VCE - Free Try

New Updated CCA175 Exam Questions from Dumpszone CCA175 PDF dumps! Welcome to download the newest Dumpszone CCA175 VCE dumps: https://www.dumpszone.com/CCA175-braindumps.html

Keywords: CCA175 exam dumps, CCA175 exam questions, CCA175 VCE, CCA175 VCE Dumps, CCA175 PDF dumps, CCA175 PDF Dumps Questions, CCA175 questions answers, CCA175 practice test, CCA175 study guide, CCA175 braindumps, CCA175 – CCA Spark and Hadoop Developer Exam - Performance Based Scenarios Exam

P.S. Free CCA175 VCE dumps download from direct PDF Link: http://www.dumpszone.com/top/demo/Cloudera/CCA175.pdf


QUESTION NO: 1
Problem Scenario 1:

You have been given MySQL DB with following details.



user=retail_dba

password=cloudera

database=retail_db

table=retail_db.categories

jdbc URL = jdbc:mysql://quickstart:3306/retail_db



Please accomplish the following activities.



1. Connect to the MySQL DB and check the content of the tables.

2. Copy the "retail_db.categories" table to HDFS, without specifying a directory name.

3. Copy the "retail_db.categories" table to HDFS, in a directory named "categories_target".

4. Copy the "retail_db.categories" table to HDFS, in a warehouse directory named "categories_warehouse".





Answer: See the explanation for Step by Step Solution and configuration.

Explanation:

Solution :



Step 1 : Connect to the existing MySQL database: mysql --user=retail_dba --password=cloudera retail_db



Step 2 : Show all the available tables: show tables;



Step 3 : View/count data from a table in MySQL: select count(1) from categories;



Step 4 : Check the currently available data in the HDFS home directory: hdfs dfs -ls



Step 5 : Import Single table (Without specifying directory).

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories

Note : Make sure there is no space before or after the '=' sign. Sqoop uses the MapReduce framework to copy data from the RDBMS to HDFS.



Step 6 : Read the data from one of the partition files created by the above command: hdfs dfs -cat categories/part-m-00000

Step 7 : Specify the target directory in the import command (we are using number of mappers = 1; you can change it accordingly): sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --target-dir=categories_target -m 1

Step 8 : Check the content in one of the partition file.

 hdfs dfs -cat categories_target/part-m-00000



Step 9 : Specify a parent directory so that you can copy more than one table into a single specified target directory. Command to specify the warehouse directory:

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_warehouse -m 1











QUESTION NO: 2

Problem Scenario 2 :



There is a parent organization called "ABC Group Inc", which has two child companies named Tech Inc and MPTech.

Both companies' employee information is given in two separate text files, as shown below. Please perform the following activities on the employee details.



Tech Inc.txt

1,Alok,Hyderabad

2,Krish,Hongkong

3,Jyoti,Mumbai

4,Atul,Banglore

5,Ishan,Gurgaon



MPTech.txt

6,John,Newyork

7,alp2004,California

8,tellme,Mumbai

9,Gagan21,Pune

10,Mukesh,Chennai

1. Which command will you use to check all the available command-line options on HDFS, and how will you get help for an individual command?

2. Create a new empty directory named Employee using the command line, and also create an empty file named Techinc.txt in it.

3. Load both companies' employee data into the Employee directory (overriding the existing file in HDFS).

4. Merge both employees' data into a single file called MergedEmployee.txt; the merged file should have a newline character at the end of each file's content.

5. Upload the merged file to HDFS and change the file permissions on the HDFS merged file, so that the owner and group members can read and write, and other users can read the file.

6. Write a command to export an individual file as well as the entire directory from HDFS to the local file system.





Answer: See the explanation for Step by Step Solution and configuration.

Explanation:

Solution :



Step 1 : Check all available commands: hdfs dfs



Step 2 : Get help on an individual command: hdfs dfs -help get



Step 3 : Create a directory named Employee in HDFS: hdfs dfs -mkdir Employee

Now create an empty file named Techinc.txt in the Employee directory using Hue.



Step 4 : Create a directory on the local file system, then create the two files with the data given in the problem.



Step 5 : We now have an existing directory with content in it. Using the HDFS command line, override the existing Employee directory while copying these files from the local file system to HDFS:

cd /home/cloudera/Desktop/
hdfs dfs -put -f Employee



Step 6 : Check that all files in the directory were copied successfully: hdfs dfs -ls Employee



Step 7 : Now merge all the files in the Employee directory: hdfs dfs -getmerge -nl Employee MergedEmployee.txt
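For intuition, getmerge with -nl concatenates the directory's files and appends a newline after each one. The behavior can be emulated on the local file system (the /tmp paths and sample rows below are purely illustrative):

```shell
# Local emulation of what `hdfs dfs -getmerge -nl Employee MergedEmployee.txt`
# produces: concatenate each file in the directory and append a newline
# after every file's content.
mkdir -p /tmp/employee_demo
printf '1,Alok,Hyderabad' > /tmp/employee_demo/TechInc.txt   # no trailing newline
printf '6,John,Newyork'   > /tmp/employee_demo/MPTech.txt
out=/tmp/MergedEmployee.txt
: > "$out"                          # start with an empty output file
for f in /tmp/employee_demo/*.txt; do
  cat "$f" >> "$out"
  printf '\n' >> "$out"             # this is the newline that -nl adds per file
done
cat "$out"
```

Without -nl, the two rows would run together on one line because neither source file ends with a newline; with it, each record ends up on its own line.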



Step 8 : Check the content of the file: cat MergedEmployee.txt



Step 9 : Copy the merged file into the Employee directory from the local file system to HDFS: hdfs dfs -put MergedEmployee.txt Employee/



Step 10 : Check whether the file was copied: hdfs dfs -ls Employee



Step 11 : Change the permissions of the merged file on HDFS: hdfs dfs -chmod 664 Employee/MergedEmployee.txt
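The octal mode 664 has the same meaning on HDFS as on a local POSIX file system: owner rw-, group rw-, others r--, which is exactly what the task asks for. A quick local illustration (GNU coreutils `stat` assumed; the /tmp path is just for the demo):

```shell
# Mode 664 = rw- (owner) rw- (group) r-- (others): owner and group members
# can read and write, other users can only read. Demonstrated on a local
# file; `hdfs dfs -chmod 664` applies the same octal semantics on HDFS.
f=/tmp/perm_demo.txt
touch "$f"
chmod 664 "$f"
stat -c '%a %A' "$f"    # GNU stat: prints "664 -rw-rw-r--"
```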



Step 12 : Get the file from HDFS to the local file system: hdfs dfs -get Employee Employee_hdfs





QUESTION NO: 3

Problem Scenario 3: You have been given MySQL DB with following details.



user=retail_dba

password=cloudera

database=retail_db

table=retail_db.categories

jdbc URL = jdbc:mysql://quickstart:3306/retail_db



Please accomplish the following activities.



1. Import data from the categories table where category_id=22 (data should be stored in categories_subset).

2. Import data from the categories table where category_id>22 (data should be stored in categories_subset_2).

3. Import data from the categories table where category_id is between 1 and 22 (data should be stored in categories_subset_3).

4. While importing categories data, change the delimiter to '|' (data should be stored in categories_subset_6).

5. Import data from the categories table, restricting the import to the category_name and category_id columns only, with '|' as the delimiter.

6. Add null values to the table using the SQL statements below: ALTER TABLE categories modify category_department_id int(11); INSERT INTO categories values (60, NULL, 'TESTING');

7. Import data from the categories table (into the categories_subset_17 directory) using the '|' delimiter, with category_id between 1 and 61, and encode null values for both string and non-string columns.

8. Import the entire retail_db schema into a directory named categories_subset_all_tables.





Answer: See the explanation for Step by Step Solution and configuration.

Explanation:

Solution:



Step 1 : Import a single table (subset data). Note: here the ` is the backquote character, found on the ~ key.

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_subset --where "\`category_id\`=22" -m 1
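A side note on the quoting: inside double quotes the shell treats \` as a literal backtick, so Sqoop receives the column name wrapped in backquotes instead of the shell attempting command substitution. A minimal check (the variable name is just for illustration):

```shell
# Inside double quotes, \` is a literal backtick; unescaped, the shell
# would treat `category_id` as a command substitution. The --where
# argument above therefore reaches Sqoop as: `category_id`=22
where="\`category_id\`=22"
echo "$where"
```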





Step 2 : Check the output partition

hdfs dfs -cat categories_subset/categories/part-m-00000



Step 3 : Change the selection criteria (Subset data)

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_subset_2 --where "\`category_id\` > 22" -m 1





Step 4 : Check the output partition

hdfs dfs -cat categories_subset_2/categories/part-m-00000



Step 5 : Use between clause (Subset data)

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_subset_3 --where "\`category_id\` between 1 and 22" -m 1





Step 6 : Check the output partition

hdfs dfs -cat categories_subset_3/categories/part-m-00000



Step 7 : Changing the delimiter during import.

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_subset_6 --where "\`category_id\` between 1 and 22" --fields-terminated-by='|' -m 1





Step 8 : Check the output partition

hdfs dfs -cat categories_subset_6/categories/part-m-00000



Step 9 : Selecting subset columns

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_subset_col --where "\`category_id\` between 1 and 22" --fields-terminated-by='|' --columns=category_name,category_id -m 1





Step 10 : Check the output partition

hdfs dfs -cat categories_subset_col/categories/part-m-00000



Step 11 : Insert a record with null values (using mysql): ALTER TABLE categories modify category_department_id int(11); INSERT INTO categories values (60, NULL, 'TESTING'); select * from categories;



Step 12 : Encode null values for both string and non-string columns

sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --warehouse-dir=categories_subset_17 --where "\`category_id\` between 1 and 61" --fields-terminated-by='|' --null-string='N' --null-non-string='N' -m 1





Step 13 : View the content

hdfs dfs -cat categories_subset_17/categories/part-m-00000



Step 14 : Import all the tables from a schema (this step will take a little time)

sqoop import-all-tables --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --warehouse-dir=categories_subset_all_tables



Step 15 : View the contents

hdfs dfs -ls categories_subset_all_tables



Step 16 : Cleanup or back to originals.

delete from categories where category_id in (59,60);

ALTER TABLE categories modify category_department_id int(11) NOT NULL;

ALTER TABLE categories modify category_name varchar(45) NOT NULL;

desc categories;





QUESTION NO: 4

Problem Scenario 4: You have been given MySQL DB with following details.



user=retail_dba

password=cloudera

database=retail_db

table=retail_db.categories

jdbc URL = jdbc:mysql://quickstart:3306/retail_db



Please accomplish the following activities.



Import the single table categories (subset data) into a Hive managed table, where category_id is between 1 and 22.





Answer: See the explanation for Step by Step Solution and configuration.

Explanation:
Solution :

Step 1 : Import Single table (Subset data)
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=categories --where "\`category_id\` between 1 and 22" --hive-import -m 1
Note: here the ` is the backquote character, found on the ~ key.
This command will create a managed table and content will be created in the following directory.

/user/hive/warehouse/categories
Step 2 : Check whether table is created or not (In Hive)
show tables;
select * from categories;



