1.What is
BIG DATA?
Big Data is an assortment of data so huge and complex that it becomes very
difficult to capture, store, process, retrieve and analyze it with the help of
on-hand database management tools or traditional data processing techniques.
2.Can you
give some examples of Big Data?
There
are many real life examples of Big Data! Facebook is generating 500+ terabytes
of data per day, NYSE (New York Stock Exchange) generates about 1 terabyte of
new trade data per day, and a jet airliner collects 10 terabytes of sensor data for
every 30 minutes of flying time. All these are day-to-day examples of Big Data!
3.Can you
give a detailed overview about the Big Data being generated by Facebook?
As
of December 31, 2012, there were 1.06 billion monthly active users on Facebook
and 680 million mobile users. On average, 3.2 billion likes and comments are
posted every day on Facebook, and 72% of the web audience is on Facebook. And why not!
There are so many activities going on Facebook, from wall posts, sharing images and
videos, to writing comments and liking posts. In fact, Facebook started
using Hadoop in mid-2009 and was one of its earliest users.
4.According
to IBM, what are the three characteristics of Big Data?
According
to IBM, the three characteristics of Big Data are:
Volume: for example, Facebook generates 500+ terabytes of data per day.
Velocity: Analyzing 2 million records each day to identify the reason for losses.
Variety: images, audio, video, sensor data, log files, etc.
5.How Big
is ‘Big Data’?
With
time, data volume is growing exponentially. Earlier we used to talk about
megabytes or gigabytes. But the time has arrived when we talk about data volume in
terms of terabytes, petabytes and even zettabytes! Global data volume was
around 1.8 ZB in 2011 and is expected to be 7.9 ZB in 2015. It is also known that
global information doubles every two years!
6.How is the
analysis of Big Data useful for organizations?
Effective
analysis of Big Data provides a lot of business advantage, as organizations
learn which areas to focus on and which areas are less important. Big data
analysis provides some early key indicators that can prevent a company from a
huge loss or help in grasping a great opportunity with open arms! A precise
analysis of Big Data helps in decision making! For instance, nowadays people
rely heavily on Facebook and Twitter before buying any product or service. All
thanks to the Big Data explosion.
7.Who are
‘Data Scientists’?
Data
scientists are fast replacing business analysts or data analysts. Data
scientists are experts who find solutions to analyze data. Just as we have web
analysts, we have data scientists, who have the business insight to handle a
business challenge. Sharp data scientists are not only involved in dealing with
business problems, but also in choosing the relevant issues that can bring
value-addition to the organization.
8.What is
Hadoop?
Hadoop
is a framework that allows for distributed processing of large data sets across
clusters of commodity computers using a simple programming model.
9.Why the
name ‘Hadoop’?
The name 'Hadoop' is not an acronym and has no expanded form. The charming yellow elephant
you see is basically named after Doug Cutting's son's toy elephant!
10.Why do we
need Hadoop?
Every day
a large amount of unstructured data is getting dumped into our machines. The
major challenge is not to store large data sets in our systems, but to retrieve
and analyze the big data in the organizations, especially data present in
different machines at different locations. In this situation a necessity for
Hadoop arises. Hadoop has the ability to analyze the data present in different
machines at different locations very quickly and in a very cost-effective way.
It uses the concept of MapReduce, which enables it to divide a query into
small parts and process them in parallel. This is also known as parallel
computing.
11.What are
some of the characteristics of Hadoop framework?
The Hadoop
framework is written in Java. It is designed to solve problems that involve
analyzing large data (e.g. petabytes). The programming model is based on
Google's MapReduce and the storage infrastructure is based on Google's GFS
(Google File System). Hadoop handles large files/data throughput and
supports data-intensive distributed applications. Hadoop is scalable, as more
nodes can be easily added to it.
12.Give a
brief overview of Hadoop history.
In
2002, Doug Cutting created an open-source web crawler project (Nutch).
In 2004, Google published the MapReduce and GFS papers.
In 2006, Doug Cutting developed the open-source MapReduce and HDFS project.
In 2008, Yahoo ran a 4,000-node Hadoop cluster and Hadoop won the terabyte sort benchmark.
In 2009, Facebook launched SQL support for Hadoop (Hive).
13.Give
examples of some companies that are using Hadoop structure?
A
lot of companies are using the Hadoop structure such as Cloudera, EMC, MapR,
Hortonworks, Amazon, Facebook, eBay, Twitter, Google and so on.
14.What is
the basic difference between traditional RDBMS and Hadoop?
A traditional RDBMS is
used for transactional systems to report and archive the data, whereas Hadoop is an approach to store a huge amount of data in
a distributed file system and process it. An RDBMS is useful when you want
to seek one record from Big Data, whereas Hadoop is useful when you want to take
Big Data in one shot and perform analysis on it later.
15.What is
structured and unstructured data?
Structured data is the data that is easily identifiable as
it is organized in a structure. The most common form of structured data is a
database where specific information is stored in tables, that
is, rows and columns. Unstructured data refers to any data that
cannot be identified easily. It could be in the form of images, videos,
documents, email, logs and random text. It is not in the form of rows and
columns.
16.What are
the core components of Hadoop?
Core
components of Hadoop are HDFS and MapReduce. HDFS is basically used to store
large data sets and MapReduce is used to process such large data sets.
17.What is
HDFS?
HDFS
is a file system designed for storing very large files with streaming data
access patterns, running on clusters of commodity hardware.
18.What are the
key features of HDFS?
HDFS
is highly fault-tolerant, provides high throughput, is suitable for applications with
large data sets, offers streaming access to file system data, and can be built out of
commodity hardware.
19.What is
Fault Tolerance?
Suppose
you have a file stored in a system, and due to some technical problem that file
gets destroyed. Then there is no chance of getting back the data present in
that file. To avoid such situations, Hadoop has introduced the feature of fault
tolerance in HDFS. In Hadoop, when we store a file, it automatically gets
replicated at two other locations as well. So even if one or two of the systems
collapse, the file is still available on the third system.
20.Replication
causes data redundancy, then why is it pursued in HDFS?
HDFS
works with commodity hardware (systems with average configurations) that has
a high chance of crashing at any time. Thus, to make the entire system
highly fault-tolerant, HDFS replicates and stores data in different places. Any
data on HDFS gets stored at at least 3 different locations. So, even if one of
them is corrupted and another is unavailable for some time for any reason,
the data can still be accessed from the third one. Hence, there is no chance of
losing the data. This replication factor helps us attain the feature of
Hadoop called fault tolerance.
21.Since the
data is replicated thrice in HDFS, does it mean that any calculation done on
one node will also be replicated on the other two?
Since
there are 3 replicas, when we send the MapReduce programs, calculations will be
done only on the original data. The master node will know which node exactly
holds that particular data. In case one of the nodes is not
responding, it is assumed to have failed. Only then will the required calculation
be done on the second replica.
22.What is
throughput? How does HDFS get a good throughput?
Throughput is the amount of
work done in a unit time. It describes how fast the data is getting accessed
from the system and it is usually used to measure performance of the system. In
HDFS, when we want to perform a task or an action, then the work is divided and
shared among different systems. So all the systems will be executing the
tasks assigned to them independently and in parallel. So the work will be
completed in a very short period of time. In this way, the HDFS gives good
throughput. By reading data in parallel, we decrease the actual time to read
data tremendously.
23.What is
streaming access?
As HDFS works on the principle of ‘Write Once, Read Many‘,
the feature of streaming access is extremely important in HDFS. HDFS
focuses not so much on storing the data but how to retrieve it at the
fastest possible speed, especially while analyzing logs. In HDFS, reading the
complete data is more important than the time taken to fetch a single record
from the data.
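For illustration, here is a minimal sketch (not part of the original answer) of a sequential, streaming read from HDFS using the Java FileSystem API; the path /logs/access.log is purely hypothetical.

```java
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsStreamRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        InputStream in = null;
        try {
            // Open the file and stream its entire contents sequentially,
            // which matches the 'write once, read many' access pattern.
            in = fs.open(new Path("/logs/access.log"));   // hypothetical path
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
```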
24.What is a
commodity hardware? Does commodity hardware include RAM?
Commodity
hardware is an inexpensive system which is not of high quality or
high availability. Hadoop can be installed on any average commodity hardware.
We don't need supercomputers or high-end hardware to work on Hadoop. Yes,
commodity hardware includes RAM, because there will be some services
running in RAM.
25.What is a
Namenode?
Namenode
is the master node on which job tracker runs and consists of the metadata. It
maintains and manages the blocks which are present on the datanodes. It is a
high-availability machine and single point of failure in HDFS.
26.Is
Namenode also a commodity?
No.
The Namenode can never be commodity hardware because the entire
HDFS relies on it. It is the single point of failure in HDFS. The Namenode has to be
a high-availability machine.
27.What is
metadata?
Metadata
is the information about the data stored in datanodes such as location of the
file, size of the file and so on.
28.What is a
Datanode?
Datanodes
are the slaves which are deployed on each machine and provide the actual
storage. These are responsible for serving read and write requests for the
clients.
29.Why do we
use HDFS for applications having large data sets and not when there are lot of
small files?
HDFS
is more suitable for a large amount of data in a single file than for
small amounts of data spread across multiple files. This is because the Namenode is
a very expensive, high-performance system, so it is not prudent to fill up
space on the Namenode with the unnecessary amount of metadata generated for
multiple small files. When there is a large amount of data in a single
file, the Namenode occupies less space. Hence, for optimized
performance, HDFS supports large data sets instead of multiple small files.
30.What is a
daemon?
A daemon
is a process or service that runs in the background. In general, we use this word
in the UNIX environment. The equivalent of a daemon in Windows is a "service" and in
DOS it is a "TSR".
31.What is a
job tracker?
The job
tracker is a daemon that runs on the namenode for submitting and tracking
MapReduce jobs in Hadoop. It assigns the tasks to the different task
trackers. In a Hadoop cluster, there will be only one job tracker but many
task trackers. It is the single point of failure for Hadoop and the MapReduce
service. If the job tracker goes down, all the running jobs are halted. It
receives heartbeats from the task trackers, based on which the job tracker decides whether
the assigned task is completed or not.
32.What is a
task tracker?
Task
tracker is also a daemon that runs on datanodes. Task Trackers manage the
execution of individual tasks on slave node. When a client submits a job, the
job tracker will initialize the job and divide the work and assign them to
different task trackers to perform MapReduce tasks. While performing this
action, the task tracker will be simultaneously communicating with job tracker
by sending heartbeat. If the job tracker does not receive heartbeat from task
tracker within specified time, then it will assume that task tracker has
crashed and assign that task to another task tracker in the cluster.
33.Is
the Namenode machine the same as the datanode machine in terms of hardware?
It
depends upon the cluster you are trying to create. The Hadoop VM can be there
on the same machine or on another machine. For instance, in a single node
cluster, there is only one machine, whereas in the development or in a testing
environment, Namenode and datanodes are on different machines.
34.What is a
heartbeat in HDFS?
A
heartbeat is a signal indicating that a node is alive. A datanode sends heartbeats
to the Namenode and a task tracker sends its heartbeats to the job tracker. If the
Namenode or job tracker does not receive a heartbeat, then it will decide that
there is some problem in the datanode, or that the task tracker is unable to perform the
assigned task.
35.Are
Namenode and job tracker on the same host?
No,
in practical environment, Namenode is on a separate host and job tracker
is on a separate host.
36.What is a
‘block’ in HDFS?
A
'block' is the minimum amount of data that can be read or written. In HDFS, the
default block size is 64 MB, in contrast to a block size of 8192 bytes in
Unix/Linux. Files in HDFS are broken down into block-sized chunks, which are
stored as independent units. HDFS blocks are large compared to disk blocks,
particularly to minimize the cost of seeks.
37.If a particular file is 50 MB, will the HDFS block still consume
64 MB as the default size?
No,
not at all! 64 MB is just the unit in which the data is stored. In this
particular situation, only 50 MB will be consumed by the HDFS block and 14 MB
will be free to store something else. It is the MasterNode that does data
allocation in an efficient manner.
38.What are
the benefits of block transfer?
A
file can be larger than any single disk in the network. There’s nothing that
requires the blocks from a file to be stored on the same disk, so they can take
advantage of any of the disks in the cluster. Making the unit of
abstraction a block rather than a file simplifies the storage
subsystem. Blocks provide fault tolerance and availability. To insure
against corrupted blocks and disk and machine failure, each block is replicated
to a small number of physically separate machines (typically three). If a block
becomes unavailable, a copy can be read from another location in a way that is
transparent to the client.
39.If we
want to copy 10 blocks from one machine to another, but another machine can
copy only 8.5 blocks, can the blocks be broken at the time of replication?
In
HDFS, blocks cannot be broken down. Before copying the blocks from one
machine to another, the master node will figure out what the actual amount
of space required is, how many blocks are being used, how much space is available,
and it will allocate the blocks accordingly.
40.How is
indexing done in HDFS?
Hadoop
has its own way of indexing. Depending upon the block size, once the data is
stored, HDFS keeps storing the last part of the data, which says
where the next part of the data is. In fact, this is the basis of HDFS.
41.If a
datanode is full, how is it identified?
When
data is stored in a datanode, the metadata of that data is stored in
the Namenode. So the Namenode will identify whether the datanode is full.
42.If
datanodes increase, then do we need to upgrade Namenode?
While
installing the Hadoop system, the Namenode is determined based on the size of the
cluster. Most of the time, we do not need to upgrade the Namenode because it
does not store the actual data, but just the metadata, so such a requirement
rarely arises.
43.Are job
tracker and task trackers present in separate machines?
Yes,
job tracker and task tracker are present in different machines. The reason is
job tracker is a single point of failure for the Hadoop MapReduce service. If
it goes down, all running jobs are halted.
44.When we
send a data to a node, do we allow settling in time, before sending another
data to that node?
Yes,
we do.
45.Does
hadoop always require digital data to process?
Yes.
Hadoop always requires digital data to be processed.
46.On what
basis will the Namenode decide which datanode to write to?
As
the Namenode has the metadata (information) related to all the data nodes, it
knows which datanode is free.
47.Doesn’t
Google have its very own version of DFS?
Yes, Google owns a DFS known as “Google File System (GFS)”
developed by Google Inc. for its own use.
48.Who is a ‘user’ in HDFS?
A
user is like you or me, who has some query or who needs some kind of data.
49.Is client
the end user in HDFS?
No,
Client is an application which runs on your machine, which is used to interact
with the Namenode (job tracker) or datanode (task tracker).
50.What is
the communication channel between client and namenode/datanode?
The
mode of communication is SSH.
51.What is a
rack?
A rack
is a physical collection of datanodes stored at a single location; it is a storage
area with all the datanodes put together. There can be multiple racks in
a single location.
52.On what
basis data will be stored on a rack?
When the client is ready to load a file into the cluster, the
content of the file will be divided into blocks. Now the client consults the
Namenode and gets 3 datanodes for every block of the file which indicates where
the block should be stored. While placing the datanodes, the key rule followed
is “for every block of data, two copies will exist in one rack, third
copy in a different rack“. This rule is known as “Replica Placement Policy“.
53.Do we
need to place 2nd and 3rd data in rack 2 only?
Yes,
this is to avoid datanode failure.
54.What if
rack 2 and datanode fails?
If
both rack 2 and the datanode present in rack 1 fail, then there is no chance of
getting the data back. In order to avoid such situations, we need to replicate
that data more times instead of replicating it only thrice. This can be
done by changing the value of the replication factor, which is set to 3 by default.
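As a rough sketch (assuming the classic dfs.replication property and a hypothetical file path), the replication factor can be changed either through the configuration or per file through the Java API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Raise the default replication factor from 3 to 5 for files
        // created through this configuration.
        conf.setInt("dfs.replication", 5);

        FileSystem fs = FileSystem.get(conf);
        // The replication factor of an existing file can also be changed.
        fs.setReplication(new Path("/user/data/sample.txt"), (short) 5);   // hypothetical path
    }
}
```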
55.What is a
Secondary Namenode? Is it a substitute for the Namenode?
The
secondary Namenode constantly reads the data from the RAM of the Namenode and
writes it into the hard disk or the file system. It is not a substitute for the
Namenode, so if the Namenode fails, the entire Hadoop system goes down.
56.What is
the difference between Gen1 and Gen2 Hadoop with regards to the Namenode?
In
Gen 1 Hadoop, Namenode is the single point of failure. In Gen 2 Hadoop, we have
what is known as Active and Passive Namenodes kind of a structure. If the
active Namenode fails, passive Namenode takes over the charge.
57.What is
MapReduce?
MapReduce is the 'heart' of Hadoop. It
consists of two parts, 'map' and 'reduce', which are programs for
processing data. 'Map' processes the data first to give some intermediate
output, which is further processed by 'Reduce' to generate the final output.
Thus, MapReduce allows for distributed processing of the map and reduce
operations.
58.Can you
explain how do ‘map’ and ‘reduce’ work?
The Namenode
takes the input, divides it into parts and assigns them to data nodes. These
datanodes process the tasks assigned to them, produce key-value pairs and
return the intermediate output to the reducer. The reducer collects the key-value
pairs from all the datanodes, combines them and generates the final
output.
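To make the flow concrete, here is a minimal word-count sketch (not part of the original answer) written against the classic org.apache.hadoop.mapred API; the mapper emits intermediate (word, 1) pairs and the reducer sums them into the final output.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {

    // 'Map': reads one line at a time and emits (word, 1) pairs.
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue;
                word.set(token);
                output.collect(word, ONE);   // intermediate key-value pair
            }
        }
    }

    // 'Reduce': receives all the values for one key and sums them up.
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));   // final output
        }
    }
}
```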
59.What is
‘Key value pair’ in HDFS?
Key value pair is
the intermediate data generated by maps and sent to reduces for generating the
final output.
60.What is
the difference between MapReduce engine and HDFS cluster?
HDFS
cluster is the name given to the whole configuration of master and slaves where
data is stored. Map Reduce Engine is the programming module which is used to
retrieve and analyze data.
61.Is map
like a pointer?
No,
Map is not like a pointer.
62.Do we
require two servers for the Namenode and the datanodes?
Yes,
we need two different servers for the Namenode and the datanodes. This is
because the Namenode requires a highly configured system, as it stores information
about the location details of all the files stored in different datanodes, whereas
the datanodes require a low-configuration system.
63.Why are
the number of splits equal to the number of maps?
The
number of maps is equal to the number of input splits because we want the key
and value pairs of all the input splits.
64.Is a job
split into maps?
No,
a job is not split into maps. A split is created for the file. The file is placed
on datanodes in blocks. For each split, a map is needed.
65.Which are
the two types of ‘writes’ in HDFS?
There are two types of writes in HDFS: posted and non-posted write. Posted Write is when we write
it and forget about it, without worrying about the acknowledgement. It is
similar to our traditional Indian post. In a Non-posted Write, we wait for
the acknowledgement. It is similar to the today’s courier services. Naturally,
non-posted write is more expensive than the posted write. It is much more
expensive, though both writes are asynchronous.
66.Why ‘Reading‘ is done in parallel and ‘Writing‘ is not in HDFS?
Reading is done in
parallel because by doing so we can access the data fast. But we do not perform
the write operation in parallel. The reason is that if we
perform the write operation in parallel, then it might result in
data inconsistency. For example, you have a file and two nodes are trying to
write data into the file in parallel, then the first node does not know what
the second node has written and vice-versa. So, this makes it confusing which
data to be stored and accessed.
67.Can
Hadoop be compared to NOSQL database like Cassandra?
Though
NoSQL is the closest technology that can be compared to Hadoop, it
has its own pros and cons. There is no DFS in NoSQL. Hadoop is not a database;
it is a filesystem (HDFS) plus a distributed programming framework (MapReduce).
Hadoop MapReduce Questions
68.What is
MapReduce?
It
is a framework or a programming model that is used for processing large data
sets over clusters of computers using distributed programming.
69.What are
‘maps’ and ‘reduces’?
'Maps' and 'Reduces' are two phases of solving a query in HDFS.
'Map' is responsible for reading data from the input location and, based on the input
type, generating a key-value pair, that is, an intermediate output on the local
machine. 'Reduce' is
responsible for processing the intermediate output received from the
mapper and generating the final output.
70.What are
the four basic parameters of a mapper?
The four basic parameters of a mapper are LongWritable, Text, Text
and IntWritable. The first two
represent the input parameters and the second two represent the intermediate output
parameters.
71.What are the four basic parameters of a reducer?
The four basic parameters of a reducer are Text, IntWritable, Text and
IntWritable. The first two
represent the intermediate output parameters and the second two represent the final
output parameters.
72.What do the master class and the output class do?
Master
is defined to update the Master or the job tracker and the output class is
defined to write data onto the output location.
73.What
is the input type/format in MapReduce by default?
By
default, the input type in MapReduce is 'text'.
74.Is it
mandatory to set input and output type/format in MapReduce?
No,
it is not mandatory to set the input and output type/format in MapReduce. By
default, the cluster takes the input and the output type as ‘text’.
75.What does
the text input format do?
In text input format, each line of the file becomes a record. The key is
the byte offset of the line within the file and the value is
the content of the whole line. This is how the data gets processed by a
mapper: the mapper receives the 'key' as a 'LongWritable' parameter and the value as a 'Text' parameter.
76.What does
job conf class do?
MapReduce needs to logically
separate different jobs running on the same cluster. The 'JobConf class' helps with job-level settings, such
as declaring a job in the real environment. It is recommended
that the job name should be descriptive and represent the type of job that is
being executed.
77.What
does conf.setMapper Class do?
conf.setMapperClass sets the mapper class and everything related to the map job, such
as reading the data and generating a key-value pair out of the mapper.
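A hedged sketch of a driver for the word-count example given earlier (the WordCount.Map and WordCount.Reduce names come from that sketch); it shows the job-level settings done through JobConf, including setJobName and setMapperClass:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");                  // descriptive job name

        conf.setMapperClass(WordCount.Map.class);      // conf.setMapperClass(...)
        conf.setReducerClass(WordCount.Reduce.class);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```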
78.What do
sorting and shuffling do?
Sorting and shuffling are responsible for creating a unique key
and a list of values. Making similar keys at one location is known
as Sorting. And the process
by which the intermediate output of the mapper is sorted and sent across to the
reducers is known as Shuffling.
79.What
does a split do?
Before transferring the data from the hard disk location to the map
method, there is a phase or method called the 'Split Method'. The split method pulls a block of data from
HDFS to the framework. The Split class does not write anything, but reads data
from the block and passes it to the mapper. By default, the split is taken care of
by the framework. The split size is by default equal to the block size and is used to divide
a block into a bunch of splits.
How
can we change the split size if our commodity hardware has less storage space?
If our commodity hardware has less storage space, we can change
the split size by writing the ‘custom splitter‘. There is a feature of customization in
Hadoop which can be called from the main method.
80.What does
a MapReduce partitioner do?
A MapReduce partitioner makes sure that all the values of a
single key go to the same reducer, thus allowing even distribution of the map
output over the reducers. It redirects the mapper output to the reducer by
determining which reducer is responsible for a particular key.
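As an illustration (a sketch, not the framework's own source), this is roughly what the default hash-style partitioning logic looks like when written as a custom partitioner in the old mapred API:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Every occurrence of the same key produces the same partition number,
// so all values for that key always reach the same reducer.
public class HashStylePartitioner implements Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    @Override
    public void configure(JobConf job) {
        // no configuration needed for this partitioner
    }
}
```

It would be registered in the driver with conf.setPartitionerClass(HashStylePartitioner.class).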
81.How is Hadoop different from other data processing tools?
In
Hadoop, based upon your requirements, you can increase or decrease the number
of mappers without bothering about the volume of data to be processed. This is
the beauty of parallel processing, in contrast to the other data
processing tools available.
82.Can we
rename the output file?
Yes, we can rename the output file by implementing the multiple output format class.
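Assuming the answer refers to the old-API MultipleTextOutputFormat class, a minimal sketch of renaming the output files by key could look like this (the naming scheme is purely illustrative):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Writes each record to a file whose name is derived from the record's key
// instead of the default part-00000 style name.
public class NamedOutputFormat extends MultipleTextOutputFormat<Text, IntWritable> {
    @Override
    protected String generateFileNameForKeyValue(Text key, IntWritable value, String name) {
        // 'name' is the default leaf file name (e.g. part-00000);
        // returning a different string effectively renames the output file.
        return key.toString() + "-" + name;
    }
}
```

It would be registered in the driver with conf.setOutputFormat(NamedOutputFormat.class).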
83.Why
can we not do aggregation (addition) in a mapper? Why do we require the reducer for
that?
We
cannot do aggregation (addition) in a mapper because sorting is not done in a
mapper. Sorting happens only on the reducer side. The mapper's initialization
depends upon each input split. While doing aggregation, we would lose the value
of the previous instance, because for each row a new mapper gets initialized and we
do not have a track of the previous row's value.
84.What is
Streaming?
Streaming
is a feature with Hadoop framework that allows us to do programming using
MapReduce in any programming language which can accept standard input and can
produce standard output. It could be Perl, Python, Ruby and not necessarily be
Java. However, customization in MapReduce can only be done using Java and not
any other programming language.
85.What
is a Combiner?
A
‘Combiner’ is a mini reducer that performs the local reduce task. It receives
the input from the mapper on a particular node and sends the output to the
reducer. Combiners help in enhancing the efficiency of MapReduce by
reducing the quantum of data that is required to be sent to the reducers.
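In the word-count sketch given earlier, the reducer can double as a combiner because summing partial counts is associative and commutative; assuming that driver, enabling the combiner is a single extra line:

```java
// Run the reduce logic locally on each mapper's output before the shuffle,
// so far fewer (word, count) pairs travel across the network to the reducers.
conf.setCombinerClass(WordCount.Reduce.class);
```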
86.What is
the difference between an HDFS Block and Input Split?
HDFS Block is the physical division of the data
and Input Split is
the logical division of the data.
87.What
happens in a textinputformat?
In textinputformat, each line in the text file is a record. Key is the byte offset of the line and value is the content of the line. For
instance, Key: longWritable, value: text.
88.What do you know about keyvaluetextinputformat?
In keyvaluetextinputformat, each line in the text file is a ‘record‘. The first separator character divides each
line. Everything before the separator is the key and everything after the separator is
the value. For instance, Key:
text, value: text.
89.What do you know about Sequencefileinputformat?
Sequencefileinputformat is an input format for reading in sequence
files. Key and value are user defined. It is a specific
compressed binary file format which is optimized for passing the data between
the output of one MapReduce job to the input of some other MapReduce job.
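A hedged sketch of how two jobs might be chained through a sequence-file directory (the paths and class name are assumptions, and the mapper/reducer settings are omitted for brevity):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class ChainedJobs {
    public static void main(String[] args) throws Exception {
        // The first job writes its output as a sequence file.
        JobConf first = new JobConf(ChainedJobs.class);
        first.setOutputFormat(SequenceFileOutputFormat.class);
        FileOutputFormat.setOutputPath(first, new Path("/tmp/intermediate"));   // hypothetical path
        // ... mapper/reducer and input settings omitted ...
        JobClient.runJob(first);

        // The second job reads that same directory back as its input.
        JobConf second = new JobConf(ChainedJobs.class);
        second.setInputFormat(SequenceFileInputFormat.class);
        FileInputFormat.setInputPaths(second, new Path("/tmp/intermediate"));
        // ... mapper/reducer and output settings omitted ...
        JobClient.runJob(second);
    }
}
```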
90.What do you know about Nlineoutputformat?
Nlineoutputformat splits ‘n’ lines of input as one split.
Setting Up Hadoop Cluster
91.Which are
the three modes in which Hadoop can be run?
The
three modes in which Hadoop can be run are:
1. Standalone (local) mode
2. Pseudo-distributed mode
3. Fully distributed mode
92.What are
the features of Stand alone (local) mode?
In
stand-alone mode there are no daemons; everything runs on a single JVM. It has
no DFS and utilizes the local file system. Stand-alone mode is suitable
only for running MapReduce programs during development. It is one of the
least used environments.
93.What are
the features of Pseudo mode?
Pseudo
mode is used both for development and in the QA environment. In the Pseudo mode
all the daemons run on the same machine.
94.Can we
call VMs as pseudos?
No,
VMs are not pseudos, because a VM is something different and pseudo mode is very
specific to Hadoop.
95.What are
the features of Fully Distributed mode?
Fully
Distributed mode is used in the production environment, where we have ‘n’
number of machines forming a Hadoop cluster. Hadoop daemons run on a cluster of
machines. There is one host onto which Namenode is running and another host on
which datanode is running and then there are machines on which task tracker is
running. We have separate masters and separate slaves in this distribution.
96.Does
Hadoop follow the UNIX pattern?
Yes, Hadoop closely follows the UNIX pattern. Hadoop also has
the ‘conf‘ directory as in the
case of UNIX.
97.In which
directory Hadoop is installed?
Cloudera
and Apache have the same directory structure. Hadoop is installed in
/usr/lib/hadoop-0.20/.
98.What are
the port numbers of Namenode, job tracker and task tracker?
The
default web UI port number for the Namenode is 50070, for the job tracker it is 50030 and for the task tracker
it is 50060.
99.What is
the Hadoop-core configuration?
The Hadoop
core used to be configured by two XML files:
1. hadoop-default.xml and 2. hadoop-site.xml.
These files are written in XML format. We have certain properties in these XML files, which consist of a name and a value. But these files do not exist now.
100.What are
the Hadoop configuration files at present?
There
are 3 configuration files in Hadoop:
1.
core-site.xml
2.
hdfs-site.xml
3.
mapred-site.xml
These files are located in the conf/ subdirectory.
101.How to
exit the Vi editor?
To
exit the Vi Editor, press ESC and type :q and then press enter.
102.What is a
spill factor with respect to the RAM?
Spill
factor is the size after which your files move to the temp file. Hadoop-temp
directory is used for this.
103.Is
fs.mapr.working.dir a single directory?
Yes, fs.mapr.working.dir is just one directory.
104.Which are the three main hdfs-site.xml properties?
The
three main hdfs-site.xml properties are:
1. dfs.name.dir, which
gives you the location where the metadata will be stored and where the DFS is
located, on disk or on a remote machine.
2. dfs.data.dir which
gives you the location where the data is going to be stored.
3. fs.checkpoint.dir
which is for secondary Namenode.
105.How to
come out of the insert mode?
To
come out of the insert mode, press ESC, type :q (if you have not written
anything) OR type :wq (if you have written anything in the file) and then press
ENTER.
106.What is
Cloudera and why it is used?
Cloudera
is a distribution of Hadoop; 'cloudera' is also the user created on the VM by default. Cloudera
packages Apache Hadoop and is used for data processing.
107.What
happens if you get a ‘connection refused java exception’ when you type hadoop
fsck /?
It
could mean that the Namenode is not working on your VM.
108.We are
using Ubuntu operating system with Cloudera, but from where we can download
Hadoop or does it come by default with Ubuntu?
Hadoop
does not come by default with Ubuntu; it is a separate package that you have to download from Cloudera or
from Edureka's dropbox and then run on your systems. You can also proceed
with your own configuration, but you need a Linux box, be it Ubuntu or Red Hat.
There are installation steps present at the Cloudera location or in Edureka's
dropbox. You can go either way.
109.What does
‘jps’ command do?
This
command checks whether your Namenode, datanode, task tracker, job tracker, etc
are working or not.
110.How can I
restart Namenode?
1. Run stop-all.sh and then run start-all.sh, OR
2. Write sudo hdfs (press enter), su-hdfs (press enter), /etc/init.d/ha (press enter) and then /etc/init.d/hadoop-0.20-namenode start (press enter).
111.What is
the full form of fsck?
Full form of fsck is File System Check.
112.How can
we check whether Namenode is working or not?
To
check whether Namenode is working or not, use the command /etc/init.d/hadoop-0.20-namenode
status or as simple as jps.
113.What does
the command mapred.job.tracker do?
mapred.job.tracker is a configuration property rather than a command; it specifies which of your nodes
acts as the job tracker.
114.What does
/etc /init.d do?
/etc
/init.d specifies where
daemons (services) are placed or to see the status of these daemons. It is very
LINUX specific, and nothing to do with Hadoop.
115.How can
we look for the Namenode in the browser?
If you have to look for the Namenode in the browser, you don't
use localhost:8021; the port number to look for the Namenode in the browser
is 50070.
116.How to
change from SU to Cloudera?
To
change from SU to Cloudera just type exit.
117.Which
files are used by the startup and shutdown commands?
Slaves and Masters are used by the startup and the shutdown
commands.
118.What do
slaves consist of?
Slaves
consist of a list of hosts, one per line, that host datanode and task tracker
servers.
119.What do
masters consist of?
Masters
contain a list of hosts, one per line, that are to host secondary namenode
servers.
120.What does
hadoop-env.sh do?
hadoop-env.sh provides the environment for Hadoop to
run. JAVA_HOME is set over here.
121.Can we
have multiple entries in the master files?
Yes,
we can have multiple entries in the Master files.
122.Where is
hadoop-env.sh file present?
hadoop-env.sh file is present in the conf location.
123.In
Hadoop_PID_DIR, what does PID stand for?
PID
stands for ‘Process ID’.
124.What does
/var/hadoop/pids do?
It
stores the PID.
125.What does
hadoop-metrics.properties file do?
hadoop-metrics.properties is used for ‘Reporting‘ purposes. It controls the reporting for
Hadoop. The default status is ‘not to report‘.
126.What are
the network requirements for Hadoop?
The Hadoop core uses Shell (SSH) to launch the server processes
on the slave nodes. It requires password-less SSH connection between the master and
all the slaves and the secondary machines.
127.Why do we
need a password-less SSH in Fully Distributed environment?
We need a password-less SSH in a Fully-Distributed environment
because when the cluster is LIVE and running in Fully
Distributed environment, the communication is too frequent. The job tracker should be able to send a task to task tracker quickly.
128.Does this
lead to security issues?
No,
not at all. A Hadoop cluster is an isolated cluster, and generally it has nothing
to do with the internet. It has a different kind of configuration. We needn't
worry about that kind of security breach, for instance, someone hacking in
through the internet, and so on. Hadoop has a very secure way of connecting to
other machines to fetch and process data.
129.On which
port does SSH work?
SSH
works on Port No. 22, though it can be configured. 22 is
the default Port number.
130.Can you
tell us more about SSH?
SSH
is nothing but a secure shell communication, it is a kind of a protocol that
works on a Port No. 22, and when you do an SSH, what you really require is a
password.
131.Why
password is needed in SSH localhost?
Password is required in SSH for security and in a situation
where password-less communication
is not set.
132.Do we
need to give a password, even if the key is added in SSH?
Yes,
password is still required even if the key is added in SSH.
133.What if a
Namenode has no data?
If
a Namenode has no data it is not a Namenode. Practically, Namenode will have
some data.
134.What
happens to job tracker when Namenode is down?
When
Namenode is down, your cluster is OFF, this is because Namenode is the single
point of failure in HDFS.
135.What
happens to a Namenode, when job tracker is down?
When
a job tracker is down, it will not be functional but Namenode will be present.
So, cluster is accessible if Namenode is working, even if the job tracker is
not working.
136.Can you
give us some more details about SSH communication between Masters and the
Slaves?
SSH
is a password-less secure communication where data packets are sent across the
slave. It has some format into which data is sent across. SSH is not only
between masters and slaves but also between two hosts.
137.What is
formatting of the DFS?
Just
like we do for Windows, DFS is formatted for proper structuring. It is not usually
done as it formats the Namenode too.
138.Does the
HDFS client decide the input split or Namenode?
No,
the Client does not decide. It is already specified in one of the
configurations through which input split is already configured.
139.In
Cloudera there is already a cluster, but if I want to form a cluster on Ubuntu
can we do it?
Yes,
you can go ahead with this! There are installation steps for creating a new
cluster. You can uninstall your present cluster and install the new cluster.
140.Can we
create a Hadoop cluster from scratch?
Yes
we can do that also once we are familiar with the Hadoop environment.
141.Can we
use Windows for Hadoop?
Actually, Red Hat Linux or Ubuntu are the best Operating Systems for
Hadoop. Windows is not used frequently for installing Hadoop as there are many
support problems attached with Windows. Thus, Windows is not a preferred
environment for Hadoop.
Hadoop PIG
142.Can you
give us some examples how Hadoop is used in real time environment?
Let us assume that we have an exam consisting of 10
multiple-choice questions and 20 students appear for that exam. Every
student will attempt each question. For each question and each answer option, a
key will be generated. So we have a set of key-value pairs for all the questions and all the answer
options for every student. Based on the options that the students have
selected, you have to analyze and find out how many students have answered
correctly. This isn't an easy task. Here Hadoop comes into the picture! Hadoop
helps you in solving these problems quickly and without much effort. You may
also take the case of how many students have wrongly attempted a particular
question.
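A minimal sketch of the map side of such a job, assuming a hypothetical comma-separated record of the form studentId,questionId,selectedOption,correctOption; it emits (questionId, 1) for every correct answer so that a summing reducer (like the word-count one sketched earlier) can count correct answers per question:

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical input record: studentId,questionId,selectedOption,correctOption
public class CorrectAnswerMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text questionId = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
        String[] fields = value.toString().split(",");
        if (fields.length < 4) {
            return;   // skip malformed records
        }
        // Emit (questionId, 1) only when the student picked the right option;
        // the reducer then simply sums the ones per question.
        if (fields[2].trim().equalsIgnoreCase(fields[3].trim())) {
            questionId.set(fields[1].trim());
            output.collect(questionId, ONE);
        }
    }
}
```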
143.What is
BloomMapFile used for?
The BloomMapFile is
a class that extends MapFile. So its functionality
is similar to MapFile. BloomMapFile uses dynamic Bloom filters to provide quick
membership test for the keys. It is used in Hbase table format.
144.What is
PIG?
PIG
is a platform for analyzing large data sets. It consists of a high-level language
for expressing data analysis programs, coupled with infrastructure for
evaluating these programs. PIG's infrastructure layer consists of a compiler
that produces a sequence of MapReduce programs.
145.What is
the difference between logical and physical plans?
Pig undergoes some steps when a Pig Latin Script is converted
into MapReduce jobs. After performing the basic parsing and semantic checking,
it produces a logical plan. The logical plan describes the logical operators that
have to be executed by Pig during execution. After this, Pig produces a
physical plan. The physical plan describes
the physical operators that are needed to execute the script.
146.Does
‘ILLUSTRATE’ run MR job?
No,
illustrate will not pull any MR, it will pull the internal data. On the
console, illustrate will not do any job. It just shows output of each stage and
not the final output.
147.Is the
keyword ‘DEFINE’ like a function name?
Yes,
the keyword ‘DEFINE’ is like a function name. Once you have registered, you
have to define it. Whatever logic you have written in Java program, you have an
exported jar and also a jar registered by you. Now the compiler will
check the function in exported jar. When the function is not present in the
library, it looks into your jar.
148.Is the
keyword ‘FUNCTIONAL’ a User Defined Function (UDF)?
No, the keyword ‘FUNCTIONAL’ is
not a User Defined Function (UDF). While using UDF, we have to override some
functions. Certainly you have to do your job with the help of these functions
only. But the keyword ‘FUNCTIONAL’ is a built-in function i.e a pre-defined
function, therefore it does not work as a UDF.
149.Why do we
need MapReduce during Pig programming?
Pig is a high-level platform that makes many Hadoop data
analysis issues easier to execute. The language we use for this platform
is: Pig Latin.
A program written in Pig Latin is
like a query written in SQL, where we need an execution engine to execute the
query. So, when a program is written in Pig Latin, Pig compiler will convert
the program into MapReduce jobs. Here, MapReduce acts as the execution engine.
150.Are there
any problems which can only be solved by MapReduce and cannot be solved by PIG?
In which kind of scenarios MR jobs will be more useful than PIG?
Let us take a scenario where we want to count the population in
two cities. I have a data set and a sensor list of different cities. I want
to count the population by using one MapReduce job for two cities. Let us
assume that one is Bangalore and the other is Noida. So I need to treat the key
of Bangalore city as similar to Noida, through which I can bring the
population data of these two cities to one reducer. The idea behind this is that
somehow I have to instruct the MapReduce program that whenever it finds a city with
the name 'Bangalore'
or a city with the name 'Noida', it should create an
alias name which will be the common name for these two cities, so that a
common key is created for both cities and it gets passed to the same reducer.
For this, we have to write a custom partitioner.
In MapReduce, when you create a 'key' for a city, you have to consider 'city' as the key. So, whenever the framework
comes across a different city, it considers it as a different key. Hence, we
need to use a customized partitioner. There is a provision in MapReduce only,
where you can write your custom partitioner and mention: if city = Bangalore or
Noida, then pass a similar hashcode. However, we cannot create a custom
partitioner in Pig. As Pig is not a framework, we cannot direct the execution engine
to customize the partitioner. In such scenarios, MapReduce works better than
Pig. A minimal sketch of such a partitioner is shown below.
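The sketch below (written against the old mapred API, assuming the city name is the map output key; the alias string is purely illustrative) routes records for both cities to the same reducer:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Routes records for 'Bangalore' and 'Noida' to the same reducer by mapping
// both city names to one alias before computing the partition number.
public class CityAliasPartitioner implements Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        String city = key.toString();
        if (city.equalsIgnoreCase("Bangalore") || city.equalsIgnoreCase("Noida")) {
            city = "bangalore-noida";   // common alias -> common partition
        }
        return (city.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    @Override
    public void configure(JobConf job) {
        // nothing to configure
    }
}
```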
151.Does Pig
give any warning when there is a type mismatch or missing field?
No,
Pig will not show any warning if there is a missing field or a type mismatch, and
even if it did, such a warning would be difficult to find in the log file. If
any mismatch is found, Pig assumes a null value.
152.What
co-group does in Pig?
Co-group
joins the data set by grouping one particular data set only. It groups the
elements by their common field and then returns a set of records containing two
separate bags. The first bag consists of the record of the first data set with
the common data set and the second bag consists of the records of the
second data set with the common data set.
153.Can we say
cogroup is a group of more than 1 data set?
Cogroup is a group of one data set. But in the case of more than
one data sets, cogroup will group all the data sets and join them based on the
common field. Hence, we can say that cogroup is a group of more than one data set and join of that data set as well.
154.What does
FOREACH do?
FOREACH
is used to apply transformations to the data and to generate new data items.
The name itself is indicating that for each element of a data bag, the
respective action will be performed.
Syntax : FOREACH bagname GENERATE
expression1, expression2, …..
The meaning of this statement is that the expressions mentioned after GENERATE will be applied to the current record of the data bag.
156.What is a bag?
A bag is one of the data models in Pig. It is an unordered collection of tuples, possibly with duplicates, and it is used to hold collections of records, for example while grouping.
JAVA
- How comfortable are you with Java, on a 1-10 scale?
- What is the Collection interface? What is the difference between SET and MAP?
- What are the Object-Oriented Programming (OOPS) concepts? Explain them.
- What is the difference between an Interface and an Abstract class, and when do you need to define each?
Hadoop:
- What is your work? How did you get the work, and with whom did you work?
- What are the different configuration settings needed to set up HDFS, and which configuration files do you use?
- Explain the JobConf.
- What happens to the data if any job fails in between?
- What is the difference between the Comparable and Comparator classes?
- How do you run or execute a job? How does it execute?
- What is the difference between an External Table and an Internal Table in HIVE?
- How can partitions be done on HIVE tables (partition the file using a Date field), and what schema did you use?
- What is the most difficult query you have written using HiveQL?
- How can you improve the performance of a MapReduce job without increasing the number of reducers, given that increasing it will also affect other jobs/users?
- Have you heard about UDFs, and how comfortable are you using UDFs (User Defined Functions)?