Sunday, March 22, 2020

Hadoop Architecture

Hadoop 1.0
1. First, the client submits the job to the Job Tracker.
2. The Job Tracker contacts the Name Node for the data (block) locations.
3. The Job Tracker then gives that information back to the client.
4. Meanwhile, the Job Tracker informs the Task Trackers about the job that is about to arrive.
5. The client reaches the Task Tracker and gives it the job details (jar path, data path).
6. The Task Tracker starts the job (Map Reduce).
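The Map Reduce work that the Task Tracker runs in step 6 can be sketched as a word count in the style of Hadoop Streaming. This is a minimal, in-process illustration; the function names are mine, not part of the Hadoop API.

```python
def map_words(line):
    """Map phase: emit a (word, 1) pair for every word in an input line."""
    return [(word, 1) for word in line.split()]

def reduce_counts(pairs):
    """Reduce phase: sum the counts emitted for each word."""
    counts = {}
    for word, n in pairs:
        counts[word] = counts.get(word, 0) + n
    return counts

if __name__ == "__main__":
    lines = ["big data big cluster", "big data"]
    pairs = []
    for line in lines:
        pairs.extend(map_words(line))   # map phase, one call per input line
    print(reduce_counts(pairs))         # reduce phase merges all mapper output
```

In a real cluster the map calls run in parallel on the nodes holding the input splits, and a shuffle groups each word's pairs before the reducers run; here both phases run in one process just to show the data flow.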


Hadoop 2.0
1. The client submits the job to the Resource Manager (RM).
2. The RM contacts the Name Node.
3. The Name Node gives the RM the input-split information (block locations).
4. The RM gives that information back to the client.
5. Meanwhile, the RM asks the respective Node Managers to create containers on the nodes where the data is present, and also asks one Node Manager to create the Application Master.
6. The job reaches the containers; the Application Master submits the job inside the containers and monitors it.
7. The Node Manager monitors the life of each container.
8. The Application Master keeps track of the Map Reduce tasks.
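The steps above can be modelled as a toy, in-process simulation of the YARN submission flow. All class and method names here are illustrative stand-ins, not the real Hadoop API; the point is only to show who talks to whom.

```python
class NameNode:
    """Holds a toy mapping of file path -> nodes storing its blocks."""
    def __init__(self, block_locations):
        self.block_locations = block_locations

    def get_splits(self, path):
        # Step 3: return the input-split (block location) info to the RM
        return self.block_locations[path]

class NodeManager:
    """Runs on one worker node and launches containers there."""
    def __init__(self, name):
        self.name = name
        self.containers = []

    def launch_container(self, job):
        # Step 5: create a container on the node where the data lives
        self.containers.append(job)
        return f"container for {job} on {self.name}"

class ResourceManager:
    """Accepts jobs and coordinates Name Node and Node Managers."""
    def __init__(self, name_node, node_managers):
        self.name_node = name_node
        self.node_managers = node_managers  # node name -> NodeManager

    def submit(self, job, data_path):
        nodes = self.name_node.get_splits(data_path)          # steps 2-3
        launched = [self.node_managers[n].launch_container(job)
                    for n in nodes]                           # step 5
        return launched                                       # step 4: back to client

if __name__ == "__main__":
    nn = NameNode({"/data/logs": ["node1", "node2"]})
    nms = {"node1": NodeManager("node1"), "node2": NodeManager("node2")}
    rm = ResourceManager(nn, nms)
    print(rm.submit("wordcount", "/data/logs"))
    # -> ['container for wordcount on node1', 'container for wordcount on node2']
```

The sketch deliberately leaves out the Application Master and container monitoring (steps 6-8); it only traces the submission path from client to RM to the data-local containers.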
