A HADOOP ANALYSIS OF MAPREDUCE SCHEDULING ALGORITHMS
Abstract
Big data has ushered in an era in which massive amounts of data are gathered at accelerating rates. Driven by improvements in processing speed, storage capacity, and data availability, the volume of the world's data is now measured in zettabytes. Hadoop is one of the leading big data technologies; it analyses data using MapReduce and the Hadoop Distributed File System (HDFS). Job scheduling is an essential task for effective cluster resource management. Hadoop schedulers are pluggable components that allocate cluster resources to jobs; among the many available schedulers, the default FIFO, Fair, and Capacity schedulers are the most widely used. This paper presents a thorough analysis of the various job scheduling algorithms, together with a comparative parametric analysis that highlights the essential features these schedulers share.
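To illustrate what "pluggable" means in practice: in YARN-based Hadoop, the active scheduler is selected through a single configuration property in yarn-site.xml. The sketch below shows the standard property name and the Capacity Scheduler class shipped with Hadoop; the Fair Scheduler can be substituted by pointing the same property at its class instead.

```xml
<!-- yarn-site.xml: selecting the scheduler used by the ResourceManager -->
<configuration>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <!-- Default in recent Hadoop releases: the Capacity Scheduler -->
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <!-- To use the Fair Scheduler instead, set the value to:
         org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler -->
  </property>
</configuration>
```

Because the scheduler is loaded from this class name at ResourceManager start-up, swapping scheduling policies requires only a configuration change and a restart, not a code change.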