ECE PhD Defense: Yi Yao

442 Dana

August 11, 2015, 2:00 pm to 4:00 pm

Title:
Resource Management in Cluster Computing Platforms for Large-scale Data Processing

Abstract:
In the era of data explosion, cluster computing for large-scale data processing has become one of the most active research areas. Many cluster computing frameworks and cluster resource management schemes have recently been developed to satisfy the growing demand for processing large volumes of data. Among them, Apache Hadoop has become the de facto platform, widely adopted in both industry and academia for its scalability, simplicity, and fault tolerance. The original Hadoop platform was designed to closely resemble MapReduce, a programming paradigm for cluster computing proposed by Google. However, the MapReduce framework does not fit all kinds of data processing, such as iterative processing. Therefore, the Hadoop platform has recently evolved into its second generation, Hadoop YARN, which serves as a unified cluster resource management layer supporting the multiplexing of different cluster computing frameworks. A fundamental issue in this field is how to efficiently manage and schedule the execution of a large number of data processing jobs given the limited computing resources of a cluster. In this dissertation, we therefore focus on improving system efficiency and performance for cluster computing platforms such as Hadoop MapReduce and Hadoop YARN by designing the following new scheduling algorithms and resource management schemes.

First, we developed a new scheduler (LsPS) and a slot configuration scheme (TuMM) for Hadoop MapReduce to improve the performance of MapReduce jobs on the Hadoop platform. A production Hadoop MapReduce cluster mainly serves two kinds of data processing jobs: (1) ad hoc jobs that are submitted by multiple users over time for time-sensitive queries, and (2) batch jobs that are grouped together and submitted at the same time for processing. The major performance consideration for ad hoc jobs is average job response time, while the total completion time, i.e., the makespan, is the most important performance metric for batch jobs. We first designed a Hadoop scheduler, named LsPS, which aims to improve average job response times by leveraging the job size patterns of different users to tune resource sharing between users and to choose a good scheduling policy for each user. We then presented a self-adjusting slot configuration scheme, named TuMM, for Hadoop MapReduce to reduce the makespan of batch jobs. TuMM abandons the static, manual slot configuration of the existing Hadoop MapReduce framework. Instead, using a feedback control mechanism, TuMM dynamically tunes the numbers of map and reduce slots on each cluster node based on monitored workload information to align the execution of the map and reduce phases; a simple version of this idea is sketched below. Our experimental results demonstrate that both average job response time and job makespan are improved under the proposed LsPS scheduler and TuMM dynamic slot configuration scheme.
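The following is a minimal sketch, not the dissertation's actual implementation, of the feedback-control idea behind TuMM: at each control interval, a node's fixed pool of slots is re-split between map and reduce in proportion to the estimated remaining work in each phase, so the two phases finish at roughly the same time. All names and the proportional policy are illustrative assumptions.

    // Hypothetical sketch of TuMM-style slot tuning; not the actual TuMM code.
    public class TummSketch {
        // Re-split a node's slots in proportion to the remaining map and
        // reduce work, so that both phases drain at roughly the same rate.
        static int[] splitSlots(int totalSlots, double pendingMapWork, double pendingReduceWork) {
            double total = pendingMapWork + pendingReduceWork;
            // Guard against an empty workload; fall back to an even split.
            int mapSlots = (total == 0)
                    ? totalSlots / 2
                    : (int) Math.round(totalSlots * pendingMapWork / total);
            // Keep at least one slot per phase so neither side starves.
            mapSlots = Math.max(1, Math.min(totalSlots - 1, mapSlots));
            return new int[] { mapSlots, totalSlots - mapSlots };
        }

        public static void main(String[] args) {
            // Example: 8 slots per node, map phase has 3x the remaining work,
            // so the controller shifts most slots to the map phase.
            int[] slots = splitSlots(8, 300.0, 100.0);
            System.out.println("map slots=" + slots[0] + ", reduce slots=" + slots[1]);
        }
    }

In a real controller this computation would be re-run at every monitoring interval, replacing the static per-node slot counts set in the Hadoop configuration files.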

The second main contribution of this dissertation is the development of a new scheduler and a new resource management scheme for the next-generation Hadoop, i.e., Hadoop YARN. Unlike the first-generation Hadoop platform, Hadoop YARN enables fine-grained resource management. However, we found that the existing scheduling policies do not take the multi-dimensional resource requirements of tasks (CPU cores and memory) into consideration, which results in low resource utilization and poor system throughput. To address this issue, we designed a YARN scheduler, named HaSTE, which effectively reduces the makespan of MapReduce jobs on the YARN platform by leveraging information about requested resources, resource capacities, and the dependencies between tasks. Moreover, we observed low cluster resource utilization even under a good scheduling algorithm. The main reason is that the current reservation-based resource management schemes in Hadoop YARN reserve a fixed amount of resources for each task during its entire lifetime. Since tasks often have fluctuating resource usage patterns, their occupied resources can sit idle for long periods. To solve this problem, we proposed an opportunistic scheduling scheme that reassigns these idle, reserved resources to other waiting tasks; a simple version of this idea is sketched below. The scheme leverages the actual resource utilization measured on each working node to determine resource availability and identifies short tasks as eligible to receive idle resources. The major goal of the new scheme is to improve system resource utilization without incurring severe resource contention due to resource over-provisioning.
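Below is a minimal sketch, with hypothetical names and data types, of the opportunistic scheduling idea: compare a node's reserved capacity against its measured usage, then lend the idle gap to queued tasks identified as short-running. It illustrates the concept rather than the scheme's actual implementation.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of opportunistic resource reassignment on one node.
    public class OpportunisticSketch {
        // A waiting task's resource request and a flag marking it as short-running.
        record Task(String id, int memMb, int vcores, boolean isShort) {}

        static List<Task> assignIdleResources(int reservedMemMb, int usedMemMb,
                                              int reservedVcores, int usedVcores,
                                              List<Task> waiting) {
            // Idle = resources reserved on the node but unused per monitoring data.
            int idleMem = reservedMemMb - usedMemMb;
            int idleVcores = reservedVcores - usedVcores;
            List<Task> launched = new ArrayList<>();
            for (Task t : waiting) {
                // Only short tasks are eligible, limiting the contention risk
                // when the original reservations ramp their usage back up.
                if (t.isShort() && t.memMb() <= idleMem && t.vcores() <= idleVcores) {
                    idleMem -= t.memMb();
                    idleVcores -= t.vcores();
                    launched.add(t);
                }
            }
            return launched;
        }

        public static void main(String[] args) {
            List<Task> waiting = List.of(
                    new Task("t1", 1024, 1, true),
                    new Task("t2", 4096, 2, false),
                    new Task("t3", 512, 1, true));
            // Node has reserved 8 GB / 4 cores but is only using 2 GB / 1 core,
            // so the two short tasks fit into the idle gap; the long task waits.
            assignIdleResources(8192, 2048, 4, 1, waiting)
                    .forEach(t -> System.out.println("launch " + t.id()));
        }
    }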

We implemented all of our resource management schemes in the open-source code of Hadoop MapReduce and Hadoop YARN, and we evaluated the effectiveness of these new schedulers and schemes on different cluster systems, including our local clusters and large clusters in the cloud, such as Amazon EC2. Representative benchmarks (e.g., WordCount, TeraSort, TPC-H) were used for sensitivity analysis and performance evaluation. Experimental results demonstrate that our new Hadoop/YARN schedulers and resource management schemes successfully improve performance in terms of job response time, job makespan, and system utilization on both the Hadoop MapReduce and Hadoop YARN platforms.

Advisor:
Professor Ningfang Mi

Committee Members:
Professor Mirek Riedewald
Dr. Xiaoyun Zhu
Professor Yunsi Fei