Bidding policies for market-based HPC workflow scheduling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

Bidding policies for market-based HPC workflow scheduling. / Burkimsher, A.; Indrusiak, L. S.

Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference. Arxiv (Cornell University), 2016.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Burkimsher, A & Indrusiak, LS 2016, Bidding policies for market-based HPC workflow scheduling. in Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference. Arxiv (Cornell University), HiPEAC Conference, Prague, Czech Republic, 18/01/16. <http://arxiv.org/abs/1601.07047>

APA

Burkimsher, A., & Indrusiak, L. S. (2016). Bidding policies for market-based HPC workflow scheduling. In Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference Arxiv (Cornell University). http://arxiv.org/abs/1601.07047

Vancouver

Burkimsher A, Indrusiak LS. Bidding policies for market-based HPC workflow scheduling. In Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference. Arxiv (Cornell University). 2016

Author

Burkimsher, A. ; Indrusiak, L. S. / Bidding policies for market-based HPC workflow scheduling. Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference. Arxiv (Cornell University), 2016.

BibTeX

@inproceedings{8dfc5a1871714dd6b33bd088724e25f1,
title = "Bidding policies for market-based HPC workflow scheduling",
abstract = "This paper considers the scheduling of jobs on distributed, heterogeneous High Performance Computing (HPC) clusters. Market-based approaches are known to be efficient for allocating limited resources to those that are most prepared to pay. This context is applicable to an HPC or cloud computing scenario where the platform is overloaded. In this paper, jobs are composed of dependent tasks. Each job has a non-increasing time-value curve associated with it. Jobs are submitted to and scheduled by a market-clearing centralised auctioneer. This paper compares the performance of several policies for generating task bids. The aim investigated here is to maximise the value for the platform provider while minimising the number of jobs that do not complete (or starve). It is found that the Projected Value Remaining bidding policy gives the highest level of value under a typical overload situation, and gives the lowest number of starved tasks across the space of utilisation examined. It does this by attempting to capture the urgency of tasks in the queue. At high levels of overload, some alternative algorithms produce slightly higher value, but at the cost of a hugely higher number of starved workflows.",
author = "A. Burkimsher and Indrusiak, {L. S.}",
year = "2016",
month = jan,
language = "English",
booktitle = "Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference",
publisher = "Arxiv (Cornell University)",
note = "HiPEAC Conference ; Conference date: 18-01-2016 Through 20-01-2016",

}

RIS (suitable for import to EndNote)

TY - GEN

T1 - Bidding policies for market-based HPC workflow scheduling

AU - Burkimsher, A.

AU - Indrusiak, L. S.

PY - 2016/1

Y1 - 2016/1

N2 - This paper considers the scheduling of jobs on distributed, heterogeneous High Performance Computing (HPC) clusters. Market-based approaches are known to be efficient for allocating limited resources to those that are most prepared to pay. This context is applicable to an HPC or cloud computing scenario where the platform is overloaded. In this paper, jobs are composed of dependent tasks. Each job has a non-increasing time-value curve associated with it. Jobs are submitted to and scheduled by a market-clearing centralised auctioneer. This paper compares the performance of several policies for generating task bids. The aim investigated here is to maximise the value for the platform provider while minimising the number of jobs that do not complete (or starve). It is found that the Projected Value Remaining bidding policy gives the highest level of value under a typical overload situation, and gives the lowest number of starved tasks across the space of utilisation examined. It does this by attempting to capture the urgency of tasks in the queue. At high levels of overload, some alternative algorithms produce slightly higher value, but at the cost of a hugely higher number of starved workflows.

AB - This paper considers the scheduling of jobs on distributed, heterogeneous High Performance Computing (HPC) clusters. Market-based approaches are known to be efficient for allocating limited resources to those that are most prepared to pay. This context is applicable to an HPC or cloud computing scenario where the platform is overloaded. In this paper, jobs are composed of dependent tasks. Each job has a non-increasing time-value curve associated with it. Jobs are submitted to and scheduled by a market-clearing centralised auctioneer. This paper compares the performance of several policies for generating task bids. The aim investigated here is to maximise the value for the platform provider while minimising the number of jobs that do not complete (or starve). It is found that the Projected Value Remaining bidding policy gives the highest level of value under a typical overload situation, and gives the lowest number of starved tasks across the space of utilisation examined. It does this by attempting to capture the urgency of tasks in the queue. At high levels of overload, some alternative algorithms produce slightly higher value, but at the cost of a hugely higher number of starved workflows.

M3 - Conference contribution

BT - Proc. 2nd Int Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud) - HiPEAC Conference

PB - Arxiv (Cornell University)

T2 - HiPEAC Conference

Y2 - 18 January 2016 through 20 January 2016

ER -
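
Illustration. The abstract describes jobs with non-increasing time-value curves, a market-clearing centralised auctioneer, and a "Projected Value Remaining" bidding policy that tries to capture queue urgency, but it gives no formulas. The following Python sketch is a hypothetical reading of that setup, not the authors' implementation: the curve shape (flat then linear decay), the field names, and the bid definition are all assumptions made for illustration.

# Illustrative sketch only; every name, curve shape, and bid formula below is
# an assumption based on the abstract, not the paper's actual method.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    max_value: float          # value earned if the job completes immediately
    soft_deadline: float      # completion time up to which full value is paid
    zero_value_time: float    # completion time at which value has decayed to zero
    remaining_work: float     # estimated remaining execution time


def time_value(job: Job, finish_time: float) -> float:
    """Non-increasing time-value curve: full value up to the soft deadline,
    then linear decay to zero (one possible curve shape, assumed here)."""
    if finish_time <= job.soft_deadline:
        return job.max_value
    if finish_time >= job.zero_value_time:
        return 0.0
    span = job.zero_value_time - job.soft_deadline
    return job.max_value * (job.zero_value_time - finish_time) / span


def projected_value_remaining_bid(job: Job, now: float) -> float:
    """Hypothetical 'Projected Value Remaining' bid: the value the job would
    still obtain if it ran from `now` without further queueing delay, so a
    job whose value is about to decay bids only what it can still earn."""
    projected_finish = now + job.remaining_work
    return time_value(job, projected_finish)


if __name__ == "__main__":
    # Toy market clearing: the auctioneer schedules the highest-bidding queued job.
    now = 100.0
    queue = [
        Job("urgent", max_value=50.0, soft_deadline=90.0,
            zero_value_time=160.0, remaining_work=40.0),
        Job("relaxed", max_value=80.0, soft_deadline=300.0,
            zero_value_time=500.0, remaining_work=40.0),
    ]
    bids = {job.name: projected_value_remaining_bid(job, now) for job in queue}
    winner = max(bids, key=bids.get)
    print(bids, "->", winner)

Running the sketch prints each job's bid and the job the toy auctioneer would schedule next; it is intended only to make the abstract's terminology concrete.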