Parallel file system analysis through application I/O tracing

Research output: Contribution to journal › Article

Author(s)

  • S. A. Wright
  • S. D. Hammond
  • S. J. Pennycook
  • R. F. Bird
  • J. A. Herdman
  • I. Miller
  • A. Vadgama
  • A. Bhalerao
  • S. A. Jarvis

Publication details

Journal: Computer Journal
Date: Published 1 Feb 2013
Issue number: 2
Volume: 56
Number of pages: 15
Pages (from-to): 141-155
Original language: English

Abstract

Input/Output (I/O) operations can represent a significant proportion of the run-time of parallel scientific computing applications. Although there have been several advances in file format libraries, file system design and I/O hardware, a growing divergence exists between the performance of parallel file systems and the compute clusters that they support. In this paper, we document the design and application of the RIOT I/O toolkit (RIOT) being developed at the University of Warwick with our industrial partners at the Atomic Weapons Establishment and Sandia National Laboratories. We use the toolkit to assess the performance of three industry-standard I/O benchmarks on three contrasting supercomputers, ranging from a mid-sized commodity cluster to a large-scale proprietary IBM BlueGene/P system. RIOT provides a powerful framework in which to analyse I/O and parallel file system behaviour. We demonstrate, for example, the large file-locking overhead of IBM's General Parallel File System, which can consume nearly 30% of the total write time in the FLASH-IO benchmark. Through I/O trace analysis, we also assess the performance of HDF-5 in its default configuration, identifying a bottleneck created by the use of suboptimal Message Passing Interface hints. Furthermore, we investigate the performance gains attributed to the Parallel Log-structured File System (PLFS) being developed by EMC Corporation and the Los Alamos National Laboratory. Our evaluation of PLFS involves two high-performance computing systems with contrasting I/O backplanes and illustrates the varied improvements to I/O that result from the deployment of PLFS (ranging from up to 25× speed-up in I/O performance on a large I/O installation to 2× speed-up on the much smaller installation at the University of Warwick).
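The MPI hint bottleneck mentioned in the abstract concerns the MPI-IO hints that HDF-5 passes to the underlying file system. As a minimal sketch (not the paper's code), the following shows how an application can supply its own MPI-IO hints to HDF-5 through the standard parallel HDF-5 and ROMIO interfaces; the specific hint values here are illustrative assumptions, not tuning recommendations from the paper.

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Illustrative hints: enable ROMIO collective buffering and set a
           16 MiB aggregation buffer. These are standard ROMIO hint names;
           the values shown are assumptions for the sketch. */
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_cb_write", "enable");
        MPI_Info_set(info, "cb_buffer_size", "16777216");

        /* Attach the hints to the HDF-5 file-access property list so they
           reach the MPI-IO layer instead of the library defaults. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

        hid_t file = H5Fcreate("checkpoint.h5", H5F_ACC_TRUNC,
                               H5P_DEFAULT, fapl);

        /* ... collective dataset writes would go here ... */

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }

Building this would require a parallel HDF-5 installation (e.g. via the h5pcc compiler wrapper). Which hints actually help is platform-specific, which is precisely the kind of behaviour that I/O trace analysis of the sort described above is intended to expose.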

Bibliographical note

© The Author 2012

Research areas

  • checkpointing, file systems, high performance computing, input/output, MPI
