Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges

Research output: Working paper

Standard

Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. / Ashmore, Rob; Calinescu, Radu; Paterson, Colin.

2019.
Harvard

Ashmore, R, Calinescu, R & Paterson, C 2019 'Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges'.

APA

Ashmore, R., Calinescu, R., & Paterson, C. (2019). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges.

Vancouver

Ashmore R, Calinescu R, Paterson C. Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. 2019 May 10.

Author

Ashmore, Rob ; Calinescu, Radu ; Paterson, Colin. / Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. 2019.

BibTeX

@techreport{cd57cde00804429eade80f43e84980bb,
title = "Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges",
abstract = "Machine learning has evolved into an enabling technology for a wide range of highly successful applications. The potential for this success to continue and accelerate has placed machine learning (ML) at the top of research, economic and political agendas. Such unprecedented interest is fuelled by a vision of ML applicability extending to healthcare, transportation, defence and other domains of great societal importance. Achieving this vision requires the use of ML in safety-critical applications that demand levels of assurance beyond those needed for current ML applications. Our paper provides a comprehensive survey of the state-of-the-art in the assurance of ML, i.e. in the generation of evidence that ML is sufficiently safe for its intended use. The survey covers the methods capable of providing such evidence at different stages of the machine learning lifecycle, i.e. of the complex, iterative process that starts with the collection of the data used to train an ML component for a system, and ends with the deployment of that component within the system. The paper begins with a systematic presentation of the ML lifecycle and its stages. We then define assurance desiderata for each stage, review existing methods that contribute to achieving these desiderata, and identify open challenges that require further research.",
author = "Rob Ashmore and Radu Calinescu and Colin Paterson",
year = "2019",
month = may,
day = "10",
language = "Undefined/Unknown",
type = "WorkingPaper",
}

RIS (suitable for import to EndNote)

TY - UNPB

T1 - Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges

AU - Ashmore, Rob

AU - Calinescu, Radu

AU - Paterson, Colin

PY - 2019/5/10

Y1 - 2019/5/10

N2 - Machine learning has evolved into an enabling technology for a wide range of highly successful applications. The potential for this success to continue and accelerate has placed machine learning (ML) at the top of research, economic and political agendas. Such unprecedented interest is fuelled by a vision of ML applicability extending to healthcare, transportation, defence and other domains of great societal importance. Achieving this vision requires the use of ML in safety-critical applications that demand levels of assurance beyond those needed for current ML applications. Our paper provides a comprehensive survey of the state-of-the-art in the assurance of ML, i.e. in the generation of evidence that ML is sufficiently safe for its intended use. The survey covers the methods capable of providing such evidence at different stages of the machine learning lifecycle, i.e. of the complex, iterative process that starts with the collection of the data used to train an ML component for a system, and ends with the deployment of that component within the system. The paper begins with a systematic presentation of the ML lifecycle and its stages. We then define assurance desiderata for each stage, review existing methods that contribute to achieving these desiderata, and identify open challenges that require further research.

AB - Machine learning has evolved into an enabling technology for a wide range of highly successful applications. The potential for this success to continue and accelerate has placed machine learning (ML) at the top of research, economic and political agendas. Such unprecedented interest is fuelled by a vision of ML applicability extending to healthcare, transportation, defence and other domains of great societal importance. Achieving this vision requires the use of ML in safety-critical applications that demand levels of assurance beyond those needed for current ML applications. Our paper provides a comprehensive survey of the state-of-the-art in the assurance of ML, i.e. in the generation of evidence that ML is sufficiently safe for its intended use. The survey covers the methods capable of providing such evidence at different stages of the machine learning lifecycle, i.e. of the complex, iterative process that starts with the collection of the data used to train an ML component for a system, and ends with the deployment of that component within the system. The paper begins with a systematic presentation of the ML lifecycle and its stages. We then define assurance desiderata for each stage, review existing methods that contribute to achieving these desiderata, and identify open challenges that require further research.

M3 - Working paper

BT - Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges

ER -