Towards a low-code solution for monitoring machine learning model performance

Panagiotis Kourouklidis, Dimitris Kolovos, Nicholas Matragkas, Joost Noppen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As the use of machine learning techniques by organisations has become more common, the need for software tools that provide the robustness required in a production environment has become apparent. In this paper, we review relevant literature and outline a research agenda for the development of a low-code solution for monitoring the performance of a deployed machine learning model on a continuous basis.
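As an illustration of the kind of continuous monitoring the paper targets, the sketch below flags data drift by comparing each feature's live distribution against a reference sample using a two-sample Kolmogorov-Smirnov test. This is a minimal, generic example assuming NumPy and SciPy; the function name, significance threshold, and sample sizes are illustrative assumptions, not the approach proposed in the paper.

```python
# Minimal data drift check, a common building block of continuous model
# monitoring (illustrative only; not the paper's proposed solution).
import numpy as np
from scipy import stats

def detect_drift(reference: np.ndarray, live: np.ndarray,
                 alpha: float = 0.01) -> list[bool]:
    """Flag drift per feature; arrays are (n_samples, n_features)."""
    drifted = []
    for i in range(reference.shape[1]):
        # Null hypothesis: both samples come from the same distribution.
        result = stats.ks_2samp(reference[:, i], live[:, i])
        drifted.append(result.pvalue < alpha)
    return drifted

# Example: the second feature's distribution shifts in production.
rng = np.random.default_rng(0)
ref = rng.normal(0, 1, size=(1000, 2))
live = np.column_stack([rng.normal(0, 1, 1000),
                        rng.normal(0.5, 1, 1000)])
print(detect_drift(ref, live))  # e.g. [False, True]
```

In practice such a check would run on a schedule against sliding windows of production data, raising an alert when a feature drifts, which is the kind of workflow a low-code monitoring solution could let users configure without writing this code by hand.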

Original language: English
Title of host publication: Proceedings - 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Companion Proceedings
Publisher: ACM
Pages: 423-430
Number of pages: 8
ISBN (Electronic): 9781450381352
DOIs:
Publication status: Published - 26 Oct 2020
Event: 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Virtual, Online, Canada
Duration: 16 Oct 2020 – 23 Oct 2020

Publication series

Name: Proceedings - 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Companion Proceedings

Conference

Conference: 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020
Country/Territory: Canada
City: Virtual, Online
Period: 16/10/20 – 23/10/20

Bibliographical note

Funding Information:
This paper disseminates results from the Lowcomote project, which received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813884.

Publisher Copyright:
© 2020 ACM.

Keywords

  • Concept drift
  • Data drift
  • Machine learning
  • Model monitoring
  • Software engineering
