Abstract
As the use of machine learning techniques by organisations has become more common, the need for software tools that provide the robustness required in a production environment has become apparent. In this paper, we review the relevant literature and outline a research agenda for the development of a low-code solution that continuously monitors the performance of a deployed machine learning model.
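The paper outlines a research agenda rather than a finished tool, but the kind of continuous check such a monitoring solution might run can be illustrated. The sketch below computes the population stability index (PSI), one widely used data-drift metric, between a training-time sample of a feature and a window of production values. It is not taken from the paper; the function name, bin count, and smoothing constant are illustrative assumptions.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Population Stability Index (PSI) between a reference sample
    (e.g. the training data) and a current production sample of one
    continuous feature. Larger values indicate stronger data drift.
    (Illustrative sketch; not from the paper.)"""
    # Bin edges come from the quantiles of the reference distribution,
    # with open outer bins so out-of-range production values still count.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Small additive smoothing avoids log(0) / division by zero in empty bins.
    eps = 1e-4
    ref_pct = (ref_counts + eps) / (ref_counts.sum() + eps * bins)
    cur_pct = (cur_counts + eps) / (cur_counts.sum() + eps * bins)

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)    # reference feature values
    live = rng.normal(0.5, 1.2, 10_000)     # shifted production values
    print(f"PSI, no drift:   {population_stability_index(train, train):.4f}")
    print(f"PSI, with drift: {population_stability_index(train, live):.4f}")
```

A monitoring service would typically evaluate such a metric per feature on a schedule; a common rule of thumb treats PSI below 0.1 as stable and values above roughly 0.25 as significant drift worth alerting on.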
Original language | English |
---|---|
Title of host publication | Proceedings - 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Companion Proceedings |
Publisher | ACM |
Pages | 423-430 |
Number of pages | 8 |
ISBN (Electronic) | 9781450381352 |
DOIs | |
Publication status | Published - 26 Oct 2020 |
Event | 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Virtual, Online, Canada. Duration: 16 Oct 2020 → 23 Oct 2020 |
Publication series
Name | Proceedings - 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Companion Proceedings |
---|---|
Conference
Conference | 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 |
---|---|
Country/Territory | Canada |
City | Virtual, Online |
Period | 16/10/20 → 23/10/20 |
Bibliographical note
Funding Information: This paper disseminates results from the Lowcomote project, which received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813884.
Publisher Copyright: © 2020 ACM.
Keywords
- Concept drift
- Data drift
- Machine learning
- Model monitoring
- Software engineering