Abstract
Models produced by machine learning are not guaranteed to be free from bias, particularly when they are trained and tested on data produced in discriminatory environments. Such bias can be unethical, especially when the data contain sensitive attributes such as sex, race, or age. Several approaches help mitigate these biases by providing bias metrics and mitigation algorithms. The challenge is that users have to implement their code in general-purpose or statistical programming languages, which can be demanding for users with little experience in programming or in fairness in machine learning. We present FairML, a model-based approach that facilitates bias measurement and mitigation with reduced software development effort. Our evaluation shows that FairML requires fewer lines of code to produce measurement values comparable to those produced by the baseline code.
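To illustrate the kind of hand-written baseline code the abstract refers to, the sketch below computes one widely used bias metric, statistical parity difference, in plain Python. This is not FairML's actual code; the function name and the toy data are illustrative assumptions.

```python
# Illustrative sketch only (not FairML's implementation): computing the
# statistical parity difference bias metric by hand, the sort of
# boilerplate a user would otherwise write in a general/statistical
# programming language.

def statistical_parity_difference(labels, groups, favourable=1, privileged=1):
    """P(favourable | unprivileged) - P(favourable | privileged).

    0 indicates parity; a negative value means the unprivileged group
    receives the favourable outcome less often than the privileged one.
    """
    priv = [l for l, g in zip(labels, groups) if g == privileged]
    unpriv = [l for l, g in zip(labels, groups) if g != privileged]
    rate = lambda xs: sum(1 for x in xs if x == favourable) / len(xs)
    return rate(unpriv) - rate(priv)

# Toy example: predicted outcomes for 8 individuals; group 1 is privileged.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(preds, groups))  # 0.25 - 0.75 = -0.5
```

Even this single metric requires group filtering and rate computation; a model-based approach like FairML aims to generate such measurement code from higher-level specifications.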
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022 |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 143-153 |
| Number of pages | 11 |
| ISBN (Electronic) | 9781450394666 |
| DOIs | |
| Publication status | Published - 23 Oct 2022 |
| Event | 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022 - Montreal, Canada. Duration: 23 Oct 2022 → 28 Oct 2022 |
Publication series

| Name | Proceedings - 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022 |
| --- | --- |
Conference

| Conference | 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022 |
| --- | --- |
| Country/Territory | Canada |
| City | Montreal |
| Period | 23/10/22 → 28/10/22 |
Bibliographical note
Funding Information: This work has been funded through the York-Maastricht partnership’s Responsible Data Science by Design programme (https://www.york.ac.uk/maastricht). We thank the Maastricht team for all their valuable contributions.
Publisher Copyright:
© 2022 ACM.
Keywords
- bias metrics
- bias mitigation
- generative programming
- machine learning
- model-driven engineering