Towards model-based bias mitigation in machine learning

Research output: Chapter in Book/Report/Conference proceeding: Conference contribution

Abstract

Models produced by machine learning are not guaranteed to be free from bias, particularly when they are trained and tested on data produced in discriminatory environments. Such bias can be unethical, especially when the data contains sensitive attributes such as sex, race, or age. Existing approaches help mitigate these biases by providing bias metrics and mitigation algorithms. The challenge is that users have to implement their own code in general-purpose or statistical programming languages, which can be demanding for users with little experience in programming or in fairness in machine learning. We present FairML, a model-based approach that facilitates bias measurement and mitigation with reduced software development effort. Our evaluation shows that FairML requires fewer lines of code to produce measurement values comparable to those produced by the baseline code.
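To make the notion of "bias metrics" in the abstract concrete, below is a minimal, self-contained sketch of two widely used group-fairness metrics (statistical parity difference and disparate impact) computed over a classifier's binary predictions split by a binary sensitive attribute. This is illustrative only; it is not FairML's actual API, and the function names and the example data are assumptions for the sketch.

```python
# Illustrative sketch (NOT FairML's actual API): two common bias metrics
# for binary predictions split by a binary sensitive attribute,
# where s == 1 marks the privileged group and s == 0 the unprivileged group.

def statistical_parity_difference(y_pred, sensitive):
    # P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged);
    # 0.0 indicates parity, negative values disadvantage the unprivileged group.
    priv = [y for y, s in zip(y_pred, sensitive) if s == 1]
    unpriv = [y for y, s in zip(y_pred, sensitive) if s == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

def disparate_impact(y_pred, sensitive):
    # Ratio of favourable-outcome rates (unprivileged / privileged);
    # values near 1.0 indicate parity, below ~0.8 are often flagged as biased.
    priv = [y for y, s in zip(y_pred, sensitive) if s == 1]
    unpriv = [y for y, s in zip(y_pred, sensitive) if s == 0]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

if __name__ == "__main__":
    # Hypothetical predictions: privileged group receives the favourable
    # outcome 3/4 of the time, unprivileged group only 1/4 of the time.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
    sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
    print(statistical_parity_difference(y_pred, sensitive))  # -0.5
    print(disparate_impact(y_pred, sensitive))  # 0.333...
```

Mitigation algorithms then adjust the data, the training procedure, or the predictions until such metrics move toward their fair values (0.0 and 1.0, respectively).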

Original language: English
Title of host publication: Proceedings - 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022
Publisher: Association for Computing Machinery, Inc
Pages: 143-153
Number of pages: 11
ISBN (Electronic): 9781450394666
DOIs
Publication status: Published - 23 Oct 2022
Event: 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022 - Montreal, Canada
Duration: 23 Oct 2022 - 28 Oct 2022

Publication series

Name: Proceedings - 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022

Conference

Conference: 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2022
Country/Territory: Canada
City: Montreal
Period: 23/10/22 - 28/10/22

Bibliographical note

Funding Information:
This work has been funded through the York-Maastricht partnership's Responsible Data Science by Design programme (https://www.york.ac.uk/maastricht). We thank the Maastricht team for all their valuable contributions.

Publisher Copyright:
© 2022 ACM.

Keywords

  • bias metrics
  • bias mitigation
  • generative programming
  • machine learning
  • model-driven engineering
