Impact of Model Ensemble On the Fairness of Classifiers in Machine Learning

Document Type

Conference Proceeding

Publication Date

5-21-2021

Abstract

Machine Learning (ML) models are trained on historical data that may contain societal stereotypes (biases). These biases are inherently learned by the ML models, which may eventually result in discrimination against certain subjects, for instance, people with protected characteristics (race, gender, age, religion, etc.). Since the decisions made by ML models can affect people's lives, the fairness of these models becomes crucially important. When training a model with fairness constraints, a significant loss in accuracy relative to the unconstrained model may be unavoidable. Reducing the trade-off between fairness and accuracy is an active research question in the fair ML community, i.e., providing models with high accuracy and as little bias as possible. In this paper, we extensively investigate fairness metrics across different ML models and study the impact of ensemble models on fairness. To this end, we compare different ensemble strategies and empirically show which strategy is preferable for each fairness metric. Furthermore, we propose a novel weighting technique that balances fairness and accuracy: in essence, we assign each classifier a weight proportional to its performance in terms of fairness and accuracy. Our experimental results show that this weighting technique reduces the trade-off between fairness and accuracy in ensemble models.
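
The abstract does not spell out the exact weighting formula, so the following is a minimal sketch of one plausible reading: each classifier receives a weight proportional to a convex combination of its accuracy and a fairness score (e.g., one minus its demographic-parity gap), and the ensemble predicts by weighted majority vote. The names (ensemble_weights, weighted_vote) and the mixing parameter alpha are illustrative assumptions, not the authors' actual method.

    # Hypothetical sketch of fairness-aware ensemble weighting; the exact
    # combination rule used in the paper is not given in this abstract.
    import numpy as np

    def ensemble_weights(accuracies, fairness_scores, alpha=0.5):
        # accuracies, fairness_scores: values in [0, 1], one per classifier
        # (higher fairness_score = less bias, e.g. 1 - demographic parity gap).
        # alpha trades off accuracy (alpha=1) against fairness (alpha=0).
        scores = alpha * np.asarray(accuracies) + (1 - alpha) * np.asarray(fairness_scores)
        return scores / scores.sum()  # normalize so weights sum to 1

    def weighted_vote(predictions, weights):
        # predictions: (n_classifiers, n_samples) array of 0/1 labels.
        avg = weights @ predictions  # weighted mean vote per sample
        return (avg >= 0.5).astype(int)

    # Example: three classifiers with differing accuracy/fairness profiles.
    w = ensemble_weights([0.90, 0.85, 0.80], [0.70, 0.90, 0.95])
    preds = np.array([[1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 0, 1]])
    print(w)                       # e.g. [0.314 0.343 0.343]
    print(weighted_vote(preds, w)) # [1 1 0 1]

Under this assumed scheme, sweeping alpha from 0 to 1 traces out the fairness-accuracy trade-off curve, which is one way to realize the balance the abstract describes.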

ISBN

978-1-7281-5934-8

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Disciplines

Computer Sciences

Keywords

Measurement, Training, Machine learning, Data models

Indexed in Scopus

no

Open Access

no
