Enhance Transparency through Fairness Metrics

Overview

Sustainability Dimension: Social
ML Development Phase: Deployment and Monitoring
ML Development Stakeholders: Business Stakeholder, ML Development, Auditing & Testing

Description

“Enhance Transparency through Fairness Metrics” describes improving the transparency of an ML model by calculating, analyzing, and publishing fairness metrics. To this end, Pagano et al. (2023) propose collecting metrics such as equality of opportunity, demographic parity, and individual differential fairness. In addition, a multi-differential fairness auditor can be used to examine how a black-box classifier treats different groups of individuals with similar features (Gitiaux & Rangwala, 2019). In this way, the design pattern (DP) makes an ML model's decisions more transparent.
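
The sketch below illustrates, under simple assumptions, how two of the cited group fairness metrics can be computed for a binary classifier: demographic parity (difference in positive prediction rates between groups) and equality of opportunity (difference in true positive rates). The data, function names, and the binary protected attribute are illustrative, not part of the referenced tooling; values close to zero indicate parity between groups.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates (recall on positives) between groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Illustrative data: ground-truth labels, model predictions,
# and a hypothetical binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```

Publishing such metrics alongside the model (e.g., in model documentation or monitoring dashboards) is what creates the transparency this DP aims for; more elaborate auditing, such as the multi-differential fairness auditor, extends the same idea to subgroups defined by combinations of features.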

Sources

  • Pagano, T. P., Loureiro, R. B., Lisboa, F. V. N., Peixoto, R. M., Guimarães, G. A. S., Cruz, G. O. R., Araujo, M. M., Santos, L. L., Cruz, M. A. S., Oliveira, E. L. S., Winkler, I., & Nascimento, E. G. S. (2023). Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods. Big Data and Cognitive Computing, 7(1), 15. https://doi.org/10.3390/bdcc7010015
  • Gitiaux, X., & Rangwala, H. (2019). Multi-Differential Fairness Auditor for Black Box Classifiers (arXiv:1903.07609). arXiv. https://doi.org/10.48550/arXiv.1903.07609