Conduct Fairness Evaluation
Overview

| Attribute | Value |
| --- | --- |
| Sustainability Dimension | Social |
| ML Development Phase | Modeling and Training |
| ML Development Stakeholders | Business Stakeholder, Domain Expert, ML Development, Auditing & Testing |
Description
“Conduct Fairness Evaluation” is a design pattern (DP) for fairness-driven evaluation of ML models. Evaluating candidate models against fairness metrics throughout the training stages helps developers identify and mitigate bias early. Several toolkits support this practice: Bellamy et al. (2019) introduce AIF360, Weerts et al. (2023) introduce Fairlearn, and Pagano et al. (2023) review these alongside tools such as TensorFlow Responsible AI and Aequitas.
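As a concrete illustration, the sketch below disaggregates evaluation metrics by a sensitive feature using Fairlearn (Weerts et al., 2023). The toy labels, predictions, and two-group sensitive feature are hypothetical stand-ins for a real model's validation outputs; `MetricFrame`, `selection_rate`, and `demographic_parity_difference` are part of Fairlearn's public API.

```python
# Minimal fairness-evaluation sketch with Fairlearn; the data below is
# a hypothetical stand-in for a real model's validation predictions.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import (
    MetricFrame,
    selection_rate,
    demographic_parity_difference,
)

# Hypothetical binary-classifier outputs and a sensitive feature
# with two groups, "A" and "B".
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
sensitive = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])

# Disaggregate metrics per sensitive group to surface disparities.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(frame.by_group)

# Scalar summary: largest gap in selection rates between groups.
dpd = demographic_parity_difference(
    y_true, y_pred, sensitive_features=sensitive
)
print(f"Demographic parity difference: {dpd:.2f}")
```

Run on each training iteration or as part of a CI check, a disaggregated evaluation like this makes fairness regressions surface as early as accuracy regressions do.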
Sources
- Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., & Zhang, Y. (2019). Think Your Artificial Intelligence Software Is Fair? Think Again. IEEE Software, 36(4), 76–80. https://doi.org/10.1109/MS.2019.2908514
- Pagano, T. P., Loureiro, R. B., Lisboa, F. V. N., Peixoto, R. M., Guimarães, G. A. S., Cruz, G. O. R., Araujo, M. M., Santos, L. L., Cruz, M. A. S., Oliveira, E. L. S., Winkler, I., & Nascimento, E. G. S. (2023). Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods. Big Data and Cognitive Computing, 7(1), 15. https://doi.org/10.3390/bdcc7010015
- Weerts, H., Dudík, M., Edgar, R., Jalali, A., Lutz, R., & Madaio, M. (2023). Fairlearn: Assessing and Improving Fairness of AI Systems. Journal of Machine Learning Research, 24, 1–8.