Leverage Fair and Interpretable Models
Overview
| Sustainability Dimension | Social |
| ML Development Phase | Modeling and Training |
| ML Development Stakeholders | Business Stakeholder, ML Development, Auditing & Testing |
Description
The DP “Leverage Fair and Interpretable Models” describes the prioritization of interpretable and fair ML models over black-box models whenever possible. Interpretable ML models enable individuals without a comprehensive statistical background to comprehend decisions, detect errors, and support the due diligence process (Wang et al., 2023). Several studies have shown that interpretable ML models can perform approximately as well as black-box models while providing the benefits described above (Nori et al., 2019; Wang et al., 2023). Furthermore, fairly designed models and model combinations can directly improve decision-making from a social perspective (Van Giffen et al., 2022).
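To make the pattern concrete, the sketch below trains a glass-box Explainable Boosting Machine from the InterpretML library (Nori et al., 2019) alongside a black-box gradient-boosting baseline and compares both on accuracy and a simple demographic-parity-style check. The synthetic dataset, the column names, and the `selection_rate_gap` helper are illustrative assumptions, not part of the cited work.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from interpret.glassbox import ExplainableBoostingClassifier

# Hypothetical tabular data: two predictive features plus a protected
# attribute "group" that is excluded from training and used only for
# the fairness check.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "prior_counts": rng.poisson(2, n),
    "group": rng.integers(0, 2, n),
})
y = (X["prior_counts"] + rng.normal(0, 1, n) > 2).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

features = ["age", "prior_counts"]

# Glass-box model: per-feature shape functions remain inspectable.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train[features], y_train)

# Black-box baseline for comparison.
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_train[features], y_train)

def selection_rate_gap(model, X, sensitive):
    """Demographic-parity gap: spread of positive-prediction rates across groups (illustrative helper)."""
    preds = model.predict(X[features])
    rates = pd.Series(preds).groupby(sensitive.to_numpy()).mean()
    return rates.max() - rates.min()

for name, model in [("EBM (interpretable)", ebm), ("GBM (black box)", gbm)]:
    acc = (model.predict(X_test[features]) == y_test).mean()
    gap = selection_rate_gap(model, X_test, X_test["group"])
    print(f"{name}: accuracy={acc:.3f}, selection-rate gap={gap:.3f}")

# ebm.explain_global() exposes the per-feature contributions; in a notebook,
# interpret.show(ebm.explain_global()) renders them interactively.
```

A side-by-side comparison of this kind supports the pattern's core argument: if the glass-box model reaches accuracy comparable to the black-box baseline, as reported for recidivism prediction by Wang et al. (2023), the interpretable model can be preferred without sacrificing predictive performance.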
Sources
- Wang, C., Han, B., Patel, B., & Rudin, C. (2023). In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction. Journal of Quantitative Criminology, 39(2), 519–581. https://doi.org/10.1007/s10940-022-09545-w
- Nori, H., Jenkins, S., Koch, P., & Caruana, R. (2019). InterpretML: A Unified Framework for Machine Learning Interpretability (arXiv:1909.09223). arXiv.
- Van Giffen, B., Herhausen, D., & Fahse, T. (2022). Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. Journal of Business Research, 144, 93–106. https://doi.org/10.1016/j.jbusres.2022.01.076