Leverage Fair and Interpretable Models

Overview

  • Sustainability Dimension: Social
  • ML Development Phase: Modeling and Training
  • ML Development Stakeholders: Business Stakeholder, ML Development, Auditing & Testing

Description

The DP “Leverage Fair and Interpretable Models” recommends prioritizing interpretable and fair ML models over black-box models whenever possible. Interpretable ML models enable individuals without a deep statistical background to understand decisions, detect errors, and support the due-diligence process (Wang et al., 2023). Several papers have shown that interpretable ML models can perform approximately as well as black-box models while still providing the benefits outlined above (Nori et al., 2019; Wang et al., 2023). Furthermore, fairly designed models and model combinations can directly improve decision-making from a social perspective (Van Giffen et al., 2022).
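
As a rough illustration of how this DP might be applied, the following Python sketch trains a glass-box Explainable Boosting Machine from InterpretML (Nori et al., 2019) instead of a black-box model, inspects its global explanation, and computes a simple demographic-parity gap as a fairness check. The synthetic data, feature names, and the `group` attribute are illustrative assumptions, not part of the DP itself.

```python
# Minimal sketch: interpretable (glass-box) model plus a basic fairness indicator.
# Dataset, feature names, and the "group" sensitive attribute are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

# Illustrative synthetic data: two numeric features and one binary sensitive attribute.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 1_000),
    "tenure_years": rng.integers(0, 30, 1_000),
    "group": rng.integers(0, 2, 1_000),  # hypothetical sensitive attribute
})
y = (X["income"] + 2_000 * X["tenure_years"]
     + rng.normal(0, 10_000, 1_000) > 70_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glass-box model: additive per-feature shape functions that stakeholders
# can inspect directly. The sensitive attribute is excluded from the inputs.
features = ["income", "tenure_years"]
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train[features], y_train)

# Global explanation: per-feature contribution curves, readable without ML expertise.
global_explanation = ebm.explain_global()

# Simple fairness indicator: difference in positive-prediction rates between groups
# (demographic-parity gap), computed by hand to avoid extra dependencies.
y_pred = ebm.predict(X_test[features])
rates = pd.Series(y_pred).groupby(X_test["group"].values).mean()
print("Positive-prediction rate per group:\n", rates)
print("Demographic parity gap:", abs(rates.loc[0] - rates.loc[1]))
```

In practice, the same pattern applies to any interpretable model family (e.g., sparse linear models, decision lists, or generalized additive models), and the fairness check would use the organization's actual protected attributes and metrics rather than the illustrative gap shown here.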

Sources

  • Wang, C., Han, B., Patel, B., & Rudin, C. (2023). In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction. Journal of Quantitative Criminology, 39(2), 519–581. https://doi.org/10.1007/s10940-022-09545-w
  • Nori, H., Jenkins, S., Koch, P., & Caruana, R. (2019). InterpretML: A Unified Framework for Machine Learning Interpretability (arXiv:1909.09223). arXiv.
  • Van Giffen, B., Herhausen, D., & Fahse, T. (2022). Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. Journal of Business Research, 144, 93–106. https://doi.org/10.1016/j.jbusres.2022.01.076