Introduce ML-Model Transparency for Active Participation
Overview
| Sustainability Dimension | Governance |
| --- | --- |
| ML Development Phase | Modeling and Training |
| ML Development Stakeholders | ML Development, Auditing & Testing |
Description
Contemporary ML models are often black boxes whose behavior, from input to output and the data processing in between, is hard to interpret (Gao & Guan, 2023). To achieve company-wide adoption and active participation, ML models must be sufficiently interpretable (Grennan et al., 2022). Interpretability refers to the degree to which a person can understand the reasoning behind a decision (Biran & Cotton, 2017; Miller, 2017). The DP “Introduce ML-Model Transparency for Active Participation” therefore encourages ML developers to apply interpretability methods during the development phase to foster this discussion; one such method is sketched below. These methods range from simple ones, such as an overview of the engineered input features, to feature-importance analyses and global post hoc explanation methods.
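As a concrete illustration, the following is a minimal sketch of one of the global post hoc methods mentioned above, permutation feature importance, using scikit-learn. The dataset and model are illustrative placeholders, not part of the DP; in practice the method would be applied to the team's own trained model and held-out data.

```python
# Minimal sketch: permutation feature importance as a global post hoc
# interpretability method. Dataset and model below are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set (assumption for this sketch).
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

A ranked overview like this can be shared with non-technical stakeholders as a starting point for discussing what the model has actually learned.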
Sources
- Gao, L., & Guan, L. (2023). Interpretability of Machine Learning: Recent Advances and Future Prospects (arXiv:2305.00537). arXiv. http://arxiv.org/abs/2305.00537
- Grennan, L., Kremer, A., Singla, A., & Zipparo, P. (2022). Explainable AI: Getting it right in business. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/why-businesses-need-explainable-ai-and-how-to-deliver-it
- Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI), 8(1), 8–13.
- Miller, T. (2017). Explanation in Artificial Intelligence: Insights from the Social Sciences (arXiv:1706.07269). arXiv. https://doi.org/10.48550/arXiv.1706.07269