Ensure Continuous (Human) Monitoring for Fairness
Overview
| Attribute | Value |
| --- | --- |
| Sustainability Dimension | Social |
| ML Development Phase | Deployment and Monitoring |
| ML Development Stakeholders | Business Stakeholder, Software Development, Auditing & Testing |
Description
The design pattern (DP) "Ensure Continuous (Human) Monitoring for Fairness" entails continuously monitoring ML model predictions and decisions in the real-world environment (Fahse et al., 2021). This can be achieved by establishing an ongoing process that re-evaluates the ML model for fairness of predictions whenever new data is integrated and the model is retrained (Burkhardt et al., 2019). In addition, humans can interrogate ML model decisions for plausibility at fixed intervals (van Giffen et al., 2022).
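The re-evaluation step described above can be illustrated with a minimal sketch. The example below is an assumption-laden illustration, not part of the sources: it computes the demographic parity gap (absolute difference in positive-prediction rates between two groups) for each new batch of predictions and flags the batch for human review when the gap exceeds a threshold. The function names, the binary group encoding, and the 0.1 threshold are all illustrative choices.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary protected attribute (0/1).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(abs(rate_a - rate_b))

def monitor_batch(y_pred, group, threshold=0.1):
    """Evaluate one batch of live predictions; flag it for human review
    when the fairness gap exceeds the (illustrative) threshold."""
    gap = demographic_parity_difference(y_pred, group)
    return {"parity_gap": gap, "needs_human_review": gap > threshold}
```

In practice such a check would run as part of the retraining or serving pipeline, with flagged batches routed to the fixed-interval human plausibility review the pattern calls for.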
Sources
- Fahse, T., Huber, V., & Van Giffen, B. (2021). Managing Bias in Machine Learning Projects. Innovation Through Information Systems, 47, 94–109. https://doi.org/10.1007/978-3-030-86797-3_7
- Burkhardt, R. (2019). Leading your organization to responsible AI. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/leading-your-organization-to-responsible-ai
- Van Giffen, B., Herhausen, D., & Fahse, T. (2022). Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. Journal of Business Research, 144, 93–106. https://doi.org/10.1016/j.jbusres.2022.01.076