Eliminate Inefficiency in ML-Model Architecture
Overview
Sustainability Dimension | Ecological
ML Development Phase | Modeling and Training
ML Development Stakeholders | ML Development
Description
The DP "Eliminate Inefficiency in ML-Model Architecture" aims to reduce the energy consumption of an ML model by optimizing its energy-intensive architectural components (Lee et al., 2023; Microsoft, 2023a). One example of model optimization in the context of artificial neural networks is utilizing optimized open-source code; for instance, pre-trained initializations can lead to more energy-efficient convergence (Xu, 2022). In addition, Kumar et al. (2020) suggest using profiling software (e.g., the Java Energy Profiler and Optimizer) to obtain real-time suggestions for energy-saving adjustments.
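The energy benefit of a pre-trained initialization can be illustrated with a minimal NumPy sketch (a hypothetical toy example, not from the cited sources): on a least-squares task, gradient descent started from weights transferred from a related task reaches a target loss in far fewer iterations, and thus fewer energy-consuming compute steps, than a random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = X @ w_true + noise
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=200)

def steps_to_converge(w, lr=0.01, tol=1e-3, max_steps=10_000):
    """Count gradient-descent steps until the MSE drops below tol.

    Each step stands in for an energy-consuming training iteration.
    """
    for step in range(max_steps):
        residual = X @ w - y
        if np.mean(residual ** 2) < tol:
            return step
        w = w - lr * (2 / len(y)) * X.T @ residual
    return max_steps

# Random initialization: training starts far from the solution.
random_init = rng.normal(size=10)

# "Pre-trained" initialization: weights from a related task,
# modeled here as the true weights plus a small perturbation.
pretrained_init = w_true + 0.1 * rng.normal(size=10)

steps_random = steps_to_converge(random_init)
steps_pretrained = steps_to_converge(pretrained_init)
print(f"random init:      {steps_random} steps")
print(f"pre-trained init: {steps_pretrained} steps")
```

Fewer iterations to the same loss translates directly into less compute and therefore lower energy use, which is the mechanism behind the pattern's recommendation to reuse pre-trained weights where a related model exists.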
Sources
- Lee, J., Mukhanov, L., Molahosseini, A. S., Minhas, U., Hua, Y., Del Rincon, J. M., Dichev, K., Hong, C.-H., & Vandierendonck, H. (2023). Resource-Efficient Convolutional Networks: A Survey on Model-, Arithmetic-, and Implementation-Level Techniques. ACM Computing Surveys, 55, 1–36. https://doi.org/10.1145/3587095
- Microsoft. (2023). Code With Engineering Playbook [Book]. https://microsoft.github.io/code-with-engineering-playbook
- Xu, T. (2022). These simple changes can make AI research much more energy efficient. MIT Technology Review. https://www.technologyreview.com/2022/07/06/1055458/ai-research-emissions-energy-efficient/
- Kumar, M., Zhang, X., Liu, L., Wang, Y., & Shi, W. (2020). Energy-Efficient Machine Learning on the Edges. 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 912–921. https://doi.org/10.1109/IPDPSW50202.2020.00153