
Design Computationally Sparse ML Architecture

Overview

- **Sustainability Dimension:** Ecological
- **ML Development Phase:** Deployment and Monitoring
- **ML Development Stakeholders:** Software Development

Description

"Design Computationally Sparse ML Architecture" focuses on reducing the environmental costs associated with the inference phase of ML models. Regarding storage, Donovan (2020) suggests analyzing the ML architecture with respect to (a) how long data must be stored, since storage consumes a considerable amount of energy, and (b) where data should be stored; for large datasets, for example, on-premises storage may be more efficient (Donovan, 2020). Furthermore, the type of inference must be chosen, i.e., batch or real-time inference: the latter requires continuous server uptime and therefore a higher energy demand (Natarajan et al., 2022). Despite their computational limitations, multiple authors advise using edge devices due to their lower energy consumption and latencies (Zhu et al., 2022).
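The batch-versus-real-time trade-off can be sketched as a simple cost model. The sketch below is an illustrative assumption, not from the cited sources: the overhead and per-item figures (`FIXED_OVERHEAD`, `PER_ITEM_COST`) are hypothetical cost units standing in for the energy of keeping a server up and dispatching each request, and the point is only that batching amortizes the fixed per-invocation cost across many inputs:

```python
# Hypothetical cost model for batch vs. real-time (per-request) inference.
# All constants are illustrative assumptions, not measured energy figures.

FIXED_OVERHEAD = 50.0   # cost units per model invocation (uptime, dispatch, ...)
PER_ITEM_COST = 1.0     # cost units per individual prediction


def realtime_cost(n_requests: int) -> float:
    """Each request triggers its own invocation, paying the fixed overhead every time."""
    return n_requests * (FIXED_OVERHEAD + PER_ITEM_COST)


def batch_cost(n_requests: int, batch_size: int) -> float:
    """Requests are buffered and processed in batches, amortizing the fixed overhead."""
    n_batches = -(-n_requests // batch_size)  # ceiling division
    return n_batches * FIXED_OVERHEAD + n_requests * PER_ITEM_COST


if __name__ == "__main__":
    n = 1000
    print(f"real-time:  {realtime_cost(n):.0f} cost units")
    print(f"batch(100): {batch_cost(n, 100):.0f} cost units")
```

Under these toy numbers, serving 1,000 requests one at a time costs 51,000 units, while batches of 100 cost 1,500 units; the real decision, of course, also weighs the latency requirements that may mandate real-time inference.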

Sources