Energy-Efficient Cache Partitioning Using Machine Learning for Embedded Systems
Samar Nour, Shahira M. Habashy, Sameh A. Salem | Pages: 285–300

Abstract— Embedded device applications are now often partially correlated and share platform resources. Concurrent execution and resource sharing can cause memory-access conflicts, especially in the Last-Level Cache (LLC). Managing the LLC is a promising way to improve system performance on multicore embedded systems, since it reduces the number of high-latency main-memory accesses. Cache partitioning is now supported on commercial devices, allowing software to make better use of the LLC and to conserve energy. This paper proposes a new energy-optimization model for embedded multicore systems based on a reconfigurable artificial neural network LLC architecture. The proposed model uses a machine-learning approach to drive LLC reconfiguration: at runtime, it predicts each task's LLC partitioning factor for the next interval. The experimental results show that, compared with other algorithms, the proposed model reduces energy consumption by 28% and the LLC miss rate by 33%.
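The abstract describes converting predicted per-task partitioning factors into an actual split of LLC ways each interval. The paper's model and predictor are not reproduced here; the sketch below only illustrates the allocation step under assumed semantics: given the factors the predictor would emit for the next interval, assign every task an integer number of cache ways, at least one each, summing to the total way count. All names (`allocate_ways`, `factors`, `total_ways`) are illustrative, not from the paper.

```python
def allocate_ways(factors, total_ways):
    """Turn predicted per-task partitioning factors into an integer
    split of LLC ways (illustrative sketch, not the paper's method).

    Each task receives at least one way; any surplus or deficit after
    flooring is settled in order of fractional demand."""
    n = len(factors)
    assert total_ways >= n, "need at least one way per task"
    s = sum(factors) or 1.0
    raw = [f / s * total_ways for f in factors]      # ideal real-valued shares
    ways = [max(1, int(r)) for r in raw]             # floor, but never below 1
    diff = total_ways - sum(ways)
    # settle rounding: tasks with the largest unmet demand gain (or lose) first
    order = sorted(range(n), key=lambda i: raw[i] - ways[i], reverse=True)
    idx = 0
    while diff != 0:
        i = order[idx % n]
        if diff > 0:
            ways[i] += 1
            diff -= 1
        elif ways[i] > 1:
            ways[i] -= 1
            diff += 1
        idx += 1
    return ways
```

For example, with three tasks whose predicted factors are 0.2, 0.5, and 0.3 on an 8-way LLC, the split is `[2, 4, 2]`. A hardware way-mask register would then be programmed from this split at the start of the interval.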