“An Experimental Survey of Energy Management across the Stack”, 2014-10-15:
Modern demand for energy-efficient computation has spurred research at all levels of the stack, from devices to microarchitecture, operating systems, compilers, and languages. Unfortunately, this breadth has resulted in a disjointed space, with technologies at different levels of the system stack rarely compared, let alone coordinated.
This work begins to remedy the problem, conducting an experimental survey of the present state of energy management across the stack. Focusing on settings that are exposed to software, we measure the total energy, average power, and execution time of 41 benchmark applications in 220 configurations, across a total of 200,000 program executions.
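The survey's exact measurement apparatus isn't described in this excerpt, but the three reported metrics are directly related: average power is total energy divided by execution time. As a minimal sketch of software-level measurement on Linux, the example below reads Intel's RAPL energy counter through the powercap sysfs interface (the sysfs path, counter availability, and required privileges are assumptions; the paper may have used different instrumentation):

```python
import time

def read_rapl_energy_uj(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Read the package energy counter (microjoules) from Linux's RAPL
    powercap interface; return None if the interface is unavailable."""
    try:
        with open(path) as f:
            return int(f.read())
    except (OSError, ValueError):
        return None

def measure(workload):
    """Run workload() once and return (total_energy_J, avg_power_W, elapsed_s).

    Energy and power are None when no RAPL counter can be read, so the
    sketch still reports execution time on machines without powercap.
    """
    e0 = read_rapl_energy_uj()
    t0 = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - t0
    e1 = read_rapl_energy_uj()
    if e0 is None or e1 is None:
        return None, None, elapsed
    energy_j = (e1 - e0) / 1e6  # counter wraparound is ignored in this sketch
    return energy_j, energy_j / elapsed, elapsed
```

A production harness would pin CPU frequency settings, repeat runs to average out noise, and handle counter wraparound; this sketch only illustrates how the three surveyed metrics are derived from one another.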
Some of the more important findings of the survey:
Effective parallelization and compiler optimizations have the potential to save far more energy than Linux’s frequency tuning algorithms.
Certain non-complementary energy strategies can undercut each other’s savings by half when combined.
While the power impacts of most strategies remain constant across applications, the runtime impacts vary, resulting in inconsistent energy impacts.
See Also:
Understanding sources of inefficiency in general-purpose chips
There’s plenty of room at the Top: What will drive computer performance after Moore’s law?
Proebsting’s Law: Compiler Advances Double Computing Power Every 18 Years
Implications of Historical Trends in the Electrical Efficiency of Computing
Google-Wide Profiling: A Continuous Profiling Infrastructure for Data Centers