Power and compilers at the NMI
I recently attended an event focused on power and embedded software, hosted by the NMI in the UK, where I had been invited to make a presentation. My session was titled “Power Management in a Real Time Operating System”. If you would like a copy of the slides, please email me.
Of course, apart from giving my own presentation, I was interested in the other sessions, in particular one about compilers and power consumption …
The session that caught my eye was titled “The Impact of Compiler Options on Energy Consumption in Embedded Platforms”. This was a report on some university research, which sought a correlation between the use of optimization options on a GNU compiler and the energy consumed during execution of the resulting code. The approach was to build the code of a selection of [rather arbitrary] benchmarks using a large number of combinations of compiler options, then execute the code and measure the power consumption. They also used a number of different target systems with different architectures [including multicore].
My interpretation of their conclusions is that there were no clear conclusions. I felt that the research was not well designed or thought through. The use of compiler options was not systematic – they even used one, the function of which was unclear to the researchers. To my mind, this was equivalent to randomly mixing chemicals in the hope of creating a wonder drug. Their use of multiple targets [in the first instance] just served to complicate matters.
I do, however, feel that this is an interesting area of research; it just needs a slightly different approach, largely focused on understanding compiler optimizations, hypothesizing about their effect on power consumption and testing those hypotheses. In general, there are three kinds of optimization:
1. Speed. Faster code is often larger, but it is very likely to consume less energy to perform a specific task, simply because the task completes sooner [see the sketch after this list].
2. Size. Compact code is usually slower, so it is likely to consume more energy to complete a given task. However, there is likely to be an opportunity to reduce the memory size, which represents a power saving. [This possibility was not addressed in the research.]
3. Lucky. Sometimes code turns out both smaller and faster, which obviously combines the benefits of (1) and (2).
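To make the speed/size trade-off concrete, here is a minimal sketch of the kind of benchmark and GNU compiler invocations involved. It is purely my own illustration [the kernel, sizes and flags are assumptions, not anything taken from the research]: -O3 asks GCC to optimize for speed, -Os to optimize for size, and on a real board the measured execution time would be combined with the measured supply power to estimate energy per task.

```c
/*
 * Hypothetical builds of this file, bench.c:
 *   gcc -O3 -o bench_speed bench.c    [optimize for speed]
 *   gcc -Os -o bench_size  bench.c    [optimize for size]
 *
 * Comparing the two binaries with the "size" utility shows the code-size
 * cost of -O3; timing the run [and, on real hardware, logging the supply
 * current] lets energy be estimated as average power multiplied by time.
 */
#include <stdio.h>
#include <time.h>

#define N    10000
#define REPS 10000

/* A simple kernel that the optimizer may unroll or vectorize at -O3. */
static long checksum(const int *data, int count)
{
    long sum = 0;
    for (int i = 0; i < count; i++) {
        sum += (long)data[i] * data[i];
    }
    return sum;
}

int main(void)
{
    static int data[N];
    for (int i = 0; i < N; i++) {
        data[i] = i;
    }

    clock_t start = clock();
    long result = 0;
    for (int rep = 0; rep < REPS; rep++) {
        data[rep % N] = rep;   /* perturb the input so the work is not hoisted */
        result += checksum(data, N);
    }
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("checksum = %ld, elapsed = %.3f s\n", result, seconds);
    return 0;
}
```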
Interesting research would be to find out whether fast code really does offer the best overall energy consumption. This could result in compilers being provided with a “compile for low power” option.
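The arithmetic behind that hypothesis is simply that energy is power multiplied by time. With purely illustrative numbers [my own, not measurements from the research]: if the speed-optimized build draws 10% more power but completes the task in 70% of the time, it uses roughly three quarters of the energy:

\[
E = P \times t, \qquad
\frac{E_{\text{speed}}}{E_{\text{size}}} = \frac{1.1\,P \times 0.7\,t}{P \times t} \approx 0.77
\]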
Other sessions at the event were interesting in various ways. There was a very inspiring session about a project to encourage young people to start writing software. Another session was presented by a guy who looked and sounded exactly like the comedian David Mitchell [who is well known in the UK, at least]. A session from Intel looked at the technology that they provide to enable software to optimize system power consumption. This included the random fact that the energy in a banana is equivalent to that held in two fully charged laptop batteries. I do not believe that Intel are announcing banana-powered computing …