Cloud computing systems often rely on large server farms to provide their services, and energy consumption is becoming a major issue for data centres operating 24 hours a day, 7 days a week.
Energy efficiency and physical size are key issues for server cards used in warehouse-sized data centres. These factors not only affect the operational costs and ecological footprint, but also constrain the possibilities for constructing or expanding data centres.
In order to demonstrate the possible gain of using low-power processors and energy-aware workload management techniques, the Embedded Systems Laboratory built a cluster of low-power, low-cost single-board computers. Several studies show that typical servers have a relatively low average utilization rate but at the same time a relatively large load fluctuation over a period of several days. The use of slower but more energy-efficient cores enables system-level power management, which can dynamically match the computational capacity of the cluster to the load fluctuation at a much finer granularity than server-grade cores allow.
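The granularity argument can be illustrated with a minimal sketch (not the laboratory's actual power manager, and with purely hypothetical capacity figures): a cluster of many small nodes can power on just enough nodes to track the load curve, whereas a single server-grade CPU offers only all-or-nothing capacity, so its idle headroom is wasted.

```python
import math

def nodes_needed(load, node_capacity):
    """Number of small nodes to power on to cover the current load."""
    return max(1, math.ceil(load / node_capacity))

def provisioned_capacity(load, node_capacity):
    """Capacity actually switched on when serving `load`."""
    return nodes_needed(load, node_capacity) * node_capacity

# Hypothetical daily load trace (requests/s), served either by
# 16 low-power nodes of capacity 100 each, or one big server of 1600.
load_trace = [150, 420, 980, 1530, 640, 210]
small_cluster = [provisioned_capacity(l, 100) for l in load_trace]
big_server = [1600 for _ in load_trace]

# Idle headroom (provisioned minus used) is a proxy for wasted energy.
waste_small = sum(c - l for c, l in zip(small_cluster, load_trace))
waste_big = sum(c - l for c, l in zip(big_server, load_trace))
print(waste_small, waste_big)  # the fine-grained cluster wastes far less
```

The point of the sketch is only the shape of the comparison: with node-level on/off control, provisioned capacity steps up and down in units of one small node, so the gap between capacity and load stays bounded by one node's capacity at each sample.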
Measurements done by the research group show that low-power, low-cost ARM-based cores are 3 to 11 times more energy efficient than server-grade cores for typical cloud and server applications. In other words, with the same amount of energy they can deliver 3 to 11 times more service. The cluster will be used as a proof of concept demonstrating the soundness of using a set of low-power, low-cost CPUs instead of a server-grade CPU for a set of server applications. It will also be used to demonstrate energy-aware workload management techniques that distribute work over a larger number of processors.