New Delhi: A principal research scientist at Hewlett Packard Labs in California, Partha Ranganathan, was named by the Massachusetts Institute of Technology science magazine Technology Review, as one of the world’s top 35 young innovators for the year 2007. An Indian Institute of Technology, Madras, alumnus and an avid fan of former president Dr. A.P.J. Abdul Kalam’s India 2020 vision, Ranganathan’s research interests are low-power system design, system architecture, and performance evaluation. Edited excerpts from an e-mail interview:
Congratulations on being selected for the TR 35 2007 Young Innovator award. How does it feel to be topping such a prestigious list?
I am delighted at this recognition. I have a lot of respect and admiration for the past winners of this award (the founders of Google, Linus Torvalds, etc.) and am happy to share something in common with them!
Computer caring: Scientist Partha Ranganathan looks into a massive computing spectrum - from servers and supercomputers - trying to check adverse effects and enhance performance
At the same time, I am also extremely mindful that the work and impact recognized by this award are not something I can take entire credit for. This would not have been possible without the support, encouragement, and hard work of a lot of other people. It is a team effort, and future achievements and breakthroughs will come only with their support.
How did you hit upon the idea of delving into energy-efficient and environment-friendly computing?
From a consumer perspective, improved battery life from our technologies dramatically enhances the usability of mobile devices. Studies show that mobile users often rate the need for longer battery life higher than the need for better performance. I think most of us can relate to that. Power management also impacts broader system functionality, for example, form factor, weight, and packaging, as well as aspects of the user experience such as multiple power adapters and the feature set of these devices. Similarly, in the enterprise space, the capital and recurring costs associated with power delivery, electricity consumption, and heat extraction are dramatically increasing.
A recent Wall Street Journal article reports that, “in 2006, businesses world-wide spent about $55.4 billion on new servers; to power and cool those machines, they spent $29 billion, more than half the cost of the equipment itself -- and that number is rising”. Additionally, better power and heat management allows for more consolidation and reliability in future data centres. This can, in turn, lead to more cost-effective enterprise backbones, enabling more widespread deployment of computing services, even for the cost structures of emerging markets like India and China.
How significantly can computing contribute to greenhouse gas emissions?
Reducing power consumption also has broader societal impact through its environmental benefits. For instance, the energy consumed by compute equipment contributes to more than four million tonnes of carbon dioxide emissions annually. Given the recent emphasis on sustainability and global warming, “green” computing environments with effective power management are likely to be critical components of future solutions. Indeed, environmental agencies worldwide (Top Runner, Energy Star) are already driving aggressive standards to encourage energy-efficient system designs.
What are the specific areas in power and heat management that you are dealing with for better computing in future? What kind of changes will it trigger in the next generation computing world?
Across the computing spectrum - from servers and supercomputers to personal computers and cell phones - power consumption is rapidly emerging as one of the key limiting factors impeding greater adoption of more advanced computing solutions. We were among those who realized early on (long before it became evident to the rest of the industry) that a key impediment to further innovation in the compute fabric will be the problem of power and heat management in future computing devices, and our recent research has been focusing on that.
The project has addressed the power challenge holistically and pioneered several innovative methods to systematically address key bottlenecks in future systems. Some key recent contributions include:
Ensemble-level power management for future blade servers that enforces the power budget in software, and holistically across a collection of servers (e.g., at the enclosure level). For current enterprise deployments, this can reduce power by a factor of two, translating to millions of dollars of savings for large data centres.
Facilities-aware data centre resource provisioning that adapts workload scheduling to optimize for power and cooling costs in addition to performance. For example, temperature-aware resource scheduling can move heat-generating workloads to cooler locations of the data centre and reduce cooling costs by half. Again, for large data centres, this can dramatically reduce operating costs.
Energy-adaptive displays and energy-aware user interfaces that pioneered the notion of displays that adapt their energy consumption based on the scope of user interest. As a result, the display battery life on mobile devices like cell phones and MP3 players improves two-fold to twenty-fold.
Heterogeneous (or ‘asymmetric’) multi-core architectures that design core diversity into chip multiprocessors (such as the recently announced dual-core and quad-core processors from Intel and AMD) to better match workload requirements to architectural efficiency. This design enables two-fold to ten-fold improvements in power, for the same performance, and creates a power-efficient processing foundation for compute platforms.
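To make the ensemble-level idea above concrete, here is a minimal, purely illustrative sketch of enforcing a shared power budget in software across an enclosure of blades. The budget value, demand figures, and function names are assumptions for illustration, not details of HP's actual implementation.

```python
# Hypothetical sketch of ensemble-level power budgeting: rather than
# provisioning each blade for its worst-case draw, a software controller
# enforces one shared budget across the whole enclosure. Blades rarely
# peak simultaneously, so the shared budget can sit well below the sum
# of per-blade worst cases. All names and numbers are illustrative.

ENCLOSURE_BUDGET_W = 1600  # assumed shared budget for the enclosure, in watts

def apply_ensemble_budget(demands_w, budget_w=ENCLOSURE_BUDGET_W):
    """Return per-blade power caps that together fit the enclosure budget.

    demands_w: each blade's current power demand in watts.
    """
    total = sum(demands_w)
    if total <= budget_w:
        return list(demands_w)  # within budget: no throttling needed
    # Over budget: scale every blade's cap in proportion to its demand.
    scale = budget_w / total
    return [d * scale for d in demands_w]

caps = apply_ensemble_budget([500, 450, 400, 600], budget_w=1600)
# caps sum to the 1600 W budget; each blade is throttled proportionally
```

Proportional scaling is just one possible policy; a real controller could instead prioritize latency-sensitive blades or throttle idle ones first.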
In addition, the project has also developed several new approaches for power measurement and monitoring, including JouleSort, energy scale-down, energy-based statistical profiling, location-aware knowledge planes, as well as proxy-based environmental modeling.
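The temperature-aware scheduling mentioned earlier can be sketched as a simple greedy heuristic: place the most power-hungry (hence hottest) jobs at the coolest spots in the data centre so less energy is spent on cooling. The function, the one-job-per-server simplification, and the sample numbers are assumptions for illustration.

```python
# Illustrative sketch of temperature-aware workload placement.
# Greedy heuristic: hottest job -> coolest server location.
# All names and figures are hypothetical, not HP's actual scheduler.

def temperature_aware_placement(job_watts, inlet_temps_c):
    """Map each job index to a server index (one job per server).

    job_watts:     power draw of each job, in watts.
    inlet_temps_c: inlet air temperature at each candidate server, in Celsius.
    """
    # Jobs sorted from most to least power-hungry.
    jobs = sorted(range(len(job_watts)), key=lambda j: -job_watts[j])
    # Server locations sorted from coolest to warmest.
    spots = sorted(range(len(inlet_temps_c)), key=lambda s: inlet_temps_c[s])
    return {job: spot for job, spot in zip(jobs, spots)}

placement = temperature_aware_placement([300, 120, 220], [24.0, 18.5, 21.0])
# the 300 W job lands on the 18.5 °C server; the 120 W job on the 24.0 °C one
```

A production scheduler would of course also weigh performance, data locality, and the feedback between placement and the thermal map itself.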
Even though our work, when adopted, will have dramatic implications for the energy consumption of computing devices – from handhelds to data centres – I believe we have only scratched the surface. We continue to work on several cutting-edge power management technologies that will help reduce the energy consumption of future computing devices.
Have you started using your findings commercially? What kind of feedback have you received?
Many of these approaches are being considered for benchmarks in academia and industry. Our technical innovations have resulted in over 60 publications -- including an upcoming book on power management and several prestigious IEEE and ACM publications -- as well as more than 40 patents. Recently, HP announced a new business to deliver the “Smart Data Centre” solution and a new “C-class blade” server, both with enterprise power management innovations developed in our work. It is noteworthy that PG&E, in recognition of the energy savings and the environmental implications of this work, is also offering HP customers rebates to buy these solutions.
You cover a huge area of computing, from small mobile devices to dense servers in data centres. How does this reflect your job profile at the Hewlett Packard Research Labs? Could you tell us more about it?
We are currently working on an ambitious and important project within HP with the goal of designing the next-generation compute fabric for future data centres. We are investigating some radical blade server architectures for better manageability and performance. In particular, in a paper that appeared in the prestigious International Symposium on Computer Architecture this year, we described an innovative new architecture that provides greater fault tolerance in the volume market. This is particularly important, considering that technology trends show that the number of faults in computing systems is likely to go up by orders of magnitude in the next few generations. Another recent paper showed the benefits of dedicating some cores to accelerating key operating system calls that do not require much context switching. We saw tremendous benefits in power efficiency, performance and security. Another recent paper explores how best to handle the problem of manipulating large unstructured data sets in storage for jobs such as search, finding malware or superimposing images. With collaborators from the University of Virginia, we have explored the trade-offs of using helper cores on a CPU versus a standalone accelerator.
Keeping in view that India is a global player in the IT arena, do you have any India-centric projects?
My work tries to solve fundamental problems around energy efficiency that affect each one of us. These problems do not respect boundaries such as the US versus India. For example, I am sure all of us in India would love to have longer battery lives for our cellphones just as much as anyone else! In the enterprise space, our work on power management for data centres, if anything, is even more relevant for the power and heat constraints of emerging economies like India. Similarly, the environmental implications of our work have global benefits.
I continue to be involved in numerous activities in India and am passionate about how NRIs in the US can contribute to various initiatives back home. I have founded and helped run several non-profit organizations. Back when I was doing my PhD, I was very active with Friends of Young Minds, a non-profit organization that helped ship old computers to underprivileged children in India. I am active in the IIT alumni association (www.iit.org) and have been involved in setting it up and running it, besides funding some of their projects.
I firmly believe that if alumni “give back” to their alma mater, in whatever form, the growth and impact will be much higher. I try to visit India once a year, and whenever I am there, I do my best to ‘guest-teach’ or give lectures on my work and technologies in the US. I also meet other young people to help with mentoring.