In October, Justin Rattner, chief technology officer of Intel Corp., the world’s biggest computer chip maker, explained his company’s ‘failing fast’ strategy to a group of engineers in Barcelona, Spain. Rattner explained how hundreds of millions of dollars in costs can be avoided by experimenting, and failing fast, well before a product or an idea reaches the production stage. In an interview during a recent visit to Bangalore, Rattner explains why failures are important for Intel and how research organizations can learn from failed projects and ideas.
What does failure mean to you?
Part of the problem is that people don’t celebrate failures; they always celebrate success. I think there is great value in learning from failures. When you succeed, you have all kinds of convenient explanations and everybody is patting you on the back, but when you fail you gain some very fundamental insights, and that’s where the real learning takes place.
I think research in the industrial setting got something of a bad name in the latter part of the last century. There were some pretty spectacular failures; you can choose any one you want, but probably the one that relates most to personal computing was Xerox’s Palo Alto Research Center, which for decades couldn’t exploit the computing paradigms it had itself invented. It was left to Apple and later Microsoft to make the most of personal computers. It obviously takes more than just assembling a very talented group of individuals and giving them resources to do their work.
What’s the ‘failing fast’ strategy you follow at Intel?
The impact of a failure is dramatically greater in our industry because we build factories well in advance. So a failure very late in a project is hugely expensive. That’s what’s behind the strategy of ‘failing fast’.
What have been some of the top failures?
Probably the most publicized of our failures is our graphics engine; many of us had our professional reputations at stake. The hardware was fine. We had completely underestimated the software challenge because this was a software-driven graphics architecture. When we had to put together all of the software capabilities, it just wasn’t competitive. We finally had to admit to that and, in retrospect, I wish we had done more research on the software side, because we did multi-hundred-million-dollar silicon work that essentially went nowhere. We just didn’t do the homework well.
Let me give you examples of ones that didn’t work in the beginning. They are just harder to describe. One of them is Intel’s Thunderbolt technology, which went into all Macintosh machines. That’s particularly interesting because it was actually embraced by Apple before Intel. We struggled a bit in dealing with that because Apple was telling us: here’s our schedule, here’s the cost target, here’s when we hit the market, and so on. Meanwhile, at Intel, we were not sure if we wanted to do it, and were telling them this was a product that would go on the road map for 2013-14. And Apple said no, it’s 2010. There was quite a bit of internal tension, but the management was able to see that it was important and put in all the resources, and we were able to get the silicon out in time. It was a business that went from zero to $100 million on the first day because of the number of machines Apple shipped. It was a great case study of research connecting and working with a customer.
There are many areas where we have not succeeded. Some work we did in the energy-efficiency area didn’t yield the desired results. Then there were other projects that didn’t get picked up by product organizations. We picked ourselves up, dusted ourselves off and started again.
What’s the history of learning from failures at Intel?
There’s an interesting bit from Intel’s history which I mentioned during the Barcelona talk. Intel’s founders struggled with this challenge of getting results out of the research labs and into products and factories. The decision then was not to have a research organization at all, and for nearly 15-20 years there was no research department at Intel. In fact, we created a euphemism for research called technology development; heaven forbid you call somebody a technology researcher, and they’ll hit you!
Eventually, they came up with this notion called ‘pathfinding’, where they created a dynamic organization for each new generation of technology. The pathfinding team consisted of selected researchers and selected developers who had responsibility, over roughly 18 months, for taking the best of their research and design and bringing it to the manufacturing process. There was never a pathfinding organization; there was one individual, actually an Intel senior fellow, who was responsible for all this. One of the reasons Intel enjoys the lead it has today is that this process has proved so successful over more than a decade and a half at the company.
During the early part of the last decade, Intel created a new technology organization that was focused above the transistor level. So now we were talking about microprocessors, software systems, energy-efficiency solutions, and so forth. For the first few years, this organization struggled a bit. It really struggled with how to get research results into products. Fortunately, Intel was going through a reengineering exercise, and the timing was exquisite because management didn’t have time to fix anything beyond falling revenue and profit; when business turns bad, you’ve got much bigger fish to fry. The personal computing market was changing, and Intel had to change and respond.
That’s when we started looking at the pathfinding teams. The question was how to get more of these ideas into real products that could drive Intel’s future. We embraced pathfinding and moulded it into different structures across new business units. We created different pathfinding teams, much smaller in scale, with roughly equal numbers of researchers and developers.
How did failing fast help Intel?
We went from a success rate of less than 50% of research results moving into product development to over 90% today for technologies that the business units identify as having value in their product road maps. It benefited researchers immensely, because increasingly they knew their work would actually see the light of day in a real product, say three to four years down the line. By working in the pathfinding programme, researchers became smarter about what kind of research would work in a product context, making their research more relevant and increasing the likelihood that it would reach production.
We typically have about 50 pathfinding projects at any point in time. The number is self-limiting because we also realize that if you are not careful, you begin to gravitate towards things we don’t need to do. We looked at a dozen multinationals in the $50-100 billion range and realized they spend half of every research dollar on medium- to long-term projects.
The other half is spent on truly exploratory research without any predetermined destination. Some years we are 55:45, but our three-year moving average is 50:50. Silicon photonics, which we are now bringing to market, is one project that we identified, pitched and got funded.
The 18-month cycle had its origins in Moore’s Law, which was driving new semiconductor and manufacturing technologies. Over time, the timeline became specific to the nature of the project. So, for instance, if you were looking to get a major new feature onto a microprocessor, that could be a three-year effort; a new energy-efficiency feature might be done in less than a year.