Neuromorphic computing is the future
Moore’s law, named after Gordon Moore, the co-founder of Intel Corp., posits that the number of transistors in an integrated circuit doubles approximately every two years. However, even as transistors shrink so that more of them can be packed onto a single silicon chip to make computers more powerful, Moore’s law is bound to reach its limits.
Sensing this limitation, computer scientists have been experimenting for quite some time with quantum computing, which promises to revolutionize computing. Conventional computers use bits, or the basic unit of information in computing—zeroes and ones. A quantum computer, on the other hand, deals with qubits (short for quantum bits) that can encode a one and a zero simultaneously—a property that will eventually allow them to process a lot more information than traditional computers, and at unimaginable speeds.
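The superposition property can be shown with a small numerical sketch (a textbook illustration in NumPy, not a real quantum computation): a qubit’s state is a two-component unit vector, and a Hadamard gate puts it into an equal mix of zero and one.

```python
import numpy as np

# A qubit's state is a unit vector (a, b): |a|^2 is the probability of
# measuring 0, |b|^2 the probability of measuring 1.
zero = np.array([1.0, 0.0])

# The Hadamard gate rotates the qubit into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = H @ zero
probs = np.abs(state) ** 2
print(probs)  # → [0.5 0.5]: both outcomes encoded at once
```

Until the qubit is measured, both outcomes coexist in the state vector; measurement picks one with the probabilities above.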
Quantum computing, of course, will take a few years to yield meaningful applications. Meanwhile, there are other innovative architectures, such as field-programmable gate array (FPGA)-based designs and graphics processing unit (GPU)-accelerated systems that rely on deep-learning neural networks to make predictions.
However, it is neuromorphic computing that is getting a lot of press, with companies like Intel, International Business Machines Corp. (IBM), Applied Brain Research, Inc., Brain Corp., Hewlett Packard Enterprise, Samsung Electronics Co. Ltd and Qualcomm Inc. putting their might behind it.
The idea behind neuromorphic computing, as the word “neuro” suggests, is to develop computer circuits that behave like the human brain. There have been notable advances in this field.
A team of University of Michigan scientists, for instance, announced on 22 May that it had developed a “memristor” computer circuit prototype that imitates aspects of how mammalian brains process information.
According to Wei Lu, professor of electrical engineering and computer science at the University of Michigan, and lead author of a paper on the work published in the current issue of Nature Nanotechnology, memristors are electrical resistors with memory: advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, just as the brain does, unlike conventional computers, where the logic and memory functions are located in different parts of the circuit. This makes memristors more efficient than traditional computer systems.
A mammal’s brain can almost instantaneously recognize different arrangements of shapes. Humans do this using just a few neurons that become active, says Prof. Lu. Neuroscientists and computer scientists alike call the process “sparse coding”.
Memristors use this very technique to detect patterns very efficiently. The University of Michigan researchers trained their system to learn a “dictionary” of images. Trained on a set of gray-scale image patterns, their memristor network was able to reconstruct images of famous paintings and photographs and other test patterns.
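The sparse-coding idea can be sketched in a few lines of Python. This toy example is not the Michigan team’s memristor network: it uses a small orthonormal dictionary (real dictionaries are overcomplete and learned from image patches), so a greedy “matching pursuit” recovers a signal built from just two atoms, mirroring the “few active neurons” the researchers describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "dictionary" of 16 orthonormal atoms (QR factorization of a
# random matrix gives orthonormal columns).
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))

# A signal that is sparse in the dictionary: only atoms 3 and 9 are active.
x = 2.0 * D[:, 3] + 1.5 * D[:, 9]

# Matching pursuit: at each step, activate the atom most correlated with
# what is still unexplained -- the "few active neurons" of sparse coding.
code = np.zeros(16)
residual = x.copy()
for _ in range(2):
    k = int(np.argmax(np.abs(D.T @ residual)))
    coef = float(D[:, k] @ residual)
    code[k] += coef
    residual -= coef * D[:, k]

print(np.flatnonzero(np.abs(code) > 1e-9))            # → [3 9]
print(round(float(np.linalg.norm(x - D @ code)), 6))  # → 0.0
```

Just two of the 16 coefficients are active, yet they reconstruct the signal exactly; that economy is what makes sparse coding efficient for pattern detection.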
Moreover, since memristors can perform many operations simultaneously without having to move data around, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning, the researchers noted. If their system can be scaled up, the University of Michigan researchers expect to be able to process and analyse video in real time in a compact system that can be integrated directly with sensors or cameras.
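How a memristor array avoids moving data around can be sketched schematically. In a crossbar, each cell’s conductance stores a weight; applying voltages to the rows lets Ohm’s and Kirchhoff’s laws compute a whole matrix-vector product in one analogue step. The NumPy model below is an idealized illustration with made-up values, not the Michigan team’s design.

```python
import numpy as np

# Conductance of the memristor at row i, column j (the stored weights).
G = np.array([[0.1, 0.5],
              [0.3, 0.2],
              [0.4, 0.6]])

# Input voltages applied to the three rows.
v = np.array([1.0, 0.5, 0.2])

# Ohm's law gives each cell's current; Kirchhoff's current law sums the
# currents down each column. Every cell "computes" at once, so the full
# matrix-vector product needs no shuttling between memory and logic.
I = v @ G
print(I)  # column currents = G^T v
```

In a digital machine the same product takes one multiply and one memory fetch per weight; in the crossbar the physics does all of them in parallel, which is why the researchers expect such arrays to handle many signals simultaneously.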
Similarly, Imec, a Belgium-based research centre for nano-electronics and digital technologies, demonstrated the world’s first self-learning neuromorphic chip at the two-day Imec Technology Forum (ITF2017), which began on 16 May. The brain-inspired chip, based on OxRAM technology, is capable of self-learning and has been shown to compose music.
Another effort comes from Europe’s Human Brain Project, whose neuromorphic computing platform comprises two systems, BrainScaleS and SpiNNaker (short for Spiking Neural Network Architecture), each built on custom hardware and a different way of modelling neuron activity. The BrainScaleS system is based on physical (analogue or mixed-signal) emulations of neurons, while the SpiNNaker system is based on a numerical model that runs in real time on custom digital multi-core chips.
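The kind of numerical neuron model such digital systems step in real time can be illustrated with the textbook leaky integrate-and-fire neuron, sketched below; this is a generic teaching example, not SpiNNaker’s actual code, and all parameter values are invented for the demonstration.

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates the input current, and emits a spike on crossing a
# threshold, after which it resets.
dt, tau = 1.0, 10.0            # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0   # threshold and reset potential (arbitrary units)
i_in = 0.15                    # constant input current

v, spikes = 0.0, []
for t in range(100):
    v += (dt / tau) * (-v) + dt * i_in  # leak plus integration each step
    if v >= v_thresh:
        spikes.append(t)                # record the spike time...
        v = v_reset                     # ...and reset the membrane

print(spikes)  # the neuron fires at a regular rate set by the input
```

A constant input yields perfectly regular spiking; a system like SpiNNaker steps millions of such update rules in parallel, one per simulated neuron.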
The neuromorphic computing market is poised to grow rapidly over the next decade to reach approximately $1.78 billion (around Rs11,570 crore) by 2025, according to a 10 April report by US-based Research and Markets. The reason is simple: the growing interest of companies in artificial intelligence, or AI, which can always use more computing power.
According to market intelligence firm Transparency Market Research, the use of neuromorphic computing in self-driving and smart vehicles is expected to propel growth in the automotive sector, even as neuromorphic chips in satellites for surveillance and aerial imagery are in high demand in the defence sector.
Cutting Edge is a monthly column that explores the melding of science and technology.