Deep learning, an advanced machine-learning technique, uses layered (hence “deep”) neural networks (neural nets) that are loosely modelled on the human brain. Machine learning itself is a subset of artificial intelligence (AI), and is broadly about teaching a computer to spot patterns and draw connections from mountains of data without being explicitly programmed for the specific task; a recommendation engine is a good example.
Neural nets, in turn, enable image recognition, speech recognition, self-driving cars and smart-home automation devices, among other things.
However, the success of deep learning depends primarily on the availability of huge data sets on which these neural nets can be trained, coupled with a lot of computing power, memory and energy. To address this, says a 14 November press release, researchers at the University of Waterloo, Canada, took a cue from nature to make the process more efficient, making deep-learning software compact enough to fit on mobile computer chips for use in everything from smartphones to industrial robots.
In work presented at the International Conference on Computer Vision in Venice in October, the researchers achieved a 200-fold reduction in the size of deep-learning software used for a particular object-recognition task. When put on a chip and embedded in a smartphone, such compact AI could run its speech-activated virtual assistant and other intelligent features, greatly reducing data usage and operating even without an internet connection, the researchers believe.
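The release does not detail how the 200-fold reduction is achieved, but a common way to shrink a trained network is magnitude pruning: discard all but the largest weights, so only a small fraction of parameters needs to be stored. The sketch below is purely illustrative (the `prune_weights` function and the 0.5% retention figure are assumptions for the example, not the Waterloo method); keeping 1/200th of the weights corresponds to the 200-fold size reduction mentioned above.

```python
import numpy as np

def prune_weights(weights, keep_fraction=0.005):
    """Zero out all but the largest-magnitude weights.

    Keeping 0.5% of the weights corresponds to a ~200-fold
    reduction in the parameters that must be stored.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_fraction))
    threshold = np.partition(flat, -k)[-k]      # k-th largest magnitude
    mask = np.abs(weights) >= threshold          # True where a weight survives
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((1000, 1000))            # a toy 1-million-weight layer
pruned, mask = prune_weights(w)
print(mask.sum(), "of", w.size, "weights retained")
```

In practice the surviving weights are stored in a sparse format and the network is usually fine-tuned afterwards to recover any lost accuracy.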
Other potential applications range from use in low-cost drones and smart grids to surveillance cameras and manufacturing plants, “where there are significant issues around streaming sensitive or proprietary data to the cloud”, according to the researchers.
“We feel this (the deep-learning software they developed) has enormous potential,” Alexander Wong, a systems design engineering professor at Waterloo and co-creator of the technology, says in the release. “This could be an enabler in many fields where people are struggling to get deep-learning AI in an operational form.” The use of stand-alone deep-learning AI could also lead to much lower data-processing and transmission costs, greater privacy, and use in areas where existing technology is impractical due to expense or other factors, the researchers say.
“These networks evolve themselves through generations and make themselves smaller to be able to survive in these environments,” says Mohammad Javad Shafiee, a systems design engineering research professor at Waterloo and the technology’s co-creator. Wong and Shafiee have co-founded a company in Waterloo called DarwinAI to commercialize their AI software. They first attempted their approach to evolving this deep-learning AI about three years ago. “If it works, we keep going and push harder,” says Shafiee in the release.
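Shafiee’s description suggests an evolutionary loop: each generation, an offspring network inherits a random subset of its parent’s connections and survives only if it still performs well enough. The toy sketch below is a schematic of that idea, not the DarwinAI algorithm; the `evolve_architecture` function, the connection counts and the stand-in fitness score are all assumptions made for illustration.

```python
import random

def evolve_architecture(n_connections=10_000, generations=10,
                        drop_rate=0.3, fitness_floor=0.9, seed=42):
    """Toy evolutionary shrinking: each generation an offspring
    inherits a random subset of the parent's connections and
    replaces the parent only if a fitness score stays above a floor."""
    rng = random.Random(seed)
    parent = set(range(n_connections))
    for _ in range(generations):
        # Offspring keeps each connection with probability 1 - drop_rate.
        child = {c for c in parent if rng.random() > drop_rate}
        fitness = 1.0  # stand-in: a real system would retrain and evaluate
        if fitness >= fitness_floor:
            parent = child  # the smaller network survives this generation
    return parent

survivors = evolve_architecture()
print(len(survivors), "connections remain of the original 10,000")
```

Dropping roughly 30% of the connections per generation for ten generations leaves only a few percent of the original network, which is the kind of compounding shrinkage the quote describes.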
In February 2016, Mitsubishi Electric Corp. had announced that it had developed a small-memory, compact AI that could “be easily implemented on vehicle equipment, industrial robots and other machines by reducing the computational costs for inference—a process including identification, recognition and prediction to anticipate unknown facts based on known facts”.
This, the company had then said in a press release, would enable a low-cost AI system that could perform high-level, high-speed inference in a highly secured environment. Mitsubishi Electric Corp. estimates that the computational costs and memory requirements for image recognition can be reduced by 90%. Since May 2016, it has been conducting expressway-based field testing of its xAUTO vehicle and related autonomous-driving technologies for self-sensing and network-based driving.
The compactness means the AI can perform high-level inference even on embedded systems, basically building machines that have computers and intelligence inside them. On a vehicle system, for example, it could provide features that detect when a driver is distracted. On an industrial machine, it could analyse the actions of factory workers.
Similarly, computer systems developer and chip manufacturer Nvidia Corp. has developed an AI computer designed to support fully autonomous vehicles, and plans to roll out the device in the second half of 2018. Called Nvidia Drive PX Pegasus, the contraption is about the size of a licence plate and consumes less power than the current AI chips in autonomous vehicles.
The market size of AI is expected to touch $200 billion (nearly Rs13 trillion) in 2020, according to a study by Ernst and Young Institute Co. Ltd. Compact AI software, which provides more security and speed at a lower cost, could help meet the demand.
Cutting Edge is a monthly column that explores the melding of science and technology.