Asquared IoT’s sound analytics edge computing device Equilips (seen at top right corner) deployed at a welding station.

Smart factories keep an ear to the ground

  • Asquared takes IoT into the untapped market of analyzing the sounds of machines to predict impending breakdowns
  • The neural network or AI brain is periodically trained with fresh data and learning captured on the edge devices

When you take your car to a mechanic, the first thing he might do is rev up the engine. Its sound can tell him straight away if something’s off.

It’s the same in factories, where experienced technicians can make out when a machine doesn’t sound right. But now, instead of relying on the human ear, the emerging field of industrial sound analytics uses microphones and artificial intelligence (AI) to replicate what an experienced mechanic does to spot trouble.

A startup based in Pune and Bengaluru, Asquared IoT, has developed machine learning models to detect anomalies that can predict failures in machines and processes. Japanese multinational company Konica Minolta and Pune auto parts manufacturer Sharada Industries are among early adopters of its edge computing device Equilips, whose embedded algorithms can analyze sounds on site without requiring connectivity to cloud-based processing.


“We do predictive maintenance," says Anand Deshpande, co-founder and chief executive officer of Asquared. “In many instances, when a machine is about to fail, it makes a characteristic sound. For example, electric motors and generator pumps make a whining noise."

Ambient noise

The biggest challenge in sound analytics is the ambient noise of a factory setting. There are techniques to cancel or reduce this noise, but they can also degrade the underlying signal from the machine being monitored.
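A common technique of this kind is spectral subtraction. The sketch below (a generic illustration, not Asquared’s method) uses NumPy with a synthetic machine tone, and shows why the approach cuts both ways: because the noise profile is only an estimate, subtracting it also removes some of the machine’s own signal.

```python
import numpy as np

def spectral_subtract(noisy, noise_profile):
    """Remove an estimated noise magnitude spectrum from a noisy recording.

    `noise_profile` is a recording of the ambient noise alone, e.g. captured
    while the monitored machine is switched off.
    """
    spec = np.fft.rfft(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_profile))
    # Subtract the noise magnitude per frequency bin, flooring at zero,
    # and keep the phase of the noisy signal.
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=noisy.size)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
machine = np.sin(2 * np.pi * 440 * t)            # stand-in for the machine sound
noisy = machine + 0.3 * rng.normal(size=t.size)  # with factory ambience
profile = 0.3 * rng.normal(size=t.size)          # separate noise-only recording

denoised = spectral_subtract(noisy, profile)
```

The subtraction reduces the overall error, but the floored bins and the retained noisy phase distort the machine signal itself, which is the trade-off the article alludes to.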

Another approach is to start with a recording of a clean signal from the machine, and then keep adding noises and training the neural network to distinguish the clean signal from the other sounds.
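That augmentation step can be sketched in a few lines of NumPy. This is an illustration under stated assumptions (a clean reference recording and a separate noise recording, mixed at chosen signal-to-noise ratios), not Asquared’s actual pipeline:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so it sits `snr_db` decibels below `clean`, then mix."""
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # Choose gain g so that clean_power / (g^2 * noise_power) = 10^(snr_db / 10)
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)   # stand-in for a clean machine recording
noise = rng.normal(size=t.size)       # stand-in for factory ambience

# One training example per noise level, from lightly to heavily corrupted
training_set = [mix_at_snr(clean, noise, snr) for snr in (20, 10, 0, -5)]
```

Training on progressively noisier versions of the same clean signal is what teaches the network to pick the machine’s sound out of the din.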

“We are using both approaches, but each one has its own challenges," says Aniruddha Pant, co-founder and chief data scientist at Asquared. “It will take a couple more years of research by everyone in this field for it to become as mature or widely deployed as a vision-based system."

Asquared has also been doing video analytics on the edge for a beverage company’s bottling plant in Thailand. “We’re looking to combine both vision and sound analytics because there are cases where just one or the other may not give adequate results," says Kanchan Pant, co-founder and business development head of Asquared.

The startup expects to receive US patents on deploying its sound analytics models on edge devices as well as for combining sound and video analytics. Its channel partner is Japanese systems integration multinational company NTT Data. It has also partnered with Intel to develop its solutions using the American giant’s edge computing stack and beta tools.

Building sound analytics models is an ongoing process, as the neural network or AI brain is periodically trained with fresh data and learning captured on the edge devices. An idiosyncrasy in this business is that you have to wait for a machine to fail in order to record what that sounds like in a specific setting.

“You can’t have an out-of-the-box algorithm that works everywhere because each factory has different sounds," explains Deshpande. “It’s easy to record data of how a machine sounds when it’s working well in the factory, but failure data is hard to get."

For example, one of Asquared’s clients is a large oil and gas company that needed to predict the failure of ball bearings in chemical reactors. These bearings typically run for about nine months before failing, so imagine the wait involved to record the sound of a faulty bearing.

Supervised learning in AI requires labelled training data, which in this case would have to include both normal and anomalous sounds. In AI parlance, that’s a two-class or multiclass model. But with most of its clients, Asquared could not get access to recordings of anomalies or failures. So it changed tack and built an unsupervised, one-class model, trained only on the normal sound of the machine, which flags deviations from that baseline. It’s harder to get a robust model this way, because it can easily become oversensitive or under-sensitive: that is, it will predict failures without sufficient basis, or fail to spot real trouble.
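The one-class idea can be illustrated with a deliberately simple sketch. Asquared’s models are neural networks; this stand-in instead fits a per-band Gaussian baseline to spectral features of normal sound, then flags clips whose features deviate too far from it:

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Crude spectral feature: energy in n_bands equal-width frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

class OneClassGaussian:
    """Train on normal sound only; flag clips whose features deviate too far."""
    def fit(self, features, z_threshold=4.0):
        self.mean = features.mean(axis=0)
        self.std = features.std(axis=0) + 1e-12
        self.z_threshold = z_threshold
        return self

    def is_anomaly(self, feature):
        z = np.abs((feature - self.mean) / self.std)
        return bool(z.max() > self.z_threshold)

# Train only on "normal" machine sound: a 440 Hz tone plus mild noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8000, endpoint=False)
normal = [np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=t.size)
          for _ in range(50)]
model = OneClassGaussian().fit(np.array([band_energies(s) for s in normal]))

# A "whining" machine shifts energy to a higher band and should be flagged
faulty = np.sin(2 * np.pi * 2500 * t) + 0.1 * rng.normal(size=t.size)
```

The `z_threshold` parameter is exactly where the over/under-sensitivity tension shows up: too low and healthy clips get flagged, too high and real faults slip through.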

Trial and error

“We really had to play around with several parameters, but in 2020 we mastered the one-class model at a relatively low cost," says Deshpande. This means the model is robust enough not to give an unacceptable rate of false leads despite being trained only on the normal sound of the machine. The sound analytics devices can thus be deployed even if a client finds it hard to provide recordings of faulty machines.
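One standard way to keep the false-lead rate acceptable in such a setup is to calibrate the alert threshold on held-out recordings of the healthy machine. The sketch below is hypothetical (the score distribution and quantile are illustrative, not Asquared’s values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical anomaly scores produced by a one-class model on held-out
# clips of a *healthy* machine (higher score = more anomalous).
normal_scores = rng.gamma(shape=2.0, scale=1.0, size=1000)

# Set the alert threshold at the 99th percentile of healthy scores, so that
# roughly 1% of healthy clips will raise a (false) alert.
threshold = np.quantile(normal_scores, 0.99)

false_alarm_rate = float(np.mean(normal_scores > threshold))
```

Raising the quantile trades fewer false alarms for a greater chance of missing a real fault, which is the balance Deshpande describes mastering.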

Deshpande has a PhD in mechanical engineering from the University of Colorado and worked on AI modelling and reliability prediction at TCS, Motorola and Intel Labs before launching Asquared in 2017. Pant, who has a PhD in control systems from the University of California, Berkeley, was already an entrepreneur by then: he had founded Algo Analytics in 2008, initially to provide machine learning-based investment advisory services and later to apply AI across multiple domains. Deshpande and Pant, friends since their college years, decided to put their heads together to tackle an area of industrial IoT that no other Indian company had ventured into. Pant’s wife Kanchan is the proprietor of Sharada Industries, which became the first test bed for Equilips; she also came on board as a co-founder of Asquared.

Smart factories rely on sensors capturing data that can be analyzed. One of the challenges is wiring up legacy machines, which are still functional and costly to replace. This is where sound analytics has an advantage, because microphones can be placed three feet away, unlike sensors that are embedded in machines to capture vibration or other data.

A metal trading company in Japan, for example, deployed Equilips for its sawing machines, which lacked the sensors a modern IoT system would require. Sound analytics turned the plant into a smart factory, with automatic logging of operations replacing the earlier manual logbooks.

“Sound is one of the most untapped areas in manufacturing," says Deshpande.

The bootstrapped startup, which is cash-flow positive, has been exploring new areas where its sound analytics can be used. “Our USP is that we’re introducing analytics with data that had never been captured earlier," says Kanchan Pant. “For example, we’re doing a PoC (proof of concept) for a cement company by analyzing the sound of air leakage, which is also one of the major contributors to the power bills of malls and other places using central air-conditioning systems."

A booster to this nascent field will come from a collaborative effort to build a repository of industrial sounds, just as ImageNet—a large, hierarchical, annotated database of images—transformed the application of AI algorithms to visual data.

Sumit Chakraberty is a Consulting Editor with Mint.


