Data abundance is not a must for artificial intelligence

A hundred and thirty-eight years ago, almost to the day, Thomas Alva Edison switched on the coal-fired power plant at his Pearl Street station in lower Manhattan and provided commercial electricity for the first time in history to 59 homes within a square mile of the plant. Many see this event as the birth of the modern era, as it marks the origin of our dependence on electricity, which is now integral to almost everything we take for granted today.

But the kind of power generation that Edison pioneered on that September day in 1882 put us on a trajectory that has had unfortunate outcomes. He kicked into overdrive our reliance on fossil fuels for energy, allowing it to permeate all aspects of our lives—from the electricity we need to power our homes, offices and factories, to the petroleum we need to run our cars, ships and planes. This forced us down a path of high energy consumption that has resulted in the rapid depletion of naturally occurring carbon-based fuel sources and inflicted near-irreversible damage on our planet.

Edison’s choice of coal as the fuel source for his power plant should not be taken as indicative of his support for fossil fuels as a source of energy. At least in the context of transportation, he believed that automobiles should run on electricity—not petrol—and even built a vehicle powered by alkaline batteries of his own invention.

But despite the fact that the battery technology he had developed went on to power electric trucks, railroad signals and even a submarine, it took so long for him to perfect the design of his electric car that it went on sale a full year after his good friend Henry Ford introduced the world to his low-priced, high-mileage Model T car. As a result, it is the internal combustion engine that powers the world today, and, instead of using renewable energy for our needs, we are stuck with our gas-guzzling lifestyles.

I cannot help but think how different things might have been had Edison beaten Ford to the market.

We find ourselves at a similar crossroads with artificial intelligence (AI) today. The dominant techniques that have delivered advances as miraculous as facial recognition, voice recognition and natural language processing are voracious in their use of data. They depend on massive training datasets that comprise millions of individual elements of structured data to identify patterns not immediately evident to human senses.

As a result, leadership in artificial intelligence today is widely associated with access to large volumes of structured information of the kind that is presently under the control of only the largest technology companies in the US and China. This has resulted in an arms race for the control of data, with countries around the world asserting their authority over the data of their citizens, regardless of where or under whose control it might be. The Court of Justice of the European Union recently tightened the grip of Europe’s data protection regulations on data transferred off its shores, while India has insisted that all sensitive personal data be localized and has established a committee to look into how the value of non-personal data might be harnessed for the benefit of the nation.

But the fact that these dominant machine learning models are incapable of accuracy unless they have sampled large volumes of data is a sign of their basic inefficiency. Even a child can identify objects seen just once before. We do not need to have trawled through vast libraries of bird pictures to know one when it flies by. What’s more, because our minds are capable of synthesizing and learning new object classes from our existing knowledge of different, previously learned classes, we can identify an object as a bird even if we have never seen that particular avian species before.

What we need today are machine learning techniques that can achieve a higher level of sample efficiency and transferability than was previously possible. If we can do that, we will be able to shake ourselves free of our dependence on the data-guzzling models of artificial intelligence that we are currently wedded to.

Few-shot learning describes an exciting new family of techniques that allow models to be trained on small datasets. At present, it is largely being applied in the sub-fields of image classification, retrieval and segmentation, but it is likely to find broader application in areas such as natural language processing and drug discovery. Though the field is still in its infancy, it is clear that this is the direction in which artificial intelligence is headed.
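To make the idea concrete: one common family of few-shot methods classifies a new example by comparing it to a "prototype" (the average) of just a handful of labelled examples per class, rather than learning from millions of samples. The sketch below is a deliberately minimal, hypothetical illustration of that nearest-prototype idea on toy two-dimensional feature vectors; real systems would use learned embeddings, and the labels and numbers here are invented for demonstration.

```python
import math

def centroid(vectors):
    # Element-wise mean of the few support examples for one class: the "prototype".
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def few_shot_classify(support, query):
    # support maps each class label to a small list of feature vectors --
    # two per class is enough here, where a data-hungry model needs millions.
    prototypes = {label: centroid(vecs) for label, vecs in support.items()}
    # Assign the query to the class whose prototype lies nearest to it.
    return min(prototypes, key=lambda label: euclidean(prototypes[label], query))

# Toy 2-D "embeddings" standing in for learned image features.
support = {
    "bird": [[1.0, 5.0], [1.2, 4.8]],
    "plane": [[6.0, 1.0], [5.8, 1.2]],
}

print(few_shot_classify(support, [1.1, 5.1]))  # → bird
```

With only two examples per class, the classifier still places a nearby query correctly—which is the essence of the sample efficiency discussed above, even if production few-shot systems are far more sophisticated.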

That being the case, is there an argument to be made for us to completely rethink our current approach to data? If it is no longer necessary for us to amass vast stores of structured data to assume leadership in artificial intelligence, would we not be better off focusing our energies on encouraging the use of these new and more efficient methods of machine learning?

Edison knew that electric transportation was the way to go and gave us a prototype electric vehicle before the internal combustion engine had cemented its dominance. Had we chosen the electric option for cars, we could perhaps have avoided the gas-guzzling century that followed.

We have to make a similar choice today: between the ravenous, data-guzzling models that are behind the AI models we know, and the few-shot learning models that are just about coming into their own. This time, I hope we will choose wisely.

Rahul Matthan is a partner at Trilegal and also has a podcast by the name Ex Machina. His Twitter handle is @matthan
