Wake-up call for AI makers: They should take the iPad ad’s backlash seriously
Summary
- The depth of feeling stirred up by the badly conceived advertisement reveals a wider anxiety that AI players dismiss at their own risk. They could address the problem by positioning their products as human-friendly.
Apple has made some of the world’s most iconic advertisements. When the brand’s Macintosh computer was launched about four decades ago with its 1984 commercial, whose protagonist was shown hurling a sledgehammer to smash a giant screen depicting ‘Big Brother,’ the world applauded.
But when Apple Inc recently released its Crush! commercial for the iPad, which showed several things integral to human lives being crushed by a huge industrial compressor, the world shuddered. The predominant view expressed on social media was that ‘Big Brother has returned.’
While launching the latest iPad commercial, Apple Inc’s chief executive Tim Cook tweeted, “Just imagine all the things it’ll be used to create." But instead of putting across the intended message of the device enabling creativity, the commercial ended up conveying a sense of the destruction technology could wreak on all that human hands can create. While watching that commercial, the first thought that went through my head was this: How could Apple have thought it fit to release this advertisement?
Months of strategic discussions happen before an idea for a commercial is finalized. Pre-testing the advertising idea is often done before the final story board goes into production. It is very difficult to believe that at no stage of this long production process did Apple’s marketing department receive any negative feedback on the storyline of this commercial. While the exact reason why Apple released this advertisement is hard to understand, one thing is clear. The public response to it should serve as a signal for the artificial intelligence (AI) industry.
For all the good that AI machines bring into this world, the fear that these modern contraptions will take over the world, and that all that is human could one day be subservient to them, is an anxiety that has expanded its presence in the world’s non-conscious. The reaction to the iPad commercial was immediate and widespread, just as we would expect if the world were to suddenly realize that all that’s human will soon be replaced by machines.
In these times of social media, such intense fears spread like wildfire across the world. If such non-conscious fears of what AI will do to human roles persist, they could lead to even stronger pushback against the global AI industry.
This resistance is unlikely to take the form of a few scattered protests, displays of anger in which a few technology products are destroyed in public places, as the Luddites smashed textile machinery at the start of the Industrial Revolution. The pushback could be significant enough for the AI industry to stay on guard.
Protests will probably aim for the Achilles heel of the industry. Questions have arisen over the privacy of the vast data used by the AI industry to train its models, as well as over the ownership of that intellectual property. The huge carbon footprint of AI training is another target of critics, just as job losses caused by AI have been flagged as a major problem.
These are issues on which the AI industry is not yet in a strong position to defend itself. Rather, it is on the back foot on all of them. A pushback on any of these fronts could slow down an industry that has shown tremendous growth momentum in recent years. This is why AI players across the world should be careful about the language they use and the positions they take.
Let’s take the example of an AI product developed to read X-ray images and detect signs of tuberculosis. How should such an innovative AI product be positioned? It could be positioned as a tool that can read X-ray images with far greater accuracy than a human radiologist. One could easily work out the productivity gains a hospital would make by automating this task. But this positioning stance has a weakness: the AI product would be portrayed as heroic, while the human professional would get cast as inadequate.
The same AI product could also be positioned as one that helps detect tuberculosis among people who have no access to a radiologist. This positioning is all about meeting an unmet human need, an end-game in which humans emerge as winners. There is little doubt over which of these two go-to-market approaches will win public approval and which is likely to run into resistance.
AI products that try to complement human roles instead of replacing them will be welcomed. This calls for a deeper understanding of need-gaps in existing products and developing innovative products to fulfil those unmet needs.
It is easy to develop a machine that fits into an air-conditioned room of a city hospital, but a machine that can help read X-rays of people in remote villages located at high altitudes in the Himalayan mountains, or in rural settlements of the Thar desert, speaks of innovation driven by human needs.
Recent turns in history remind us that humans have not let inventions seen as detrimental to human existence flourish. Take the case of nuclear energy. It offered humanity the benefit of power generation free of fossil-fuel emissions. But when people realized its larger harmful effects and risks, public rejection and legislation followed, keeping the nuclear-power industry under stiff restrictions.
The iPad advertisement’s backlash should remind the AI industry to get its language right. The pushback is real, and if AI is seen as a threat to humanity, it will only grow.