Regulators are focusing on real AI risks over theoretical ones. Good

(Illustration: Michael Haddad)

Summary

  • Rules on safety may one day be needed. But not yet

“I’m sorry Dave, I’m afraid I can’t do that.” HAL 9000, the murderous computer in “2001: A Space Odyssey”, is one of many examples in science fiction of an artificial intelligence (AI) that outwits its human creators with deadly consequences. Recent progress in AI, notably the release of ChatGPT, has pushed the question of “existential risk” up the international agenda. In March 2023 a host of tech luminaries, including Elon Musk, called for a pause of at least six months in the development of AI over safety concerns. At an AI-safety summit in Britain last autumn, politicians and boffins discussed how best to regulate this potentially dangerous technology.

Fast forward to today, though, and the mood has changed. Fears that the technology was moving too quickly have been replaced by worries that AI may be less widely useful, in its current form, than expected—and that tech firms may have overhyped it. At the same time, the process of drawing up rules has led policymakers to recognise the need to grapple with existing problems associated with AI, such as bias, discrimination and violation of intellectual-property rights. As the final chapter in our schools briefs on AI explains, the focus of regulation has shifted from vague, hypothetical risks to specific and immediate ones. This is a good thing.

AI-based systems that assess people for loans or mortgages and allocate benefits have been found to display racial bias, for instance. AI recruitment systems that sift résumés appear to favour men. Facial-recognition systems used by law-enforcement agencies are more likely to misidentify people of colour. AI tools can be used to create “deepfake” videos, including pornographic ones, to harass people or misrepresent the views of politicians. Artists, musicians and news organisations say their work has been used, without permission, to train AI models. And there is uncertainty over the legality of using personal data for training purposes without explicit consent.

The result has been a flurry of new laws. The use of live facial-recognition systems by law-enforcement agencies will be banned under the European Union’s AI Act, for example, along with the use of AI for predictive policing, emotion recognition and subliminal advertising. Many countries have introduced rules requiring AI-generated videos to be labelled. South Korea has banned deepfake videos of politicians in the 90 days before an election; Singapore may follow suit.

In some cases existing rules will need to be clarified. Both Apple and Meta have said that they will not release some of their AI products in the EU because of ambiguity in rules on the use of personal data. (In an online essay for The Economist, Mark Zuckerberg, the chief executive of Meta, and Daniel Ek, the boss of Spotify, argue that this uncertainty means that European consumers are being denied access to the latest technology.) And some things—such as whether the use of copyrighted material for training purposes is permitted under “fair use” rules—may be decided in the courts.

Some of these efforts to deal with existing problems with AI will work better than others. But they reflect the way that legislators are choosing to focus on the real-life risks associated with existing AI systems. That is not to say that safety risks should be ignored; in time, specific safety regulations may be needed. But the nature and extent of future existential risk is difficult to quantify, which means it is hard to legislate against it now. To see that, look no further than SB 1047, a controversial bill working its way through California’s state legislature.

Advocates say the bill would reduce the chance of a rogue AI causing a catastrophe—defined as “mass casualties”, or more than $500m-worth of damage—through the use of chemical, biological, radiological or nuclear weapons, or cyberattacks on critical infrastructure. It would require creators of large AI models to comply with safety protocols and build in a “kill switch”. Critics say its framing owes more to science fiction than reality, and its vague wording would hamstring companies and stifle academic freedom. Andrew Ng, an AI researcher, has warned that it would “paralyse” researchers, because they would not be sure how to avoid breaking the law.

After furious lobbying from its opponents, some aspects of the bill were watered down earlier this month. Bits of it do make sense, such as protections for whistleblowers at AI companies. But mostly it is founded on a quasi-religious belief that AI poses the risk of large-scale catastrophic harm—even though making nuclear or biological weapons requires access to tools and materials that are tightly controlled. If the bill reaches the desk of California’s governor, Gavin Newsom, he should veto it. As things stand, it is hard to see how a large AI model could cause death or physical destruction. But there are many ways in which AI systems already can and do cause non-physical forms of harm—so legislators are, for now, right to focus on those.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
