America innovates, Europe regulates. Just as the world is starting to come to grips with OpenAI, whose boss Sam Altman has both leapfrogged the competition and pleaded for global rules, the EU has responded with the Artificial Intelligence Act, its own bid for AI superpower status by being the first to set minimum standards. A draft was approved by the European Parliament. Yet we’re a long way from the deceptively simple world of Isaac Asimov’s robot stories, which saw sentient machines deliver the benefits of powerful “positronic brains” with just three rules: don’t harm humans, obey humans and defend your existence. AI is too important not to regulate thoroughly, but the EU must work to reduce the Act’s complexity while promoting innovation.
The AI Act has some good ideas on transparency and trust: Chatbots must declare whether they’re trained on copyrighted material, deep-fakes must be labelled as such, and a raft of new obligations for generative AI-type models will require a big effort to catalogue data-sets and take responsibility for how they’re used.
Lifting the lid on opaque machines that process huge swathes of human output is a good idea. As Dragos Tudorache, co-rapporteur of the law, told me, the purpose is to promote “trust and confidence” in a technology that has attracted huge amounts of investment and excitement, yet also produced dark failures. Self-regulation isn’t an option—neither is “running into the woods” and doing nothing out of fear that AI could wipe out humanity one day.
The Act is very complex, however, and runs the paradoxical risk of setting the bar too high to promote innovation but not high enough to avoid shock outcomes. The main approach is to categorize AI applications into buckets of risk, from minimal (spam filters, video games) to high (workplace recruitment) to unacceptable (real-time facial recognition).
That makes sense from a product-safety point of view, with providers of AI systems expected to meet rules and requirements before putting their products out. Yet, the category of high-risk applications is broad, and the downstream chain of responsibility in an application like ChatGPT shows how tech can blur product-safety frameworks. When a lawyer relies on AI to craft a motion that unwittingly becomes full of made-up case law, are they using the product as intended or misusing it?
It’s also not clear how exactly this will work with other data-privacy laws like the EU’s GDPR, which was used by Italy as justification for a temporary block on ChatGPT. And while more transparency on copyright-protected training data makes sense, it could conflict with past copyright exceptions granted for data mining back when AI was viewed less nervously by creative industries.
All this means there’s a real possibility that the actual outcome of the AI Act might entrench the EU’s dependency on big US tech firms from Microsoft to Nvidia. European firms are champing at the bit to tap into the potential productivity benefits of AI, but it’s likely that the large incumbent providers will be best-positioned to handle the combination of estimated upfront compliance costs of at least $3 billion and non-compliance fines of up to 7% of global revenue.
Adobe has already offered to legally compensate businesses if they’re sued for copyright infringement over any images its Firefly tool creates, according to Fast Company. Some firms may take the calculated risk of avoiding the EU entirely: Alphabet Inc. has yet to make its chatbot Bard available there.
The EU has a lot of fine-tuning to do as final negotiations begin on the AI Act, which might not come into force until 2026. Countries such as France that are nervous about losing more innovation ground to the US will likely push for more exemptions for smaller businesses. Bloomberg Intelligence analyst Tamlin Bason sees a possible “middle ground” on restrictions. That should be accompanied by initiatives to foster new tech ideas, such as promoting ecosystems linking universities, startups and investors. There should also be more global coordination at a time when angst around AI is widespread—the G7’s new Hiroshima AI process looks like a useful forum to discuss issues like intellectual property rights.
Perhaps one bit of good news is that AI is not about to destroy all jobs held by human compliance officers and lawyers. Technology consultant Barry Scannell says that companies will be looking at hiring AI officers and drafting AI impact assessments, similar to what happened in the aftermath of the GDPR. Reining in these new robots requires more human brainpower—perhaps one twist you would not get in an Isaac Asimov story. ©Bloomberg
Lionel Laurent is a Bloomberg Opinion columnist covering digital currencies, the European Union and France