The euphoria mustn’t overwhelm the ethical questions on AI
4 min read · 16 Feb 2023, 10:15 PM IST
We must discuss its climate impact, plagiarism and biases amid all the amazement over its abilities

The Colorado State Art Fair in 2022 was abuzz with angry artists. One lamented, “Imagine spending hours upon hours on a piece of work to proudly present it for a competition and being beaten by somebody who pressed ‘generate’ on a screen." Another ranted: “This thing wants our jobs, it’s actively anti-artist." This ‘thing’ was Midjourney, one of the plethora of generative AI products dropping like confetti on us hapless humans. Artist Jason Allen used Midjourney to create a painting, entered it as ‘Jason M. Allen via Midjourney’, and won the prestigious Blue Ribbon award for emerging artists. Allen was unrepentant: “This isn’t going to stop. Art is dead, dude. It’s over. AI won. Humans lost."

While generative models like Midjourney, DALL·E 2 and GPT-3 created ripples in the tech world, the launch of ChatGPT a mere two months ago has taken the world by storm. It reached 100 million users in two months, a milestone that took Twitter five years and the world wide web seven. There is enormous nervous excitement about how it could change everything: replace search, reshape Big Tech, accelerate Artificial General Intelligence. Lost somewhere in this adulatory cacophony are the dangerous ethical issues around OpenAI’s ChatGPT in particular and generative AI in general. Let me address three big issues that need immediate attention.
Generative AI models are terrible for the environment: Our planet is already on the edge with climate change, and these models could accelerate that trend. The cloud, and the AI models running on it, consume huge amounts of energy. For instance, training a transformer model with 213 million parameters just once can emit as much carbon as 125 New York–Beijing round-trip flights, and GPT-3 has 175 billion parameters. Most of these models live on the cloud; ChatGPT lives on Microsoft Azure, for instance. The ‘cloud’ is composed of hundreds of data centres around the planet, guzzling water and power in alarming quantities. An article in The Guardian revealed that data centres consume 200 terawatt-hours of electricity per year, roughly as much as South Africa. The cloud is no fluffy cotton-wool thing; as author Kate Crawford writes, “The cloud is made of rocks and lithium brine and crude oil."
Generative AI models plagiarize: Getty Images is suing Stability AI, the maker of Stable Diffusion, in the London High Court, accusing it of using its images illegally. A clutch of artists is doing the same in the US. If you got Stable Diffusion or DALL·E 2 to comb the web and combine multiple images (say, a Pablo Picasso Mona Lisa), who owns the result: you, the AI model, or Picasso and Da Vinci? OpenAI claims ownership of all DALL·E-created images, though paid users can reproduce, sell and merchandise the images they create. This is a legal quagmire: the US Copyright Office refused to grant a copyright to a work created by an AI model, the Creativity Machine, yet both Australia and South Africa have declared that an AI can be considered an inventor. There is the associated fear of AI models taking your job; a ‘generate’ button could theoretically replace artists, photographers and graphic designers. But the model is not really creating art; it is only crunching and manipulating data, with no idea or sense of what it is doing or why. That is certainly not art.
The models are inherently biased: OpenAI has ethical guard-rails around ChatGPT, and it does not spout racist or sexist content. But as Gary Marcus writes, these guard-rails are thin, the model is amoral, and we are sitting on an ethical time bomb. Currently, ChatGPT does not crawl the web, but later versions (like the Bing integration) will, and the whole swamp that is the internet will be open to it. AI researcher Timnit Gebru was at Google when she co-wrote a seminal research paper calling these large language models “stochastic parrots", because they merely repeat words without understanding their meaning and implications. ChatGPT does not really understand what it says; it is designed to be plausible rather than truthful. GPT-3 has been trained on Reddit and Wikipedia: 67% of Reddit users in the US are men, and less than 15% of Wikipedians are women. DALL·E 2 tilts towards creating images of Caucasian men and sometimes over-sexualizes images of women. It is trained on vast troves of open-source images from the internet, whose imagery and language are still overwhelmingly Western, male and sexist.
As these models become increasingly powerful, these ethical issues will become even more dangerous. The early signs are not encouraging: Timnit Gebru, after publishing her prescient paper on these dangers, was summarily fired from Google.
Jaspreet Bindra is a technology expert, author of ‘The Tech Whisperer’, and currently pursuing his Masters in AI and Ethics at Cambridge University