The rise of artificial intelligence (AI) has been rapid, taking it from science fiction to reality in no time.
On the one hand, it can help you book an Uber or write your job application. On the other, it is being used to design the cancer drugs of tomorrow and help space telescopes find signs of life on exoplanets light years away. Add generative AI to the mix, and you can see how AI is changing the way we live. In the middle is the “human” question: Are we aware of the consequences of AI entering our lives?
This is just one of the questions that London-based Madhumita Murgia, the first AI editor of the Financial Times, tries to answer in her book Code Dependent: Living in the Shadow of AI, through 10 stories of individuals whose lives have been affected by AI systems.
These stories from across the world look at everyone from gig workers (systematically underpaid and undercut by AI algorithms) to doctors and activists (who are profiled using facial recognition AI). In India, we learn about a doctor using an AI app to analyse patient X-rays and estimate the risk of tuberculosis.
Elsewhere, Murgia documents AI sweatshops in Nairobi, Kenya, where young workers categorise and label graphic text snippets (describing child sexual abuse, murder, suicide, and other harmful topics) that help train AI engines to identify, block and filter such user queries. They also screen distressing content for clients like Meta, the social media giant that owns Facebook and Instagram. She touches upon the trauma that content moderators face after viewing hours and hours of such content. As the book illustrates, it is this outsourced work that ensures AI recommendation engines on social media apps don’t spew poisonous content.
Murgia was clear that she wanted to look beyond Silicon Valley, which, as she says in the book, is the nexus of technological power. “I wanted to travel and bring to life stories from places that other people don’t... I wanted to be as geographically broad as possible,” says Murgia, who was recently in India to promote her book, in an interview with Lounge.
“The most challenging part was figuring out who would make the best stories. Because in many cases, people are either unaware that they’ve been affected by AI systems, or if they’re aware, and if they’ve been harmed by it, they don’t want to talk about it. They want to move on.”
Murgia’s book comes at an interesting time. The pace at which AI has developed over the last three to four years, around the time she started working on the book, is exciting as well as alarming. “The big change in the last two years has been the pace of development of the technology, and how quickly it’s been rolled out simultaneously. There’s such a short gap between the two that there are a lot of misconceptions. It’s hard to be aware when things go wrong, because it’s all happening very quickly,” says Murgia.
“The challenge is throwing some cold water, taking a step back and trying to show the big picture, because right now there’s just a lot of hype and excitement (around AI),” she says.
Edited excerpts from the interview:
I was always fascinated by how science impacts people, which is why I chose immunology for my master’s. I was curious about what happens when science and society cross over. My first job as a journalist was at Wired magazine, where I got to know some of today’s best-known entrepreneurs. Since then, the lens through which I’ve written about technology has been: who are the people behind it? How does it affect us as a society?
All of the media focuses on Silicon Valley and the big tech companies and the people we put on a pedestal, like Elon Musk, Mark Zuckerberg and now Sam Altman. Even in India, it’s about the people behind the big companies. I wanted to look at the rest of us: how is AI changing our work, education, health, the way that we live?
Science and health are the areas where AI will have the biggest impact. The company that I focus on in the book, Qure.ai, which is Mumbai-based, is going to do an expedition up to Everest Base Camp and use their AI system to diagnose people in Nepal with specific chest conditions that go undiagnosed most of the time. That’s just one example of how this type of technology can reach people who don’t have access to the care the rest of us do.
In terms of negative impact, where I see the failings of AI the most is when you use it for social decision-making systems. Criminal justice, for instance. Should somebody get bail, should somebody be arrested? You see this in government social services.
Often when we use a computer system, we tend to rely on it more than we do on humans. We don’t notice when things go wrong, or we trust it too much.
The technology has become so sophisticated that even people around the world who are aware of this cannot tell the difference between what is real and what is AI-generated.
The issue then becomes: how does anybody know what’s real and what’s fake? Especially when we live most of our lives online.
I think the result is going to be, firstly, a huge flood of fake news. Not fake in the sense that it’s false information, but in the sense that the websites themselves are entirely generated (using AI).
Social media sites are the way in which we get our news now, especially for young people on TikTok, Instagram and Snap, and so there will be a lot less trust in institutional news. The goal will be for us to figure out how to carve out a niche of trust in an era where anything could be fake: images, videos, audio.
There are so many types of bias and discrimination.
When you look at criminal justice, you have communities in the US which are over-policed—African Americans, for example. There’s more data about them in the system, so the AI system is more likely to identify an African American as a risk.
These are all societal behaviours and prejudices that get woven and coded into the system.
Now, we’re replacing humans with AI systems. So, when they do go wrong, nobody spots it because there’s nobody who’s accountable.
Evaluating AI systems is something nobody knows how to do.
Definitely. That’s always been the way with technology. You have the walled gardens like Apple and iOS, but then you also have open-source alternatives like Android. Both are making money and competing in the global market, but they have very different ways in which they approach the same technology. That is the case with AI as well.
Open source isn’t necessarily just small companies. Mistral is a great example of a start-up. Meta is also doing an open-source version of AI and they are hardly a start-up. But I think because of what they’ve done and what Mistral is now doing—there are others like Hugging Face—you will have more participants.
What we could do with AI could be amazing. Solving medical mysteries—things like that excite me.
An example I talk about in the book is about pain. This is a US-specific example, but African Americans consistently experience pain differently from Caucasian patients. Nobody’s been able to figure out why. But I spoke with Ziad Obermeyer, a physician and AI scientist, who looked at scans of people who had self-reported the levels of pain in their knee joints. He found that AI systems could predict levels of pain much better than human doctors could, particularly for African Americans. This shows that we can use it to solve problems we haven’t been able to as humans.
If we can address issues in healthcare, or climate and energy (using AI), or find a way to build a quantum computer, then that’s real progress for me.