AI parental aids cannot credibly claim expertise

It takes a village to raise a child. Without one, parenting can be a hard slog. Artificial intelligence (AI), though, has been claiming it has the answer. A couple of years ago, a company launched iPal, a robot that offered to wake a child up, talk, play games, read out stories and even guide her morning routine. Another company sells Snoo, a smart bassinet that promises to coo and rock a baby to sleep if its AI sensors find the child fussing. Nanit, a baby monitor fitted with an HD camera, tracks a baby’s sleep through the night and gives parents a report in the morning. And ChatterBaby maps an infant’s cries against a database of wails to interpret why the tiny one is howling. Hungry? Scared? Or angry at being left alone? Indeed, AI parental aids are a fast-emerging market, and many of them are aimed at parents of older children too. As kids grow up glued to internet-linked digital devices, they inevitably become vulnerable to addiction, cyber-bullies, sexual predators and other perils. Understandably, parents want underage digital natives monitored in some way to keep them out of harm’s way. Enter AI, with an expanding suite of surveillance tools.

Keeping kids under watch is no longer only about blocking adult content or capping screen time. Bosco, an AI-powered app that has begun advertising in India, claims to alert parents to shifts in their children’s mood by analysing their phone calls (it also tracks their location). Another app, Bark, scans social media usage, texts and emails to flag any sign of a suicidal tendency. All of this raises many questions. This may be an age of tech solutions for just about everything, but what happens when we rely on AI for judgement calls within intimate human relationships? Can any software, whatever database it is fed, be good enough at a job of such high cognitive complexity? The assertion that people are broadly alike in their emotional responses has been made often enough, but does that make AI a reliable reader of hearts and minds? At a time when AI has yet to acquit itself of input biases in far more elementary decisions, any pretence to psychiatric wisdom opens a new frontier that should disturb us. Too often, machines evoke awe for doing what we thought they could not. It may suit tech evangelists to peddle the myth that these tools are foolproof, in contrast with error-prone humans, but evidence of the casual prejudices they soak up in their ‘learning’ process cannot be brushed aside. It has taken societies long to accept that not every child can be force-fit into the great bulk of the bell curve. Not every child speaks or behaves the same way, and the cues that parents must pick up are usually unique. This sensibility could be undone if hawkers of AI parental aids get their way.

As the line between human agency and algorithmic prompts gets blurred, and as software reshapes our lives in almost every sphere, it is for users to resist manipulation and stay aware of technology’s limits. AI can beat humans at board games, hold conversations, sing songs, create what seems like art and write what may pass for poetry, but these systems are not human; they are workshop creations. They are not sentient. They are programmed to act as they’re told, which means relying on them requires us to be convinced of the validity of their instructions. In many fields of use, trials are harmless. In intimate spaces like parenting, however, both the makers and users of AI tools should pause and think before it is too late.
