Over the past few years, we have seen a rapid proliferation of smart home devices. These gadgets understand the sentences we address to them well enough to respond with actions or conversation, and their intelligence has grown steadily more human with each passing year. The more we use them, the more they learn about us and the worlds we each uniquely inhabit, until eventually they will respond so convincingly to everything we say that they become indistinguishable from human companions. At the rate they are progressing, we can already imagine a future in which touch-based input devices seem as quaint and charmingly old-fashioned as handwritten letters. To an entire generation, typing will be a skill they never needed to master. That future is not as far away as it might seem.
As we have grown more trusting of our smart devices, and as they have come to understand exactly what we mean when we speak to them, the uses to which they can be put have begun to exceed even our wildest imaginings. Elderly people, particularly those with failing mental faculties, have begun to lean on these devices for answers, knowing that even if they ask a question for the hundredth time, they will receive the same patient response, something no human caregiver could be expected to provide. As these conversational devices become the hub for all the connected devices in a home, they will be able to actively monitor the well-being of its inhabitants and talk them through what to do in an emergency.
We have already seen how easily children interact with these devices. We have all been guilty of encouraging them to hold long conversations with the smart assistants on our phones when we want to keep them occupied while we are busy with something else. Toy manufacturers such as Mattel have seized this opportunity to produce interactive toys that actively engage with their young owners. The Hello Barbie doll uses cloud-based Artificial Intelligence (AI) to converse with children on topics as diverse as music, fashion and careers, as well as abstract emotional ones such as how the child is feeling.
While all this sounds positive, these new developments in conversational intelligence are likely to throw up a host of challenges the likes of which we have never had to consider before. As good as these devices are as caregivers, no one has studied the long-term psychological effects of interacting with social machines, particularly on the very old or very young. Before we allow these devices to ensconce themselves firmly in our world, we would do well to evaluate how our increasing reliance on this seemingly intelligent technology is going to affect us.
A couple of years ago, police officers investigating a murder in Bentonville, Arkansas, issued a warrant requesting all electronic data, whether audio recordings, transcripts or other text records, from a smart home device found at the scene of the crime. They knew the device was not always listening and recording; their hope was that it had been intentionally woken up, perhaps to play a song, at an opportune moment, and that an analysis of the background audio captured then might offer evidence of an argument or fight. As much as requests like this raise questions of privacy and freedom of speech, it is likely that courts will allow investigators access to such information if it could help solve a crime.
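The investigators' reasoning turns on how wake-word gating works: the microphone feeds a short rolling buffer that is constantly overwritten, and audio is only retained once the wake word fires. The sketch below is purely illustrative; the names (detect_wake_word, CloudSession, FRAME_RATE) are assumptions, not any vendor's actual code.

```python
from collections import deque

FRAME_RATE = 50                        # assumed audio frames per second
ring = deque(maxlen=FRAME_RATE * 2)    # ~2 s rolling buffer, constantly overwritten

def detect_wake_word(frames) -> bool:
    """Placeholder for an on-device keyword spotter (e.g. 'Alexa')."""
    return False                       # stub: a real model would score the audio

class CloudSession:
    """Stands in for the network path that stores audio server-side."""
    def __init__(self):
        self.active = False
    def start(self, preroll):
        self.active = True             # only from this point is audio retained

session = CloudSession()

def on_audio_frame(frame: bytes) -> None:
    ring.append(frame)                 # older frames silently fall off the buffer
    if not session.active and detect_wake_word(ring):
        session.start(list(ring))     # wake word heard: recording begins
```

If a design along these lines holds, the only recoverable evidence is whatever happened to be captured after a wake-up, which is exactly why the police hoped the device had been asked to play a song at the right moment.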
This is just the tip of the iceberg. As we put these smart devices to a wider range of uses, we are going to face a number of legal and ethical questions about what must be done with what these devices are told. For instance, what should happen when a child tells his smart home device that his uncle is touching him in an unwelcome way? Does the manufacturer of the device have a moral obligation to report this information to law enforcement? Does that obligation assume greater urgency if there is a credible risk that the child might be harmed? If the entity that had information about the threat consciously chose not to report it, will it be liable if the child in question is harmed or, worse, dies? And how does all of this square with the general, overarching obligation to be particularly sensitive to a child's right to privacy?
These are questions that manufacturers of smart devices are already having to answer as they build an ever-expanding library of responses to the conversations their devices are having. As users have grown comfortable confiding in their smart assistants, they have begun to ask these artificial companions for help in dealing with suicidal feelings and clinical depression. In such situations, the right response from the device can mean the difference between life and death. Conversational AI programmers have had to collaborate with psychologists to figure out what those responses should be, but they will inevitably make mistakes as they grapple with increasingly complex situations.
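One way to picture that collaboration is a router that intercepts high-risk utterances before the normal conversational model ever sees them, and answers only with wording vetted by clinicians. This is a minimal sketch under that assumption; the phrase list, helpline text and function names are all hypothetical, not any manufacturer's implementation.

```python
# Purely illustrative: route crisis utterances to a fixed, clinician-approved
# response instead of letting the open-ended model improvise.
CRISIS_PHRASES = ("want to die", "kill myself", "end my life")

VETTED_RESPONSE = (
    "I'm really sorry you're feeling this way. You are not alone: "
    "please consider talking to someone at a suicide-prevention helpline."
)

def respond(utterance: str, fallback) -> str:
    text = utterance.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return VETTED_RESPONSE         # fixed wording, never improvised
    return fallback(utterance)         # everything else goes to the usual model

# Usage:
# respond("I want to end my life", fallback=lambda u: "ordinary reply")
```

Even so simple a sketch shows where the mistakes will come from: the phrase list will never cover every way distress is expressed, which is precisely the gap the article worries about.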
It is clear that liability in the age of conversational AI is going to become layered and far more complex. Perhaps the time has come for us to create a brand new framework within which questions such as these should be asked and answered.
Rahul Matthan is partner at Trilegal and author of ‘Privacy 3.0: Unlocking Our Data Driven Future’