A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to being machines that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.
While the computer scientists agreed that we are a long way from HAL, the computer that took over the spaceship in 2001: A Space Odyssey, they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviours.
The researchers—leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California—generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet.
But they agreed that robots that can kill autonomously are either already here or will be soon.
They focused particular attention on the spectre that criminals could exploit artificial intelligence systems as soon as they were developed.
The researchers also discussed possible threats to human jobs, such as self-driving cars, software-based personal assistants and service robots in the home.
The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who organized a meeting in 1975 of the world’s leading biologists—also at Asilomar—said it was important for scientific communities to engage the public before alarm and opposition become unshakable: “If you wait too long and the sides become entrenched like with GMO,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”
©2009/THE NEW YORK TIMES