Fears about AI’s existential risk are overdone, says a group of experts
- Blaise Agüera y Arcas and his co-authors argue that tackling more immediate concerns will mitigate long-term threats
IN THE PAST year, as the startling capabilities of artificial intelligence (AI) have emerged into public view, attention has been drawn to the existential risk, or “x-risk”, that the technology may pose. The concern is that computers endowed with superhuman intelligence might destroy most or all human life. The majority of researchers raising the alarm are sincerely motivated by concern about AI-related risks, present and future. However, calls to action to mitigate superintelligent-AI x-risk may both impede the development of beneficial uses of AI, of which there are many, and distract regulators, the public, companies and other researchers from addressing important shorter-term risks.