AI's ability to predict death may not be as scary as it sounds

Will doctors and hospital administrators put too much faith in the decisions or forecasts of AI because it’s fast and sounds confident?

Summary

  • AI chatbots mustn’t wield greater authority than they deserve, and we shouldn’t confuse probability-based forecasts with lived reality.

When headlines recently said that artificial intelligence (AI) can be used to create a ‘death calculator’ that predicts the day you’ll die, it sounded like something from a terrifying science fiction story. The reaction showed how readily people believe that AI has magical fortune-telling powers. The reality was far less dramatic. The paper that spawned the fracas, in Nature Computational Science, did involve using AI to predict death, but with nothing like that precision. Using both economic and health data on thousands of people in Denmark, an AI-based system was able to predict with about 78% accuracy which people would die within the next four years.

The algorithms used to create actuarial tables already do this kind of statistical forecasting, but the new system, called life2vec, is more accurate and works in a different way. The lead author on the paper, University of Copenhagen complexity science professor Sune Lehmann, said life2vec predicts life events much the way ChatGPT predicts words.

This matters not because such systems might create a scarily accurate ‘death calculator,’ but because of how the forecasts could be used. Such algorithms could be used for ill, to discriminate against people or deny them healthcare or insurance, or for good, by highlighting factors that affect lifespan and helping us live longer. They might also improve the lifespan calculations that some people use to plan their retirements.

It was “wild to see how the results were misrepresented,” Lehmann said. “People said this AI can predict the second you will die with incredible accuracy.” Such reactions reflect how poorly people understand the technology. At the same time, hospitals are incorporating AI to do all sorts of jobs. Will doctors and hospital administrators put too much faith in the decisions or forecasts of AI because it’s fast and sounds confident? Can the medical system use AI responsibly if people have unrealistic ideas about what it can do?

Lehmann said his work in this area is aimed at testing the powers of prediction for all kinds of life events, including job changes, income changes and moving. He’s looking for a more coherent scientific understanding of how algorithms predict complex phenomena; too often, their workings are treated as a mysterious black box. The researchers didn’t choose death out of any morbid preoccupation, but because it’s something that is precisely measured and recorded.

In groups of young people, the question is too easy: you’ll be mostly correct if you predict that nobody will die over the next four years. Predicting death within one year isn’t too hard either; you’d just have to know who was sickest. The further out you go, the harder the future is to predict, until you get far enough ahead that almost everyone will have died. At this stage, AI isn’t likely to surprise anyone on life expectancy. If you’re healthy and not very old, it will predict you’ll live more than four years. It can’t foresee that you’ll get into a freak accident, or predict whether you’ll die in 10, 15 or 20 years, said Andrew Beam, a professor of biomedical informatics at Harvard Medical School.

There’s a risk that AI will play into a human tendency toward authority bias: “If you think someone is smarter than you or has access to information that you don’t have, there’s a real tendency to turn off critical thinking and believe anything that comes out, whether it’s a person or an AI,” he said. ChatGPT synthesizes information, but it’s not very selective and may fold in bad studies and flawed data. “So, if you’re in an area where the science is unsettled or the human knowledge is just not there yet,” he said, “ChatGPT is going to be just as bad if not worse than a person.... We need to be careful when we’re asking it to do things that are still clearly sci-fi.”

Sometimes fiction can provide a reality check by reminding us that our actions influence the future. Consider what happened in Charles Dickens’ A Christmas Carol. The Ghost of Christmas Future gave Ebenezer Scrooge a terrifying preview of loneliness, grief and death. Scrooge then asked a smart, critical question: “Are these the shadows of the things that Will be, or are they shadows of things that May be, only?” If the reporters trying to scare people with life2vec had asked that question, they would have gotten the same answer Scrooge did from the ghost: of course our actions can change the future. A forecast does not set our fate in stone.

This new system reinforces what other studies have shown: income and job type can affect the length of your life. Being poor and having a job where others have power over you is correlated with premature death. Dickens recognized that long ago. Maybe AI can turn this observation into real-life scenarios that will motivate today’s Scrooges to address the inequalities that shorten so many lives. ©Bloomberg
