‘Artificial’ diagnosis

Dr R.V. Parameshwaran, head of the department of Nuclear Medicine at Manipal Hospitals, is a close friend of mine, more to my good fortune than to the good doctor’s! Apart from being a brilliant physician, Param is possessed of an insatiable curiosity about all things technical, and passionately believes that technology can be harnessed by the medical profession to better serve the sick. He and I have collaborated on assessing investments in start-ups focused on the healthcare space in India.

He recently passed on to me an article on the use of artificial intelligence (AI) in medicine from the website Futurism. Founded last year, the site focuses on emerging technologies and attempts to break complex content down into components that are easily understood. It has an unusually large following on Facebook, of over 2 million followers.

The article reported that IBM has announced plans to acquire Merge Healthcare Inc., a company that today helps medical workers access and store images. The target firm is valuable to IBM not because of its underlying technologies, but because it owns 30 billion medico-radiological images, including X-rays, computerized tomography (CT) and magnetic resonance imaging (MRI) scans.

IBM hopes that one day it will be able to build ‘deep-learning’ software that looks for patterns in these images, in the same way that Facebook or Flickr recognizes your face in a photograph. Deep learning involves training on vast amounts of data so that software can recognize patterns and apply a variety of other computational methods to arrive at a decision.

This type of deep-learning software might someday lead to advances in medical radiology so great that it streamlines the diagnostic processes around heart disease, cancer and other illnesses, thereby making IBM and its Watson software an important player in the over $7 trillion (yes, trillion) healthcare market.

At least one other player, Enlitic, has claimed that its software was 50% more accurate than a panel of four radiologists in detecting pulmonary (lung) tumours, but it took the company a year to find enough anonymized medical images to put its software to the test. The acquisition of Merge Healthcare will immediately give IBM access to a treasure trove of images to work with.

Even if IBM gains access to these images, building enough AI capability to provide diagnoses for the entire human body, a universe in its own right, will not be without challenges. Even Enlitic, which is further along, has only made claims about detecting tumours in one of the body’s organs.

John Eng, a professor of radiology at Johns Hopkins University, who is quoted in the article, says that there is a lot of ambiguity and fuzziness in medico-radiological images; since this is ‘messy’ data, the data hoard itself is going to be a limiting factor in what IBM can do with Watson.

In the end, it takes the knowledge of a skilled practitioner like Param to cognitively correct for this ‘data mess’ and accurately diagnose a disease. Often, the practitioner simply needs to go with his or her gut, and intuit what the correct diagnosis is.

While there is no doubt that a deep-learning computer can assist doctors with pattern recognition, a good doctor seldom thinks like a computer. His or her intelligence needs to be multifaceted when it comes to making a diagnosis, since diagnosis cannot rest solely on pattern recognition and the software world’s ‘yes/no’ and ‘if/then’ type of logic.

And then, there is the need for human interaction. I grew up with parents who were both successful doctors, and was often witness to how they dealt with their patients. A good bedside manner is key to being a trusted doctor. My father, who was a physician and cardiologist in an age when cardiology as a specialty was in its infancy and diagnostic tools and aids were minimal, had a marvellous bedside manner. He could switch from being ebullient and jocular when he knew that the patient had presented with a curable condition, to being empathic and supportive when that was not the case. Either way, the patient and their family left feeling better, at least psychologically.

He had the remarkable ability to exhibit what in Sanskrit and other Indian languages is known as ‘vatsalyam’—which is a word that can’t be quite fully translated into English. English, being a younger language, simply hasn’t yet developed the deep-learning nuances needed to fully encompass that Sanskrit word into a single word of its own. As is often the case with English, it will simply appropriate the word if needed, thereby keeping lexicographers busy.

While English will further develop its lexicon over time, a computer will never be able to develop to the point where it can stand in for the interchange between two human beings, especially when empathy or encouragement needs to be displayed. Such sensitivity is beyond the reach of any machine. Imagine being told that you have an incurable cancer by a robot or a machine, or by Siri or Cortana. As a patient, you would rebel against the very idea of being forced to interact with a machine rather than with a human being.

As the American broadcaster Edward R. Murrow said, “In the end, the newest computer can merely compound, at speed, the oldest problem in the relations between human beings, and in the end the communicator will be confronted with the old problem, of what to say, and how to say it.”

Siddharth Pai is a management and technology consultant.
