
Is there AI doctor in the house?

AI is the sleeping giant of healthcare, raising crucial questions around diagnosis and privacy issues

Israeli scientists have created a human heart that completely matches all the anatomical properties of a human patient, using a 3D printer (Getty Images)

Bengaluru: Seven years ago, when Dr Naresh Trehan of Delhi’s Escorts Heart Institute and Research Centre (he’s now chairman and managing director of Medanta) used a robotic arm with an endoscopic camera attached that could provide a three-dimensional (3D) image of organs, the news was received with a sense of awe.

Today, cardiac surgeons routinely use 3D printers to generate replicas of patients’ hearts to strategize for complex procedures. Moreover, hospitals like the All India Institute of Medical Sciences (AIIMS), Max Super Speciality Hospital, Apollo and Fortis Healthcare all use 3D-printed organ replicas. Doctors at the Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), for instance, have also used 3D printers to fix skull deformities.

Robots, too, routinely assist in surgery. These are not the machines you see in Terminator or Rajinikanth movies but robotic assistants used across private and public hospitals throughout the country.

It was just last year that Dr Tejas Patel, chairman and chief interventional cardiologist at Apex Heart Institute, Ahmedabad, performed the world’s first in-human telerobotic coronary intervention: the patient, a middle-aged woman with a blocked artery, was in the operation theatre of his hospital while he operated from 32 kilometres away.

Robotics and 3D printing are simply cases in point. Healthcare is no longer just about doctors. “Healthcare is becoming a non-doctor domain. Tomorrow’s doctor could be an engineer, a robotic engineer, or could be just a guy who is so good in mathematics and is able to unwind pieces of DNA and create something out of this. I imagine tomorrow’s doctors would be trained differently, with a skill set which is very different from what we possess,” says Dr Rajakumar Deshpande, director of neuro and spine surgery at Fortis.

He has a point. On the consumer side, smartphone penetration and advances in image recognition are turning phones into powerful at-home diagnostic tools, while cutting-edge technologies are helping doctors, researchers and technology companies revolutionize healthcare.

Doctor + technology

Consider how Tata Consultancy Services uses virtual reality (VR) to help children with neuromuscular disabilities become more self-dependent. For this, it teamed up with Barclays to launch the TCS VHAB (virtual habilitation) solution at the ZEP Rehabilitation Centre in Pune. VHAB uses motion sensors, progressive analytics, gesture analysis, finger mapping and real-time simulation in an immersive VR environment to create a series of personalized simulated environments that children can interact with, helping them develop skills to carry out tasks closely related to real life.

The internet of things (IoT) trend is also deeply influencing healthcare: smartphones and wearables like wristbands and smartwatches keep track of your overall health with the help of sensors and can even alert doctors when needed.
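
To make that concrete, here is a minimal, hypothetical Python sketch of how such an alert loop could work. The thresholds, the read_heart_rate() source and the notify_doctor() hook are illustrative stand-ins, not any vendor’s API: a wearable streams readings, and out-of-range values trigger a notification.

```python
# Minimal sketch of a wearable alert loop (illustrative only).
# read_heart_rate() and notify_doctor() are hypothetical stand-ins for a
# device SDK and a messaging service; the thresholds are assumptions.
import random
import time

LOW_BPM, HIGH_BPM = 50, 120  # assumed alert thresholds


def read_heart_rate() -> int:
    """Stand-in for a sensor reading from a wristband or smartwatch."""
    return random.randint(45, 130)


def notify_doctor(bpm: int) -> None:
    """Stand-in for paging a clinician via SMS, app push, etc."""
    print(f"ALERT: abnormal heart rate of {bpm} bpm, notifying doctor")


def monitor(samples: int = 10, interval_s: float = 1.0) -> None:
    """Read the sensor periodically and alert on out-of-range values."""
    for _ in range(samples):
        bpm = read_heart_rate()
        if bpm < LOW_BPM or bpm > HIGH_BPM:
            notify_doctor(bpm)
        time.sleep(interval_s)


if __name__ == "__main__":
    monitor()
```

Real systems add smoothing, per-patient baselines and clinical sign-off before a doctor is paged; the point is simply that the sensor, the rule and the alert are all software.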

King Abdullah University of Science and Technology (KAUST) researchers, for instance, are trying to make it easier to analyse sweat for critical biomarkers (measurable indicators of a biological state). According to an April 25 press statement, they have developed a wearable system that can handle the rigors of skin contact and deliver improved biomarker detection. Changes in glucose levels could also be tracked as accurately in sweat as in blood.

Researchers the world over are also experimenting with a gene-editing tool known as CRISPR-Cas9 to address, among other things, genetic blood disorders such as beta (β)-thalassemia and sickle cell anaemia, for which there is no cure as yet. CRISPR (clustered regularly interspaced short palindromic repeats), along with the protein Cas9 (CRISPR-associated protein 9), an enzyme that acts like a pair of molecular scissors capable of cutting strands of DNA, is also being explored as a treatment for diseases like AIDS, amyotrophic lateral sclerosis (ALS) and Huntington’s disease.

Sudhakar Varanasi, a healthcare consultant, says, “I see CRISPR-based possibilities; they are truly there. Even within today’s regulatory frameworks, enormous possibilities exist in what can be done with biology.”

The power of big data

These technologies may sound impressive, and they indeed are. However, it’s also the ability of doctors and technology companies to sift through humongous amounts of data, the phenomenon known as Big Data, that is at the core of the healthcare revolution. It’s here that artificial intelligence (AI), in conjunction with IoT (read smartphones, sensors, wearables, etc.), robotics, VR and the like, is playing a very important role.

“Big Data is going to transform medical science, and I think it’s one of the areas where we (doctors) have to learn a few things,” says Dr Deshpande. But doctors, he explains, are tradition-bound and tend to operate within a compartment; really going outside that compartment and doing something different requires a different kind of training and learning, which is not happening. “There are certain exceptions where we try to make a doctor understand what today’s technology can do. For example, a typical doctor would like to draw some statistical output about his patients, but that is mundane given the possibilities,” he adds.

Radiologists, for instance, use magnetic resonance imaging (MRI) to detect and assess the aggressiveness of malignant prostate tumours. However, it typically takes practice on thousands of scans to learn how to accurately determine whether a tumour is cancerous or benign, and to accurately estimate the grade of the cancer. UCLA researchers have developed a new AI system to help radiologists improve their ability to diagnose prostate cancer, according to an April 16 press note.

The system, called FocalNet, helps identify and predict the aggressiveness of the disease by evaluating MRI scans, and it does so with nearly the same level of accuracy as experienced radiologists. In tests, FocalNet was 80.5% accurate in reading MRIs, while radiologists with at least 10 years of experience were 83.9% accurate, according to the UCLA researchers.

NVIDIA researchers, on their part, have generated synthetic brain MRI images for AI research—the idea is to help doctors learn more about rare brain tumours.

Eric Horvitz, technical fellow and director at Microsoft Research, calls AI the “sleeping giant for healthcare”. Last year, Microsoft embarked on Healthcare NExT, an initiative that aims to accelerate healthcare innovation through AI and cloud computing. The company is working with partners like Adaptive Biotechnologies to create antigen maps, and on Project EmpowerMD, which uses AI to create medical notes from the conversation between a physician and patient.

As part of Microsoft’s AI Network for Healthcare initiative, Microsoft India announced a partnership with Apollo Hospitals for an AI-powered Cardiovascular Disease Risk Score API in India. Microsoft is also applying AI to devices for the early detection of diabetic retinopathy to prevent blindness.

Spotting risk factors

Researchers at Google used a deep learning neural network (deep learning is a machine learning, or ML, technique; ML itself is considered a subset of AI) trained on retinal images to find cardiovascular risk factors, according to a paper (go.nature.com/2EXV2Ae) published in Nature last year.

The research revealed that not only was it possible to identify risk factors such as age, gender, and smoking patterns through retinal images, but it was also “quantifiable to a degree of precision not reported before”. In January 2018, a Google patent was published that was aimed at using ML to analyse cardiovascular function from a person’s skin colour or skin displacement.

Moreover, a number of ML-as-a-service platforms are integrating with US Food and Drug Administration-approved home monitoring devices, alerting physicians when there is an abnormality, according to a 2019 CB Insights report. Researchers at Duke University, for instance, developed an “Autism and Beyond” app that uses the iPhone’s front camera and facial recognition algorithms to screen children for autism.

On its part, the Defence Advanced Research Projects Agency (DARPA) has spent millions of dollars on its advanced prosthetics programme, which it started in 2006 with Johns Hopkins University to help wounded veterans. Researchers have now begun using ML to decode signals from sensors on the body and translate them into commands that move the prosthetic device. Last June, researchers from Germany and Imperial College London used ML to decode signals from an amputee’s stump and power a computer to control the robotic arm. The research on the “brain-machine interface” was published in Science Robotics.

The problems with AI

This does not mean that AI can dispense with doctors. AI companies need medical experts to annotate images to teach algorithms how to identify anomalies. The approach here has been largely collaborative. Technology firms and government agencies that are investing heavily in annotation are making the datasets publicly available to other researchers.

DeepMind’s AI for detecting eye disease, for instance, involved rigorous work to make sure the data was accurate and in the right format. For example, around 1,000 scans were graded by junior ophthalmologists, and any disagreements in labelling were resolved by a certified senior specialist with over 10 years of experience. But AI algorithms can also be biased, and when diagnosing a patient, that could be a matter of life and death.

Bad actors could game AI for financial gain. For example, hospitals could make alterations to patient scans in order to generate diagnoses that drum up higher payouts from payers. Or doctors could tweak their language to produce an intended diagnosis, whether or not it’s accurate.

Further, according to the CB Insights report, unintended misdiagnoses generated by AI could lead to costly medical errors. Medical errors cost the US health system about $20 billion annually and account for 100,000 to 200,000 deaths each year. Errors brought about by ill-trained AI tools could add to these losses, with hospitals bearing the brunt of the consequences.

Slight tweaks can dupe AI systems into seeing something that isn’t there. Researchers showed, for example, that altering just a few pixels in an image of a benign skin lesion led an AI system to misdiagnose the lesion as malignant; the system also produced an incorrect diagnosis when the scan was rotated. Further, insurers and startups are beginning to use AI to compute a car owner’s “risk score”, analyse images of accident scenes, and monitor driver behaviour. For instance, China’s Ant Financial, an Alibaba Group affiliate, uses deep-learning algorithms for image processing in its “accident processing system”.
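
The skin-lesion misdiagnosis above is an instance of what ML researchers call an adversarial example. The Python sketch below, using NumPy and a toy linear classifier (a hypothetical stand-in, not the model or data from the study), shows how a pixel-level change far too small to notice can flip a model’s decision from benign to malignant.

```python
# Toy adversarial-perturbation demo (NumPy only).
# The linear "classifier" and the image are hypothetical stand-ins,
# not the system or data from the study described above.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))            # stand-in for a benign lesion image
w = rng.normal(size=(8, 8))           # stand-in for learned model weights
b = -float(np.sum(w * image)) - 0.5   # bias chosen so the clean image scores "benign"


def predict(x: np.ndarray) -> str:
    """Label an image with the toy linear model: score > 0 means malignant."""
    return "malignant" if float(np.sum(w * x)) + b > 0 else "benign"


# Nudge every pixel slightly in the direction that raises the score
# (the sign of the gradient for a linear model), an FGSM-style step.
epsilon = 0.02
adversarial = np.clip(image + epsilon * np.sign(w), 0.0, 1.0)

print("clean image:     ", predict(image))                            # benign
print("perturbed image: ", predict(adversarial))                      # flips to malignant
print("max pixel change:", float(np.abs(adversarial - image).max()))  # at most 0.02
```

Deep networks used in real diagnostic tools are vulnerable to the same kind of carefully chosen perturbations, which is why robustness checks matter before such systems are deployed clinically.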

Despite the limitations of AI, Dr Deshpande believes this is “an amazing time for healthcare if you look at it positively. What we think of as an impossible diagnosis at an early stage can be done. For example, cancer can be picked up even before it can be scanned, or you can find out if you’re going to get Alzheimer’s 20 years from now.”

Moreover, federated learning is emerging as an approach to train AI with sensitive user data while simultaneously protecting privacy. Simply put, patient data never leaves the hospital premises and is not sent to a central cloud server. The model is updated locally (on-premise at the hospital using local data) and only these updates (and model updates from other participating hospitals) are sent to the cloud.
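
For readers curious what that looks like mechanically, here is a minimal Python sketch in the spirit of federated averaging, with NumPy, simulated data and a plain linear model standing in for real hospitals and a real framework: each site computes an update against its own data, and only those updates are averaged centrally.

```python
# Minimal federated-averaging sketch (NumPy only, simulated data).
# Each "hospital" trains on data that never leaves its premises; only
# model updates (weight deltas) are sent to the central server.
import numpy as np

rng = np.random.default_rng(42)
n_features, n_hospitals = 5, 3
true_w = rng.normal(size=n_features)        # hidden ground truth (simulation only)

# Simulated local datasets, one per hospital.
local_data = []
for _ in range(n_hospitals):
    X = rng.normal(size=(100, n_features))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    local_data.append((X, y))

global_w = np.zeros(n_features)             # the shared model kept by the server
lr = 0.1                                    # local learning rate (assumed)

for _ in range(50):                         # communication rounds
    updates = []
    for X, y in local_data:
        w = global_w.copy()                 # each hospital starts from the global model
        grad = X.T @ (X @ w - y) / len(y)   # one gradient step on purely local data
        w -= lr * grad
        updates.append(w - global_w)        # only this delta leaves the hospital
    global_w += np.mean(updates, axis=0)    # the server averages the updates

print("federated model:", np.round(global_w, 2))
print("ground truth:   ", np.round(true_w, 2))   # the averaged model converges towards it
```

In production systems the updates themselves can still leak information, so techniques such as secure aggregation and differential privacy are typically layered on top.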

In conclusion

According to Ravi Ramaswamy, senior director and head of health systems at Philips Innovation Campus, the challenge is to make sense of the humongous amounts of data (standardized and non-standardized) and also “come up with financial models which make sense for each of the participants”.

Dileep Mangsuli, chief technology officer of Wipro GE Healthcare, cites the example of family doctors in China who get paid a certain amount of money every year. However, if anyone in the family falls sick, some amount is deducted from the disbursement because a doctor’s responsibility is to keep the family healthy, and not to let them fall sick and then treat them.

“What’s changing is the data that is available today,” explains Mangsuli. “Google is continuously collecting all the data from us, all the devices are collecting data, all the diagnostic equipment are collecting data. Can all this be connected and made some sense out of? That’s what all companies are looking at,” he says, adding that the need of the hour is to create “a solution that is going to actually make a family doctor out of all this”.

Dr Deshpande aptly sums up the challenges ahead and why doctors are here to stay: “There’s so much AI, ML and IoT can do, but you need to bring in an emotional quotient that can make a patient comfortable. Healthcare is all about satisfaction.” For now at least, technology cannot replicate the doctor’s bedside manner.


ABOUT THE AUTHOR
Leslie D'Monte
Leslie D'Monte specialises in technology and science writing. He is passionate about digital transformation and deeptech topics including artificial intelligence (AI), big data analytics, the Internet of Things (IoT), blockchain, crypto, metaverses, quantum computing, genetics, fintech, electric vehicles, solar power and autonomous vehicles. Leslie is a Massachusetts Institute of Technology (MIT) Knight Science Journalism Fellow (2010-11), author of 'AI Rising: India's Artificial Intelligence Growth Story', co-host of the 'AI Rising' podcast, and runs the 'Tech Talk' newsletter. In his other avatar, he curates tech events and moderates panels.
Published: 08 May 2019, 09:32 PM IST