
From ontology to oncology: Deep science for medical AI

Safe engineering backed by deep science is especially needed in hazard-prone areas that affect our lives

I have dwelt in this column before on the need for us to allow mathematicians to pierce the frontiers of artificial intelligence (AI). More recently, I wrote about Cyc, an attempt to create a vast ontological representation of common-sense knowledge for use in everyday applications. Ontology, though a big word, simply means a set of concepts and categories in a subject area or domain, together with their properties and the relations between them. The bottom-up approach of machine learning, which lets a machine improve its own AI applications as more data arrive, should still be paired with a top-down ontological approach such as Cyc in order to be truly useful in everyday applications.
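To make the term concrete, here is a minimal sketch in Python of what an ontology encodes: concepts, their properties, and the relations between them. The medical fragment below is hypothetical and invented for illustration; it is not drawn from Cyc or from any clinical vocabulary.

```python
# A minimal, hypothetical ontology fragment: concepts with properties,
# plus typed relations between concepts, stored as triples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    name: str

# Concepts and categories in a domain, each with its properties.
properties = {
    Concept("Tumour"): {"kind": "abnormal tissue growth"},
    Concept("Carcinoma"): {"origin": "epithelial cells"},
    Concept("Oncology"): {"kind": "medical specialty"},
}

# Relations between concepts, as (subject, relation, object) triples.
relations = {
    (Concept("Carcinoma"), "is_a", Concept("Tumour")),
    (Concept("Oncology"), "studies", Concept("Tumour")),
}

def is_a(child: Concept, parent: Concept) -> bool:
    """Answer a simple structural question from the triples."""
    return (child, "is_a", parent) in relations

print(is_a(Concept("Carcinoma"), Concept("Tumour")))  # True
```

Even this toy fragment shows the point: the knowledge sits in an explicit, inspectable structure rather than in the weights of a statistical model.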

While many current AI uses, such as Google Assistant, largely focus on mundane everyday activities like finding a location, some applications of AI affect our well-being and safety in profound ways. Allowing computer programmers to write such systems without first grounding them in a solid body of specialized knowledge about hazards in fields such as medicine and engineering can lead to disastrous outcomes. Much like an ontology for common sense, an ontology for medicine grounded in the deep science of cognition is of paramount importance.

The dictionary defines “cognitive” as relating to “the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses”. Sadly, the word is bandied about by the hyperactive marketing departments of some of today’s AI firms, which are becoming increasingly casual in their use of it. Tall claims about the capability of machines that are little more than superfast pattern-matching engines, such as “this machine will find the cure for cancer”, can be seriously psychologically damaging to patients who desperately hope for a cure for their disease.

I have been communicating with John Fox, a professor in the department of engineering science at the University of Oxford, who has generously shared a great deal of information about the use of deep science in technology applications. Fox is an interdisciplinary scientist working on reasoning, decision-making, planning and theories of natural and artificial cognition, and he applies the results to the design and engineering of advanced AI technologies. Fox feels that the synergy between basic research and practical application development is key to achieving sound theoretical foundations in cognitive science, and to the safe engineering of critical AI applications. He should know: he started out with a doctoral degree in cognitive psychology from Cambridge and has applied deep-science techniques to technology over a four-decade career.

Safe engineering backed by deep science is especially needed in hazard-prone areas that affect our lives: structural engineering, for example, which dictates how safe the buildings we inhabit and the roads we travel on are, and bioinformatics, which informs the practice of medicine and will affect us all at some point in our lives. Fox has devoted his entire career to medicine, a challenging domain in which to understand both human and artificial thinking. A tolerance for ambiguity is an integral part of medical practice, and allowing cognitive science to better inform medical decisions and treatment plans can both boost medical education and disseminate good medical practice.

Fox and his colleagues have developed a framework ‘stack’ for the use of systems at the point of care, called CREDO. CREDO allows not just for diagnosis, but also for treatment planning, decisions and follow-up. Fox recently published a paper on CREDO in Elsevier’s respected Journal of Biomedical Informatics, which, unlike some of the journals on PubMed, relies on stringent scientific peer review. CREDO relies on an ontological representation of human knowledge to deploy systems. This is vastly different from a machine that speed-crunches millions of medico-radiological images to come up with a ‘best possible’ answer.
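To illustrate the contrast, here is a minimal, hypothetical sketch of what a knowledge-based decision step at the point of care might look like. The candidate treatments, arguments and patient data are all invented, and this is not CREDO’s actual formalism; the point is only that the recommendation is traceable to explicit arguments over medical concepts rather than emerging from a black box.

```python
# Hypothetical sketch of an argument-based treatment decision.
# All concepts, weights and rules below are invented for illustration.

PATIENT = {"age": 62, "egfr_mutation": True, "renal_function": "impaired"}

# Each candidate treatment carries explicit arguments for (+1) and
# against (-1) it, expressed as conditions a clinician can inspect.
CANDIDATES = {
    "targeted_therapy": [
        (+1, lambda p: p["egfr_mutation"]),                  # for: mutation present
    ],
    "standard_chemo": [
        (+1, lambda p: p["age"] < 70),                       # for: age tolerable
        (-1, lambda p: p["renal_function"] == "impaired"),   # against: renal risk
    ],
}

def support(patient, arguments):
    """Net support for a candidate: sum of the arguments that apply."""
    return sum(weight for weight, condition in arguments if condition(patient))

# Rank candidates by net argument support, keeping the reasoning visible.
ranked = sorted(CANDIDATES, key=lambda c: support(PATIENT, CANDIDATES[c]),
                reverse=True)
print(ranked)  # ['targeted_therapy', 'standard_chemo']
```

A clinician can challenge any single argument in such a structure, which is precisely what a pattern-matching engine trained on millions of images cannot offer.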

With Fox’s permission, I have drawn heavily on his work to explain how cognitive science and its associated ontologies, as they relate to human thinking, work when deployed on machines. This is just a summary view; I will follow up with more in-depth columns as my dialogue with the good professor continues. Nonetheless, this summary covers many complex ideas and will need an attentive read, for which I beg your patience!

CREDO’s roots are in cognitive science, and it has contributed new results to cognitive theory while drawing on an understanding of human expertise and knowledge-based AI. This type of theory is not what marketers call ‘cognitive’; it truly relies on cognitive psychology as its primary scientific tradition. Cognitive scientists have developed a number of explanatory paradigms for theorizing about human cognition, of which CREDO uses four:

•Statics: the study of modular information-processing architectures in which specialized cognitive components, such as short- and long-term memory, perception, reasoning, action and motor control, are organized to support coordinated mental functions in humans (a minimal sketch of such an architecture follows this list).

•Dynamics: in which behaviour is explained computationally (as in mathematical computation). Much theoretical work in cognitive modelling has used simulations of mental processes involved in tasks like mental arithmetic, reasoning, language understanding, decision-making and learning.

•Epistemics: the scientific study of knowledge. Cognitive models of ‘the knowledge level’ were originally informed by theories of human memory for semantics and concept learning, but increasingly draw on formal ideas about knowledge representation from AI and knowledge engineering.

•Anthropics or functionalism: traditional psychology and philosophy of mind have always seen internal mental states as key to understanding human cognition and autonomy.
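As promised above, here is a minimal, hypothetical sketch of the ‘statics’ idea: specialized components wired together into one coordinated architecture. The component names follow the list; everything else is invented for illustration.

```python
# Hypothetical modular cognitive architecture: specialized components
# (perception, short-term memory, reasoning) coordinated by one agent.

class Perception:
    def observe(self, stimulus):
        return {"percept": stimulus}

class ShortTermMemory:
    def __init__(self):
        self.items = []
    def store(self, item):
        self.items.append(item)
        self.items = self.items[-7:]   # a nod to short-term memory's limited capacity

class Reasoning:
    def infer(self, memory):
        return f"reasoned over {len(memory.items)} recent percept(s)"

class CognitiveAgent:
    """Coordinates the specialized modules into one mental function."""
    def __init__(self):
        self.perception = Perception()
        self.stm = ShortTermMemory()
        self.reasoning = Reasoning()

    def step(self, stimulus):
        self.stm.store(self.perception.observe(stimulus))
        return self.reasoning.infer(self.stm)

agent = CognitiveAgent()
print(agent.step("lab result"))  # reasoned over 1 recent percept(s)
```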

In recent years, theories of autonomous agents have emerged in AI which have sought to understand and formalize notions like intentions, beliefs, desires, goals and plans. It is this machine ‘autonomy’ that many fear, especially since this autonomy may one day reach singularity—the point where humans can no longer control machine intelligence. Thankfully, that day is very far off.
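The belief-desire-intention (BDI) family of agent models makes these notions concrete. The sketch below is a minimal, hypothetical deliberation loop, not taken from any particular agent framework:

```python
# Minimal, hypothetical belief-desire-intention (BDI) loop.
# The beliefs, desires and plans are invented for illustration.

beliefs = {"patient_has_fever": True}
desires = ["reduce_fever", "schedule_followup"]

def deliberate(beliefs, desires):
    """Commit to one desire (the intention), given current beliefs."""
    if beliefs.get("patient_has_fever"):
        return "reduce_fever"
    return desires[-1]

def plan(intention):
    """Expand the committed intention into concrete steps."""
    plans = {
        "reduce_fever": ["give antipyretic", "recheck in 4 hours"],
        "schedule_followup": ["book clinic slot"],
    }
    return plans[intention]

intention = deliberate(beliefs, desires)
print(intention, "->", plan(intention))
# reduce_fever -> ['give antipyretic', 'recheck in 4 hours']
```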

Fox has now moved on to applying his work at a couple of start-ups. One is Deontics, a decision-technology firm, and another is OpenClinical, a crowd-sourcing and knowledge-sharing platform for clinicians and technologists.

One of his earlier start-ups, InferMed, was acquired by Elsevier in 2015. How I wish I were an early investor in it!

Siddharth Pai is a world-renowned technology consultant who has personally led over $20 billion in complex, first-of-a-kind outsourcing transactions.


