Herbert Simon of Carnegie Mellon, John McCarthy, and others are credited with having founded the field of artificial intelligence (AI) on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it". According to computer scientists Stuart Russell and Peter Norvig, the term “artificial intelligence" is applied when a machine mimics “cognitive" functions that humans associate with other human minds, such as “learning" and “problem solving". The goal of “machine learning" is that an ideal AI program should be able to change itself so as to take actions that maximize its chance of success at a given task.
Sharp minds are at work at places such as DeepMind, owned by Google’s parent company Alphabet, which claims that its final goal is to reach “artificial general intelligence", or AGI. Others talk of a “singularity", the moment in time when AGI becomes smart enough to make itself better without human intervention. In other words, machine learning would take control entirely away from its human creators.
While I defer to these great minds, I would still argue that both concepts are hollow. The danger lies in assuming that AGI is human intelligence, for human intelligence is not “general". Learning is not intelligence. To expect that learning machines can make themselves more intelligent (as opposed to more efficient at performing tasks) is far-fetched.
In my opinion, all that AI has been able to do so far is take algorithmic concepts that have long been known and apply them efficiently to large volumes of data. Specifically, these fall in the areas of pattern-matching and predictive analysis. Hence the term “data scientist", and the huge number of job openings for those who understand some basic concepts of statistics: regression analysis, which examines how variables are related; the Box-Jenkins method, which models data in a time series; and Bayes’ theorem, used to estimate the probability of various outcomes.
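To see how ordinary the mathematics is, the last of these, Bayes’ theorem, fits in a few lines of code. This is a minimal sketch with hypothetical numbers, not drawn from any particular AI system:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical scenario: what is the chance a part is actually
# defective, given that a quality-control test flagged it?

def posterior(prior, likelihood, false_positive_rate):
    """P(defective | flagged) via Bayes' theorem."""
    # P(flagged) = P(flagged|defective)*P(defective)
    #            + P(flagged|ok)*P(ok)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Assume 2% of parts are defective; the test catches 90% of
# defects but also wrongly flags 5% of good parts.
print(round(posterior(0.02, 0.90, 0.05), 3))  # → 0.269
```

Despite the test being 90% sensitive, a flagged part is defective only about a quarter of the time, because defects are rare to begin with. It is this kind of calculation, scaled up to enormous datasets, that powers much of what is marketed as AI.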
Philosophers have held that while tasks can be automated, there is one thing a soulless machine can never do, and that is possess living “consciousness". If you doubt this, then simply ask yourself who is listening to these words as you read them to yourself. Is it your “human learning" neurons, or some other, larger field of consciousness into which words and thoughts like these come and go and are understood? If a voice arises in your head that disagrees with what you are reading, who is it that is aware of the voice?
It would behoove us, then, to better understand where consciousness resides. I recognize that I am now venturing onto thin ice. There is no consensus among philosophers and scientists, or indeed within academia, on what consciousness is. The one thing everyone does agree on is that the phenomenon does, in fact, exist. Nonetheless, in a recent article in Scientific American, Christof Koch cites neurological studies suggesting that conscious awareness resides in the highly integrated and complex cerebral cortex of our brains, and is not to be found in the more primitive cerebellum, which governs our motor activities. Humans have the largest cerebral cortex, relative to body size, of any form of life on earth. According to Koch, very little happens to consciousness when a surgeon operates on the cerebellum. This is because the cerebellum, unlike the cerebral cortex, is exceedingly uniform and parallel.
So far, so good. The finding relates to many philosophical schools of thought that say that man is the most evolved and sentient of all beings, at least on earth, and is the only animal capable of realizing that he, in fact, possesses the faculty of consciousness. Eastern philosophers would have it that the recognition of this consciousness as being both a limited aspect as well as the full expression of an all-pervading universal consciousness is the (spiritual) goal of life.
Turning to consciousness in information technology, there are two rival schools of thought: one called the global neuronal workspace (GNW), and the other called integrated information theory (IIT), championed by Koch and his collaborators. GNW holds that consciousness arises from information being processed in a specific manner. It says that AI programs process a sparse, shared repository of information, while this information is concurrently shared by a host of subsidiary processes in the system. According to GNW, once such a sparse set of information leaves the program’s processing space and is replaced by another, the new information is in turn broadcast simultaneously to the subsidiary processes, which can suo motu make changes to handle their own subsidiary tasks. It is at this point, according to GNW, that the information becomes “conscious".
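The workspace-and-broadcast pattern described above can be sketched as a toy program. This is purely illustrative; the class and process names are my own invention, not part of any real GNW implementation:

```python
# Toy sketch of a "global workspace": one sparse item at a time
# occupies the shared workspace, and each new item is broadcast
# simultaneously to every subsidiary process.
# (All names here are hypothetical, for illustration only.)

class Workspace:
    def __init__(self):
        self.subscribers = []   # subsidiary processes
        self.content = None     # the one item currently in the workspace

    def subscribe(self, process):
        self.subscribers.append(process)

    def broadcast(self, item):
        # New information replaces the old and is sent to all
        # subsidiary processes, which handle their own tasks with it.
        self.content = item
        return [process(item) for process in self.subscribers]

ws = Workspace()
ws.subscribe(lambda item: f"memory stored: {item}")
ws.subscribe(lambda item: f"motor plan for: {item}")
print(ws.broadcast("red light ahead"))
# → ['memory stored: red light ahead', 'motor plan for: red light ahead']
```

On GNW's account, it is the moment of this global broadcast, not anything inside the individual processes, at which the information would count as “conscious" — a claim the sketch illustrates but, of course, cannot settle.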
In contrast, IIT has an “outside-in" view of consciousness, since it starts at the experience and works backwards from there to find the conscious “experiencer". Each experience is unique and exists only for the experiencer. IIT theorists postulate that any complex and interconnected mechanism whose structure encodes a set of cause-and-effect relationships will have these properties, and so will have some level of consciousness; in other words, it will feel like something from the inside. However, if the mechanism is anything like our cerebellum, it lacks integration and complexity, and will not be aware of anything. IIT holds that consciousness is an intrinsic causal power associated with complex mechanisms such as the cerebral cortex; on this view, merely programming for consciousness will never create a conscious computer.
So, consciousness cannot be computed; it has to be built into the structure of the system. This will take decades, as we still need to observe and probe the vast groups of highly heterogeneous and dissimilar neurons that make up the cerebral cortex of our brain to further isolate and understand the precise signifiers of consciousness. It will be quite a while yet before we have a Frankenstein’s monster to deal with.
Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India