Black Muddy River: of jobs and computers
It is widely accepted that technology-driven automation, whether routine or “intelligent”, will lead to large shifts in employment during this century, much like the industrial revolution did in the 1800s. The industrial revolution also concurrently caused an unprecedented rise in the overall standard of living for many people, leaving them time to focus on work that was more creative. At least, that is the currently accepted version of history.
I recently watched a short but information-packed video by Andrew McAfee, associate director of the Center for Digital Business at the Sloan School of Management at the Massachusetts Institute of Technology. The video runs on bigthink.com, a site that claims to help organizations get smarter and faster by fostering debate and conversation around business success in this century.
In the video, McAfee addresses what most of us would think of as one of humanity’s last bastions against technology: creativity. He first defines creativity as the eureka moment that allows a human to come up with a powerful, legitimately valuable, useful and, most importantly, novel idea. He claims that our view of computers as little more than mimics is not entirely correct, and that there are indeed things computers can do today that fit his definition of creativity. He also concedes that there are some creative things they cannot yet do, and proceeds to lay out the differences.
For instance, technology is capable of a technique called “generative design”, where, given specifications of how a part needs to function in the physical world, it can design a usable part on its own. Curiously, many of these parts, which are produced on a 3D printer, look like skulls and skeletons. The software mimics natural evolution in its designs; nature, after all, has produced efficient working systems through skeletal adaptation across kingdom Animalia. McAfee concedes that these parts look nothing like what a human designer would have envisaged, but they are nonetheless capable of the function they were designed for. A second area he delves into is the composition of music. McAfee claims that enough is known about what constitutes “good” music for a computer to be programmed to compose engagingly. According to him, when human subjects are told that they are listening to computer-generated music, they classify it as lacking emotional impact and zest; but when the fact that the music was artificially created is withheld from them, they classify it as enjoyable.
I consider myself something of a music aficionado, and so, while McAfee’s data may well support this finding, I was incredulous. I know that it is possible for a trained ear to pick up on the lack of emotional content in a performance, even if the performance is technically flawless. Students of Indian classical music are taught that there are three components to making music—the tune, the tempo, and a certain je ne sais quoi that completes the performance, called “bhaava”, or emotion. This third element is always a variable, since it is the performer who adds it to the equation, even when the tune and the tempo are pre-composed and rigidly codified. The fact that most Indian classical music is either devotional or romantic in nature allows the performer to identify with the lyrics and therefore impart emotion, even when the performance involves no vocals or improvisation by the artiste.
Later in the talk, in an attempt to explain the discrepancy, McAfee goes on to clarify that he was talking only about instrumental music. Well, the greatest Western instrumental composers poured emotion into their pieces, and while their pieces may not have had lyrics, they were often written for specific occasions or people. For instance, Mozart’s horn concertos were written for his friend Joseph Ignaz Leutgeb, an accomplished player of the horn, and Mozart interspersed his original score with friendly insults aimed at his friend, probably to get him to play the horn with gusto. It all came together for me, however, when McAfee ended the talk by saying that when computers are asked to add lyrics to a song, they can only produce gibberish, much as they do when asked to write long-form pieces of creative writing, such as a short story, or this column. He concludes that this is because computers know nothing of the human condition, and therefore lack the emotion and the sensitivity that a human writer uses to reflect the interpretation of the human condition back to the reader or listener. In fact, he is emphatic that he doesn’t see machines developing this capability at all.
So, there is hope. While machines may be able to handle even nuanced generative tasks, there will always be a need for a human being to translate the generated pieces into products that are pleasing to the human eye or ear. This includes the design engineer who adds the aesthetic sense, not allowing computers to litter the landscape with buildings that look like skulls and crossbones; and the doctor who uses a computer-generated diagnosis to treat a patient sensitively. It will also take a Robert Hunter to pen the lyrics to Black Muddy River, a wonderful song by the Grateful Dead, even if the guitar riffs were left to a machine instead of to the genius of Jerry Garcia. Garcia first performed the song after emerging from a diabetic coma, which adds to its pathos in a way no computer could ever replicate.
Staying focused on the human condition will be the magic key to continued employment and prosperity in the 21st century.
Siddharth Pai is a world-renowned technology consultant who has personally led over $20 billion in complex, first-of-a-kind outsourcing transactions.