On 9 October 2017, as the world commemorated the 50th anniversary of Ernesto “Che” Guevara’s execution, my mind went back to 28 January 2008. It was the day I got my first job offer, deferred my joining date by a couple of months, landed in Cusco, Peru, hopped on a motorcycle, and hit the road, à la Che Guevara. My four-and-a-half-year stint in the neuroscience PhD programme at the Johns Hopkins University had ended disastrously with a master’s degree. But now, inspired by the movie The Motorcycle Diaries, I was ready to explore South America.
There was only one catch. I didn’t speak any Spanish.
Having led a sheltered life in academia, I was having my first brush with irrationality. Everything I was doing was possibly wrong, undoubtedly stupid, and potentially life-threatening. On my first day on the road, I lost my way in the Peruvian Andes and spent a night in a hospital bed in a village of not much more than 50 people.
As if that were not enough, I rode through a sandstorm in the Atacama—the driest desert in the world—which offered stunning views of the soaring Andes ranges on one side and the deep blue Pacific on the other, bribed a cop with sign language to help a penniless hippie—riding with me without a helmet—cross police checkpoints, and narrowly escaped a late-night pub brawl.
Along the way, I also learned enough Spanish to get by, but more importantly, I learned a lot about myself and the lives of people from different walks of life. Academic mentors and peers occasionally have fascinating personal stories, but most of them have similar goals in life and choose predictable strategies to reach their destinations. Plus, most science is incremental and tends to avoid too much risk.
And there I was, ploughing through the back roads of Peru, Chile, and Argentina, with no plan and no knowledge of the people and cultures I was encountering. The backpackers I bumped into told me that it was okay to quit your job and go sailing for a few weeks, sell everything and live out of a car for a few months, only to see your girlfriend break up with you, or blow your savings on a round-the-world trip.
For the first time in my life, I was a bum—searching for nothing in particular.
That nothing in particular turned out to be a latent desire to write. The neurotech start-up I joined after the trip was the perfect choice for a rationalist. We were developing a tool to collect electrical signals from the brain, albeit superficially in the form of electroencephalograms (EEGs), while patients performed specific cognitive tasks. Our challenge was to figure out whether we could detect early stages of cognitive dysfunction in Alzheimer’s disease.
As an electronics engineer, biophysicist and neuroscientist, I was using everything my rational mind had learned in academia. I was also learning some valuable lessons in business and entrepreneurship; things that academia rarely appreciates and mostly looks down upon.
The love-hate relationship between the traditionally liberal academia and conservative business world was interesting to observe too, but the irrational part of my mind had already veered off on a tangent. With minimal knowledge of Spanish and limited interaction with the locals, I had managed to internalize so many details of my bike trip that I started writing about it.
The rational part of me was convinced that writing about a six-week trip shouldn’t take more than six weeks. It took me a year just to write the first draft! In my first attempt at creative writing, I was probably using parts of my brain I had never used before.
After the first wave of mass migration of Indians to the US in the 1960s and ’70s, which my dad, a doctor, was part of, I religiously joined the second wave of mass migration in the 1990s. This wave was dominated by engineers. While most of my peers stuck to computer science for graduate school, I took a minor diversion. As the black sheep in a family of doctors, I decided to venture into biophysics to make amends for the hitherto lack of biology in my life. As an electronics engineering student, I had spent four years studying transistors and microprocessors. Instead of analysing inanimate circuits that breathe life into computers, I started analysing streams of data coming out of MRI machines which were, in turn, trying to understand how human computers work.
I was still on the periphery of neuroscience and still playing with numbers. While the brains were busy doing their magic tricks—firing up different parts in different sequences to accomplish amazing cognitive tasks—I was busy crunching numbers to figure out whether we could enhance the confidence in our statistical inferences about those brains.
The “Aha” moment, which was a comedy of errors, came towards the end of my first semester in graduate school.
The annual Society for Neuroscience (SfN) conference, the largest gathering in the field with about 30,000 neuroscientists, was a month away. I had no data to present and was not planning to attend it. But my advisor’s grant period was about to run out; he had some money to spare and offered me a trip to New Orleans, where SfN was being held that year.
SfN is as close a neuroscientist ever gets to a carnival, but it was my first time attending it and I had no idea what to expect. I pounced on the opportunity because it was a free trip to New Orleans! As if flying from cold and dreary Urbana-Champaign to the sticky, warm air and French architecture of Bourbon Street were not enough, the plenary session of SfN was about to change my life.
It was the early 2000s and deep brain stimulation (DBS), the technique that allows scientists to insert electrodes in specific parts of the brain to electrically stimulate them, was fairly new in the world of neuroscience. A neurologist stood up and showed a before-after video of a Parkinson’s patient on DBS treatment. The patient, with shaky hands and legs, who couldn’t even take a few steps properly, was back to his normal gait with the flip of a switch.
For the rationalist in me, it was more than magic. Scientific ethics do not allow nerds to poke electrodes around in normal human brains and alter their electrical activity. To justify DBS, one has to demonstrate that there is something abnormal in the brain and that the treatment is going to benefit the patient. But if you can alter the actions of humans—to some extent—by artificially stimulating the brain, the idea of altering human thoughts was just a thought experiment away.
All of a sudden, computer brains were passé. The human brain was the in-thing. What are the equations underlying a perfectly executed pirouette? What are the physical underpinnings of a thought? Of memory? Of future planning? Of love? It seemed like I was entering a rationalist’s gold mine! Pretty soon, it seemed, we would bust the myth of god, reduce violent crime, cure drug addiction, bring people and societies together, and, most importantly, challenge free will.
Ironically, it didn’t take too long for that quest to unravel. On the one hand, by applying the latest mathematical tools, neuroscience has taken some major strides in understanding how our brains process information. We understand how we plan and execute actions to the extent that we can directly read them as electrical impulses from the brain and reasonably replicate them with robotic limbs. We have a bird’s eye view of the hierarchy of visual information processing.
To some extent, we know how touch, heat, various tastes and smells are processed. Over the last five decades of neuroscience research, we have come a long way. Jumping into a PhD programme in neuroscience and working on a tiny part of a mouse brain that serves as its internal compass gave me a momentary sense of purpose as well. I went from crunching numbers to sacrificing mice, making thin slices of their brains, sticking electrodes in neurons one at a time; all in the hope of understanding their electrical properties. I managed to add my drop in this burgeoning ocean.
Along the way, it was interesting to notice the star status accorded to us on and off the campus by people from other walks of life. Comfortable in our academic bubble, we could afford to make a few jokes about the job prospects of liberal arts and history majors.
On the other hand, I learned a lot about the illogical side of academia. I noticed that the way neuroscience research was being conducted—at least in the American system—was ironic. Studies galore showed how working long hours, sleep deprivation and lack of exercise lead to high stress levels, loss of attention and focus, and, in extreme cases, depression. And yet, that was the normal state of a neuroscience graduate student. Higher up in academia, even the concept of what constituted a doctoral thesis had been so rigidly quantified that the pursuit of new knowledge had lost its sheen.
Regardless of the scientific import and validity, you had to have a certain number of research papers published to get a PhD. The competitive bidding system of research grant allocation made researchers chase what was hot rather than where their heart was, because they had to cross a certain threshold of a grant score to get money. Most importantly, the world of neuroscience—tasked with understanding the mind—was plagued with the same mind games, iniquities and insecurities as those of non-neuroscientists. It was a bittersweet goodbye to academia.
My first job was interesting. In addition to the diagnostic tools we were developing for Alzheimer’s, I was hobnobbing with other neuroscientists applying their knowledge to solve some vexing problems. One guy I met was developing MRI-based lie detectors. The idea was to show the person some specific pictures or videos of a crime scene and see how the brain involuntarily reacts. A concept fraught with lots of practical issues, but intriguing nonetheless. The holy grail was to get it accepted in a court of law, but in the meantime, he discovered that one of his largest customer segments was people suspecting their spouses were cheating on them!
Another company was developing MRI-based tools to help patients with conditions like chronic pain by showing them the real-time activity of their own brains. Apparently, if you show patients with chronic lower back pain live images of the part of their brains that lights up when experiencing the pain, they can develop strategies to reduce activity in that part of the brain. It was like mind-over-matter; neuroscience bringing me closer to meditation and yoga—practices firmly rooted in my Indian heritage.
These rational and intriguing extensions of our rudimentary understanding of the brain, although exciting, pale in comparison to the chaos and serendipity of unplanned backpacking and travel writing. A wildly successful four-year stint at the start-up was enjoyable, but my irrational mind had already started yearning for something intangible.
Another solo backpacking trip happened. This time, it was a year-long, round-the-world trip to 36 countries. Another book happened. At the end of the trip, when I decided to move to India, I knew that I would be kissing neuroscience and American six-figure salaries goodbye. Other than a handful of academic institutions, very few people in India work in neuroscience.
I have still managed to keep one foot in the technology and start-up world. But in my first week back in India, I met a movie director for the first time in my life. I had spent most of my life staying away from the film industry, thinking that it was full of inflated egos, insecurities, scandals and a lot of negativity. But this director seemed different.
A mutual friend had introduced us while I was bumming around on my round-the-world trip. After I emailed him about the trip, he was curious to hear my stories. The first meeting was a disaster because he was expecting a lot of photographs and video footage from around the world, and I—eternally camera-challenged—had virtually none. However, he liked my stories enough to suggest that we could make a movie together if I agreed to backpack in India.
During my 36-country trip, I had managed to burn a huge hole through my savings. Where was the money going to come from? Where would we travel in India? Would anything interesting come out of it? None of that mattered.
Along with a daredevil American backpacker girl, a UC Berkeley student of anthropology, I managed to backpack through the backroads of India, capturing some of the social, cultural and economic contradictions of modern-day India. In fits and starts, I managed to raise the funds needed to make the film. A group of filmmaking pros, from the director all the way to the subtitles writer, took two people who had never been in front of a camera and turned it all into a movie nearly two hours long.
Before we could celebrate completing the film, we were staring at the challenge of reaching our audience. Documentary is not a popular genre in India and we didn’t have the funds to market the film in North America and Europe. The film went to a few festivals and we keep having screenings across India. Those who come to the screenings are glued to the screens for the entire duration of the film and the organizers routinely kick us out because post-screening Q&As go on forever, but how do we get more people to the screenings with no marketing budgets? If you think selling technology is challenging, try selling art.
Ironically, it is that higher level of risk and the challenge of bringing together such a diverse pool of talent—arguably unmatched in the start-up world—that I now find alluring. This seemingly nonsensical foray into filmmaking that I am contemplating has thrown up a lot of personal dilemmas.
What if I had studied creative writing or literature instead of engineering and neuroscience? Then again, regardless of what I think, my degrees have enabled me to work a few hours a day and make ends meet, giving me the time to explore the topsy-turvy world of cinema with some level of comfort.
I remember reading an interview of Nawazuddin Siddiqui, a well-known Indian actor, in which he said that he left his home town in northern India and moved to Mumbai, thinking, “If I have to die of hunger, I might as well do so in Mumbai.” For every success story like that, Mumbai is full of at least a hundred broken dreams. With those odds, would the rational part of my brain have allowed the irrational part to do anything crazy like that? And if I did not have that kind of a risk appetite, would I have had any reasonable shot at success as an artist?
Secondly, the impact of any technology is easier to quantify in terms of money, man hours or energy saved. But how do you quantify the impact of art? Slowly but surely, I have started doubting the claims of start-ups and new technologies about making our lives better. Having a reasonably good idea of the neuroscience that goes into the design of Snapchat, or the algorithms that populate Facebook walls, I can see the lines blurring fast between utility and addiction.
Still, an entrepreneur has numbers to show to a potential investor. Historically, art has thrived because of the patronage of thriving kingdoms. Making a business out of art is quite a new phenomenon and, in a way, closely tied to the advent of democracy. The constant debate over the funding of the National Endowment for the Arts in the US is quite instructive. When people have the power to decide how to spend their tax dollars and it is difficult to assign a dollar value to various art forms, why fund them?
A couple of years ago, when I attended my first film festival, I argued with a bleeding-heart artist that the movie industry does not make evolutionary sense. Storytelling has some value in transferring knowledge and nurturing social mores, but it can be done through oral and written media. Would humanity cease to exist if the movie industry suddenly vanished tomorrow? The girl replied that everyone around us would be offended by that observation. Point noted, but when you can’t put a price tag on art and humanity can survive without it, who should fund it in non-kingdoms?
A recent article in The Washington Post, elucidating the neuroscience of art, mentioned that social art forms like ballets or plays can evoke some of the most intriguing patterns of neural activity in humans. It says: “Neuroscientist V.S. Ramachandran proposes several universal laws of art, or common patterns found in works of art across time and cultures. These principles powerfully activate our visual centers. In theory, they tap into evolved survival responses.”
With an abundance of caution, it concludes: “Art has emerged from the human brain for tens of thousands of years, and every human culture makes it. Yet scientists are only beginning to understand how the brain perceives and produces art, and why. Like so many artworks, the brain is largely an object of mystery. One secret yet to be discovered is how the fragile folds of matter locked inside our skulls can not only conceive art, create it and contemplate it, but can also experience being transported by it, out of the head, out of the body, out of space and time and reality itself.”
The financial return on investment for our film is still being worked out, but having fielded all kinds of questions after screenings of our film, I can vouch for that singular experience of art. Plus, going from recording electrical activity of single neurons to evoking some of the most intriguing patterns of neural activity—or at least attempting it—doesn’t sound bad any more.
Filmmaking seems much more interesting when you look at the new crop of start-ups that is obsessed with artificial intelligence and the ability to predict human behaviour to the point of boredom; not just for the customer, who is being robbed of a sense of wonder and surprise, but also to the developers, who are fretting over minutiae of crunching big data to make human existence as banal as possible. Is that the subconscious allure of this seemingly wrong turn in my journey?
When the film was 80% done and I was onto my second round of crowdfunding, some donors noticed my book about the South America trip. They planted a seed in my head: I should make a movie based on my book. My first thought was that it was downright narcissistic to produce a film about my own life. But a small group of angel investors was willing to write the initial cheques.
Still, a jump from neuroscience to filmmaking sounded like a huge leap of faith. Visual search and artificial intelligence in India, although a few years behind the American start-up ecosystem in terms of being cutting edge, was the natural and safe choice. Why should I jump into the film industry? The angel investors were promising to invest only a minuscule portion of the film budget. Where was the rest of the money going to come from? Would I ever manage to get any director or actor interested in my story?
On the other hand, perhaps that was my delayed gratification for the thankless job of doing that solo trip and writing the book. And if a few investors felt that the story was worth telling on the screen and were willing to put their money where their mouths were, who was I to call it a narcissistic endeavour? After a few months of sleepless nights, I set up my own production company, found a writer whose previous film had won the Crystal Bear at Berlinale, one of the top three European film festivals, and I was walking in and out of the ministries of tourism and culture of South American countries to figure out the logistics of making the film.
Fast-forward to the present day. After preliminary discussions with some prominent directors and producers, it seems like things are slowly falling into place. It is anybody’s guess whether the film will ever see the light of day. In the meantime, I have already written half a script for another movie, have a synopsis ready for a third movie, and have conceptualized a couple of other film projects. I have enough creative material to set up a film fund and embrace filmmaking as a career!
Coming from the start-up world, where terms like pro-forma financials, EBITDA, valuations and preferred shares were thrown around like loose change at every investor meeting, what am I willing to offer to investors funding my films? If I am promising a sense of wonder and red-carpet appearances, with a level of uncertainty of financial returns much higher than in the start-up world, should anyone bother investing? And yet, art has been the defining feature of all human civilizations and has had a longer shelf life than any technology humans have ever developed.
It was a strange twist of fate that Richard Thaler, a behavioural economist, was awarded the Nobel Prize on Che’s 50th death anniversary. Psychology, neuroscience and economics are joining hands to demonstrate that humans are fairly irrational creatures. It has huge implications for capitalism, libertarianism, and, in turn, democracy itself.
In a country like the US and an institution like the University of Chicago—bastions of economic conservatism—Thaler can argue that it is in society’s larger interest for the government to open individual retirement accounts for everyone and let them decide whether they want to opt out. Behavioural economics is still a nascent field, but if the modern tools of neuroscience end up demonstrating that we go about living our lives fairly irrationally, I would have to dig deep into the world of literature to find a richer irony.
After reading about Che’s death anniversary and Thaler’s Nobel Prize on the same day, these are the issues that my rational mind is grappling with. I will wait for Google to develop an AI algorithm capable of connecting those dots and an automatically curated Facebook post on my wall to take me on a trip of self-examination and nostalgia. In the meantime, as my neuroscientist friends are celebrating the latest scientific breakthroughs at the ongoing SfN in Washington, DC, the irrational part of my brain is on the other side of the world in Mumbai, quite happy writing long articles and screenplays, taking unplanned trips, and embracing the film world.
Mauktik Kulkarni is a neuroscientist, traveller, author and a filmmaker. He is the author of A Ghost of Che and Packing Up Without Looking Back and the co-producer and co-anchor of a travel film titled Riding on a Sunbeam.
Comments are welcome at firstname.lastname@example.org