We’re getting de-FANGed and Uber-ed
14 min read. Updated: 01 Jul 2017, 11:53 PM IST
That artificial intelligence is here to stay and is intrusive is one thing. Question is, should Homo Sapiens regulate it to survive the onslaught?
After I got past security at Bengaluru airport this Thursday on my way back home, I got myself a sandwich. The man at the cash counter asked to see my boarding pass.
“Do you want my credit card or boarding pass?"
“Both, sir," he said.
“We’ve been instructed to scan everybody’s boarding pass."
“To serve you better."
“I don’t want to be served better."
“Boarding pass and credit card, sir."
“Will you refuse me a sandwich if I decline to allow you to scan my boarding pass?"
“It is not mandatory sir."
“Then shut up, swipe my card, and give me the food."
But I got curious. Heck! Why didn’t this occur to me before? Why does everybody at an airport retail outlet insist on scanning our boarding passes? Why do we meekly comply? That these questions occurred may have had something to do with the conversations my colleague N. Ramnath and I had just had over a week in the city, talking with a great many people. These included some very fine minds: Nandan Nilekani, former chairman of UIDAI; Rahul Matthan, a lawyer and partner at Trilegal; Nitin Pai, co-founder of the Takshashila Institution; and professor M.S. Sriram, a fellow at IIM Bangalore, among many others.
While their ideologies and world views differ, given the backgrounds they come from, all of them concur that urgency ought to be injected into the thinking about securing privacy rights for Indian citizens and protecting their data. Because as things stand, pretty much anybody can be compromised by either the state or a private entity. Their point is that this is unacceptable. India is at an inflection point as it migrates from being a data-poor to a data-rich nation, and policymakers must take immediate cognizance of it. Else, transgressions of the kind I witnessed at the airport are par for the course.
That is why I waited a while and watched while I munched on an overpriced, gawdawful sandwich of the kind airports serve. Much like an automaton, the cashier asked everyone for their boarding pass with much the same authority any security personnel could summon. Everybody meekly complied. He’d run a QR code scanner over it, even as he took cash or swiped their cards. Nobody uttered as much as a peep. I don’t think the poor bloke knows what he is doing. Nor do most folks passing through what is apparently the “tech capital" of India. Because clearly, few people know what is going on.
That QR code on everyone’s boarding pass contains data: in this case, the name of the passenger, the date they travelled, the airline they chose and their seat preference, among other things. It can be argued that this is benign data of little significance. But that is missing the wood for the trees. In the first instance, why should a private entity have access to that information at all? And when accumulated over time, it will evolve into an intimate portrait of a person.
As things stand, while the current QR code on boarding passes contains benign data, there is no framework in place yet that prevents these entities from capturing a lot more. In fact, airlines in other parts of the world, much like businesses outside aviation, routinely capture information that includes phone numbers, email addresses, the company people work for, and any additional detail that can eventually be deployed to track an individual down.
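To make the point concrete, here is a sketch of what any retailer’s scanner can read off a boarding pass. Most airline QR codes follow the mandatory fields of the IATA Bar-Coded Boarding Pass (BCBP) standard; the sample pass and passenger below are invented for illustration, and the parser covers only the fixed mandatory fields, not the optional ones airlines are free to append.

```python
# Sketch: decoding the mandatory fields of an IATA BCBP string,
# the format most boarding-pass QR codes carry.
def parse_bcbp(code: str) -> dict:
    return {
        "name": code[2:22].strip(),          # passenger name, 20 chars
        "pnr": code[23:30].strip(),          # booking reference
        "from": code[30:33],                 # origin airport code
        "to": code[33:36],                   # destination airport code
        "carrier": code[36:39].strip(),      # airline code
        "flight": code[39:44].strip(),       # flight number
        "day_of_year": int(code[44:47]),     # Julian date of travel
        "seat": code[48:52].strip(),         # seat assignment
    }

# Invented sample pass for a fictional passenger
sample = "M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 326J001A0025 100"
print(parse_bcbp(sample))
```

One scan, in other words, yields a name, a travel date, a route and a seat; it takes nothing more than fixed string offsets to extract them.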
It boils down to this. In much the same way that natural resources like minerals and oil were mined in earlier decades, everyone’s data is being mined. It is now the most precious resource. And it can be mined from no place else but you and me. We are the new coal mines or oil fields if you will.
And as businesses get more competitive, what is to stop an airline from remaining as benign as it is now? The funny thing is, most people who get to the airport nowadays are the apparently tech-savvy sort who use taxi-hailing apps like Uber. And a taxi-hailing app isn’t nearly as benign as an airline boarding pass.
Before I delve any further, a disclaimer. I am yet to make up my mind about the beast that is Uber. Is it good, bad or evil? I don’t know. I have not engaged with any stakeholder at the company. I know of the entity as a user and have only engaged with drivers on the platform.
That out of the way, may I use Uber as a metaphor for a new world? I do that often because, to my mind, it represents change, affects everybody, and is a brand everyone recognizes. Uber is the new four-letter word. It can be used either as a cuss word or in jest. It’s all a matter of perspective. But I’m willing to punt, in all seriousness, that a decade from now, if not earlier, most of us will be Uber-ed. If you’re in your late thirties or early forties now, you’re staring at a jobless future. Unless you’ve begun prepping yourself.
Surely, you must be joking
I’m not at liberty to disclose the name of the entity because I haven’t formally engaged with its leadership team. It is one of the largest employers in the country and a job there is considered a prized one, a haven for risk-averse people. It carries the impression of being a staid monolith. But this much I can state with confidence: the top brass at the entity is at work to grow the business exponentially while downsizing the workforce. Not because they are brutal people, but because the world is changing, and if they don’t adapt, the entity they now head will die.
Just so you get an idea of the scale: it envisions a future where the work 100 people currently execute can be done by no more than 7-10. That will mean layoffs in the tens of thousands and a stunned Indian middle class.
In much the same way, the chairman of a large Indian conglomerate who has always sworn by history, anthropology and psychology as the lubricants to understand people and power growth, has his mind on overdrive. He wants data and artificial intelligence embedded into the systems that have gotten his entity this far. Our last conversation was full of pregnant pauses. Knowing him, I am reasonably sure something tangible will emerge in a year—job losses included. What, how and when, nobody, including him, knows.
Put yourself in their shoes though: if you were to be the CEO, what would you do? As head of this entity, do you save the ship from sinking even if it means culling thousands of jobs? Or do you embrace the future because it is both inevitable and uncertain? What about the hand-wringing in desperation of the thousands who will be casualties in this churn?
I am acutely aware these statements are at complete variance with what I had asserted as recently as January this year: that despite exponential leaps in machine learning and artificial intelligence, we humans will continue to be relevant. But what I hadn’t factored in then was the ethics of it all. To that extent, I imagine this is the modern-day version of the ethical dilemma Arjuna faced before entering the battlefield of Kurukshetra. At least he had Krishna to turn to for wise counsel. Whom will these people turn to?
If Krishna’s advice is any pointer, these CEOs must simply accept the inevitable and go about doing what is their dharma. When looked at from this fatalistic prism, our lives are on the verge of getting Uber-ed, lost in a virtual Amazon, bamboozled by Google, or being drip-fed by what Facebook thinks apt for each of us through its customized newsfeeds powered by algorithms.
For the sake of this argument, I’ll stick to Uber. There is no taking away from the fact that it is currently embroiled in controversy and that, in spite of its much-touted $70 billion valuation, there is much animosity against it. It may or may not survive the current battle. Depending on what side of the fence you are on, you may want to call it financial skullduggery of the worst kind. Be that as it may, it is an entity that continues to surprise and is creeping into our lives in insidious ways.
Why, for instance, would Uber want to launch UberEats, a food delivery service? Ostensibly so that drivers on the platform do not have to idle while waiting for a passenger. Instead, the time can be used to deliver food to somebody who may want it from a restaurant in the vicinity. While it may collect a small fee, the more pertinent point is that Uber will have gained something more valuable: a piece of data on what I like to eat and, over time, my dietary preferences. How do you measure that? What value do you ascribe to it?
That Uber is experimenting with driverless cars, helicopters and other such assorted services is well documented. What happens when all of this comes into play and companies like these continue to innovate in ways we haven’t thought of until now? My limited submission is that, contrary to what is popularly believed, Uber is neither a taxi aggregation service nor a transportation company. It is a technology platform powered by big data. That is why I remain circumspect about passing a verdict on the state of its finances. Because the issue is, how do traditional models of finance and accounting reconcile with unforeseen disruptions like these?
Even the so-called new e-commerce metrics like customer acquisition cost (CAC) and gross merchandise value (GMV) sound hopelessly outdated. What we do know with some degree of certainty is that Uber is a loss-making company with cash to burn. While many epitaphs have been and are being written for it, the fact is that its losses are shrinking and revenues are rising.
The question that ought to be asked then is: what is Uber? Is it a transportation company? A taxi aggregator? Or an entity built on the back of artificial intelligence (AI) and powered by big data? If it is an AI-based entity and a mining resource of a new kind, it cannot be looked at through the prism of economics as we understand it now; nor can we apply morality or ethics of the kind we are used to. Entities like these are overturning everything we take for granted.
That is why entities like Facebook, Amazon, Netflix and Google must be looked at differently as well. Haresh Chawla, partner at True North, a private equity fund, calls this the FANG quartet. Add Apple and Microsoft to this club, and we have a bunch of entities now worth over $3 trillion: worth more than the powerhouses of oil, banking and retail, all of which are getting upended in any case.
Everybody gets it. That is why they also want to be among the first conquistadors. But I’m not giving myself up without resisting. And I want the government I elected to do all it can to ensure it stays on my side. That is what makes me human. But unfortunately, humans are also stupid.
Where is the intelligence?
As we understand it, humans are intelligent for three reasons:
1. Humans evolve. It is a long process, but over time, we learn, weed out what we don’t need, keep what is needed, and that in turn is encoded into our DNA. That makes us what we are.
2. Then there is knowledge acquired on the back of experience.
3. And finally, humans learn by talking to other people, reading, and other such interactions.
But now there is a new source of knowledge, and it lies in data. The amount of data being generated, though, is far more than our brains can cope with. Computers, and the algorithms that power them, which we built and programmed, are extremely good at gleaning knowledge from raw data. And because of the processing power built into these machines, they acquire it fast.
The raw processing power these machines contain allows them to not just capture knowledge, but convert it into experience and wisdom without having to age like we do. In turn, they are beginning to look and sound increasingly like us. I have alluded earlier in this series of dispatches to how algorithms are learning traits we always believed were uniquely human.
If that be the case, it raises a set of issues. What happens as these algorithms begin to get closer to the species Homo Sapiens? Will a new species emerge? A Homo Algorithmicus, if you will? If so, is it possible that it can feel emotions much like sapiens does? And if it feels emotion, it is entirely possible this creature may also feel anger at being “controlled" by an “inferior being", in this case humans, and may just turn against its creators in a “revolution". Is it possible, then, that this species can drive sapiens into extinction? What is it that our generation and that of our children are up against? Researchers call these “complex adaptive systems". Much of my understanding comes from conversations with researchers working at the frontiers of neuroscience and mathematics.
I do not want to get into the minutiae. But they suggest I think of these systems as “children learning math". If they make a mistake in their computations, the systems are built in a manner that the algorithms are “punished": the human equivalent of a rap on the knuckles. The severity of the punishment depends on the error. On the other hand, if the algorithm does well, or exceeds expectations, the system “rewards" it in unexpected ways: the human equivalent of a child receiving a “bonus chocolate". Quite honestly, I am yet to wrap my head around the idea that an algorithm can be penalized, rewarded, or feel pain, happiness or anger.
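For what it is worth, the “reward and punishment" my researcher friends describe has a standard name: reinforcement learning. The toy below is my own illustration, not any company’s actual system. It is an epsilon-greedy bandit that, purely from rewards (the user accepted a recommendation) and punishments (the user ignored it), learns which of three options to push. The acceptance probabilities are invented.

```python
import random

# Toy reinforcement learner: an epsilon-greedy multi-armed bandit.
# Each "arm" is a recommendation; reward 1 if the (simulated) user
# accepts it, 0 if they ignore it.
def run_bandit(accept_prob, steps=5000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    values = [0.0] * len(accept_prob)   # estimated reward per option
    counts = [0] * len(accept_prob)     # times each option was tried
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best estimate so far
        if rng.random() < epsilon:
            arm = rng.randrange(len(accept_prob))
        else:
            arm = max(range(len(accept_prob)), key=lambda a: values[a])
        # The "chocolate" (1) or the "rap on the knuckles" (0)
        reward = 1.0 if rng.random() < accept_prob[arm] else 0.0
        counts[arm] += 1
        # Incremental average: nudge the estimate toward the new reward
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

estimates = run_bandit([0.2, 0.5, 0.8])
print(estimates)  # the third option ends up with the highest estimate
```

Nothing in this loop “knows" anything about users; it simply shifts effort toward whatever was rewarded. That, scaled up by several orders of magnitude, is the mechanism the researchers are describing.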
To put that into perspective, a few days ago I used Google to look up the best set of headphones I could get after I lost mine in transit. Google, which started life as a search engine, prompted me towards the Sennheiser CX180 as the best option, based on my past behaviour.
But that crafty fox called Amazon, which first emerged as an online bookstore, nudged me to choose a brand I hadn’t heard of before, called Sound Magic. It looked good and the reviews sounded even better. But the Sapiens in me thought it too expensive. Don’t ask me how and why. Somehow, a 30-minute flash sale happened and I felt compelled to buy Sound Magic for Rs1,600 odd as against the stated sticker price of Rs3,200. By Jove, it’s the best damn headset I’ve owned and I’m hooked to it. Heck, I’m even recommending it here.
My researcher friends in artificial intelligence tell me Google’s algorithms may have figured out that I didn’t go for the suggestions they threw at me and opted for something from a competitor instead. Is it possible the algorithms may then be penalized and feel pain like humans do? I’m speculating. But it cannot be ruled out.
Like I said earlier, I’m still trying to come to terms with it. But that’s how it is. The lines between what is human and what is artificial are blurring. I suspect those “creatures" or “kids" or “algorithms" or whatever it is at work inside Google may be waiting for fresh “search" inputs from me to figure out what they can do and learn about me to hook me better. If they do well, they might just be “rewarded".
Like I said earlier, when looked at through this prism, the monies companies like these are spending now, which are viewed as subsidies to grow the market, are not subsidies. They are investments in an education, much as you would invest in your child so that they go to the best school. In the case of algorithms, the higher the investment, the better the data they capture and the further they will go. Because at the end of the day, any algorithm is only as good as the quality of data on hand.
All these entities are experimenting with driverless cars, for instance, because a driverless car, much like the phone in your pocket, stands to gain the greatest number of insights into who you and I are and what we really do. The entity that controls access to this data will be the biggest gainer.
If you think this is too far into the future, consider this. When you board a transcontinental flight to take you 10,000 miles across the ocean, the technology exists to do it minus any humans. But it is psychologically reassuring to passengers to hear the crackling voice of somebody who describes himself as “captain" from the cockpit. After a flight has taken off, charge is handed over to automated systems. Those in the cockpit have little to do and take over only when it is time to land. Even that may be unnecessary: it is often argued that takeoffs and landings are safer when handled by automated systems.
Our minds aren’t willing to buy into this reality yet. But it’s only a matter of time before they do. And with that, airline pilots will lose their jobs. As will many of us.
Charles Assisi is co-founder and director, Founding Fuel.
His Twitter handle is @c_assisi
Comments are welcome at email@example.com