By now, everyone and his pet dog knows that Artificial Intelligence (AI) and AI-powered robots are going to take over our lives one day. They are going to run our factories, diagnose our illnesses, drive our cars, provide enjoyable company and even sex for the lonely, and replace large numbers of us in our jobs. But if these AI agents mess up, can we sue them and claim damages, as we can with a human being? As things stand today, no.

So, Hong Kong tycoon Samathur Li Kin-kan is doing the closest thing possible: he is suing the man who sold him on an AI stock-trading program that lost him a lot of money. In 2017, Li met Raffaele Costa, who told him about an AI-powered hedge fund his company, Tyndaris Investments, was setting up. A supercomputer called K1 would scan real-time news and social media to gauge investor sentiment and predict US stock futures, then instruct a broker to execute trades.

Li gave K1 $2.5 billion (around ₹17,493 crore) to manage. But K1 was soon losing money regularly; it lost over $20 million in a single day, a loss that Li's lawyers argue could have been avoided if K1 were as sophisticated as Costa had claimed. Li is now suing Tyndaris for allegedly exaggerating what the supercomputer could do. Tyndaris denies the charges. The trial begins next April in London.

Several global fund management companies have started using AI in the last few years. This is different from the software most investment firms have used for decades to analyse data against pre-set criteria and help fund managers make buy-sell decisions. Those programs are static: bound by the rules they were created with, they keep performing their tasks in exactly the same way until someone tweaks the rules.
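To make the distinction concrete, here is a minimal sketch, in Python, of what such static, pre-set-criteria software looks like. The rules and thresholds are invented for illustration and belong to no real firm; the point is simply that the logic never changes unless a human rewrites it.

```python
# Illustrative only: fixed, hand-written trading rules of the kind
# conventional investment software has used for decades. The rules run
# the same way forever, until a human edits them.

def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def static_signal(prices):
    """A classic pre-set criterion: buy when the 10-day average
    crosses above the 50-day average, sell when it falls below."""
    if len(prices) < 50:
        return "HOLD"  # not enough history for the rules to apply
    short = moving_average(prices, 10)
    long_ = moving_average(prices, 50)
    if short > long_:
        return "BUY"
    if short < long_:
        return "SELL"
    return "HOLD"

# The program emits the same signal for the same data, every time.
print(static_signal([100 + 0.5 * day for day in range(60)]))  # "BUY"
```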

An AI program, by contrast, is given a massive database, some basic rules and a goal. As more information comes in, the program keeps learning how to achieve its goal better, continuously updating its problem-solving methods. This is "deep learning", whose greatest triumph is supposed to have come in 2017, when Google DeepMind's AI program AlphaGo beat world champion Ke Jie at Go, the most complex strategy game known to mankind. Fed the rules of the game and records of thousands of games played, AlphaGo had taught itself to be the best player on the planet.
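The toy below, a hand-rolled one-parameter learner nowhere near the scale of a real deep-learning system, shows the loop that paragraph describes: a goal, a stream of incoming data, and a method that keeps updating itself. The data and learning rate are invented for the example.

```python
# A toy version of the learning loop described above: the goal is to
# minimise prediction error, and every new observation nudges the
# program's internal parameter. Real deep-learning systems do this with
# millions of parameters; the principle is the same.

def online_learner(stream, lr=0.0001):
    """Learn y ~ w * x one observation at a time."""
    w = 0.0
    for x, y in stream:
        error = w * x - y       # how wrong is the current method?
        w -= lr * error * x     # update the method using the new data
    return w

# Data arriving over time from an unknown rule (here, y = 2x).
stream = [(x, 2 * x) for x in range(1, 100)]
print(round(online_learner(stream), 3))  # converges towards 2.0
```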

A spokesperson for BlackRock, the world's largest asset management firm, told The New York Times that BlackRock uses AI to tease out patterns that might evade human eyes and brains: identifying non-intuitive relationships between securities or market indicators, perusing social media "to gain insights on employee attitudes, sentiment and preferences", and monitoring search engines for the words being entered on particular topics. But, the spokesperson said, final buy-sell calls are taken by flesh-and-blood managers.
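As an illustration of the kind of social-media scanning described there, here is a deliberately crude sentiment scorer. The word lists and posts are invented, and real systems use far more sophisticated language models; this is not BlackRock's actual method.

```python
# Invented example: score posts by counting bullish and bearish words.
POSITIVE = {"growth", "beat", "strong", "bullish", "upgrade"}
NEGATIVE = {"miss", "layoffs", "weak", "bearish", "downgrade"}

def sentiment_score(posts):
    """Average sentiment across posts, from -1 (bearish) to +1 (bullish)."""
    scores = []
    for post in posts:
        words = post.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos + neg:
            scores.append((pos - neg) / (pos + neg))
    return sum(scores) / len(scores) if scores else 0.0

posts = ["Strong quarter, they beat estimates",
         "Layoffs coming, demand looks weak"]
print(sentiment_score(posts))  # 0.0: one bullish post cancels one bearish
```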

Do AI programs manage investments better than human experts? It is too early to tell. AI Powered Equity ETF (AIEQ), launched in October 2017, was the first fund to rely entirely on AI for decision-making (incidentally, AIEQ's creator EquBot is headed by Chidananda (Chida) Khatua, an alumnus of the Indian Institute of Science, Bengaluru). Its performance has been middling; it has consistently underperformed the benchmark S&P 500 index.

Sceptics point out that AI has problems in both the short term and the long. In the short term, markets are often irrational, moving on rumours or "market sentiment". There can also be totally out-of-the-blue events. AI programs make their decisions by studying history: they suss out past patterns and expect them to continue. If their databases contain no obviously similar events, they are stumped.
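The objection can be stated in a few lines of code. The sketch below, using invented data and a deliberately simple nearest-pattern lookup, shows why a purely history-driven program has no answer for an unprecedented event.

```python
# Invented illustration: predict tomorrow by finding the most similar
# day in history. When nothing in history resembles today, the program
# is, as the sceptics say, stumped.

def predict_from_history(history, today, max_distance=1.0):
    """history: (conditions, next_day_return) pairs; conditions are a
    single number here for simplicity."""
    best = min(history, key=lambda h: abs(h[0] - today))
    if abs(best[0] - today) > max_distance:
        return None  # out-of-the-blue event: no comparable precedent
    return best[1]

history = [(1.0, 0.02), (1.5, 0.01), (2.0, -0.01)]  # calm past days
print(predict_from_history(history, 1.2))  # 0.02: resembles the past
print(predict_from_history(history, 9.0))  # None: unprecedented shock
```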

In the long term, there may be an unlimited number of factors at play, from geopolitics to the climate, with the rules changing all the time. Go may be an incredibly complex game, but it still has iron-clad rules. And how would an AI ever grasp the "value investing" philosophy of visionaries like Warren Buffett: that you are buying not merely a stock, but a company?

But the most crucial issue is something else. Beyond a certain point, an AI program's creators can no longer make out the logic their deep-learning whizkid is following. In 2017, for instance, Facebook shut down two of its AI programs after they started talking to each other in a language they had invented, incomprehensible to humans. So how does a fund manager explain losses to an investor when he himself has no idea why certain decisions were made? And who is responsible for the losses? The AI is now an independent decision-making entity, much like a person whose parents have paid only for a basic education and lifetime unlimited internet access. These questions apply not just to investments but to every field where AI is set to make deep inroads, including those where fatalities or mass misery may be involved.

So, whose fault will it be then? Samathur Li Kin-kan's fraud case is merely the tip of an iceberg that human laws are currently ill-equipped to handle.

Sandipan Deb is former editor of ‘Financial Express’ and founder-editor of ‘Open’ and ‘Swarajya’ magazines
