When Judge Frank H. Easterbrook was asked, in 1996, to deliver a lecture on "Property in Cyberspace", he titled his talk "Cyberspace and the Law of the Horse". His curious choice of title was his way of calling out the foolishness of formulating special laws for new technologies when general principles would suffice just as well.
There were any number of cases, he said, that dealt with the sale of horses, and even more in which courts had been asked to address injuries suffered by people kicked by horses. But that did not mean one needed to collect these strands into a course on "the Law of the Horse". All one needed to do was study how the general law of property, torts and commercial transactions applied to the horse trade.
In the same way, there was no need to create a new law for cyberspace. All we had to do, he said, was see how our existing laws applied to this new domain.
In its recent report, the Union government's task force on Artificial Intelligence (AI) has recommended that all the legal provisions applicable to users of AI (whether individuals or bodies corporate) should apply equally to autonomous machines. This seems to be a page out of Easterbrook's book, but I wonder whether it is the right approach to take, given the peculiar nature of AI.
One of the fundamental questions in the context of regulating AI is personhood, and many have chosen to look at this problem through the lens of intellectual property law. Since AI algorithms are already capable of creating poetry, music and art of their own accord, without human intervention, the question is whether we should amend our copyright laws to vest intellectual property rights in the AI that created these works. While this is certainly one aspect of personhood, granting copyright to algorithms for the works they create seems little more than an attempt to anthropomorphize computers for their seemingly human-like creativity.
This hardly seems to be the sort of issue we should be spending our time on. When we gave corporations personhood, we did so because it served an economic objective. The separate legal identity of a corporation protects shareholders from liability and vests the right to sue and be sued directly in the corporation. If we are to grant personhood to AI, it should be for a similar reason, to meet some desired social outcome, and not because it can draw well.
It is perhaps more appropriate to look at this question in the context of liability. Take autonomous vehicles, for example. Today, drivers are responsible for the accidents that occur when they are in control of their vehicles. Once those vehicles become autonomous, drivers will no longer control the cars they sit in. Since our current laws affix liability on the person in control, a strict application of those laws would make the autonomous car itself liable. In the absence of a law of personhood that recognizes AI as a separate legal entity, the victim will have no one to sue. This is clearly not a desirable social outcome.
One obvious regulatory solution would be to shift the liability upwards, to the programmer who wrote the specific lines of code that resulted in the harm. However, fully autonomous AI works as well as it does precisely because its algorithms make their decisions in a black box: the exact rationale for how and why a decision was made cannot be adequately explained in human terms. This makes it very hard to attribute liability to a specific line of code or a particular programming choice.
It is for this reason that we will need to evolve a new and, perhaps, bespoke regulatory framework for AI if we want to achieve our social outcomes.
The best neural networks sacrifice the explainability of their decisions for the accuracy of their outcomes. This works very well for certain types of problems, such as accurately detecting cancer at an early stage or predicting the weather. In these circumstances, we don't really care to understand how the AI detected the cancer or why it suddenly changed the forecast to rain, as long as it gets it right.
However, in situations where an AI algorithm can impinge upon our lives, for example in an AI-based criminal sentencing program, it is far less acceptable for these decisions to be made in a black box.
If we allow AI algorithms to decide on such matters, we must ensure that the choices they suggest are explainable. If, as a result, the algorithm is less accurate, that is a trade-off that would, in my opinion, be acceptable, given the social necessity of ensuring that no human being is deprived of life without reason.
We cannot, therefore, blindly apply our traditional principles of law to the regulation of AI. Instead, we would do well to develop a regulatory framework that takes into account the particular nature of AI and finds a way to regulate it in the context of the social outcomes we want to achieve.
This might require us to come up with a new set of principles that we apply differently to different types of AI and to the different circumstances in which they are used. I would rather do that than assume that our general principles will serve us well. Perhaps this is one of those situations where it is sensible to develop a Law of the Horse.
Rahul Matthan is a partner at Trilegal. Ex Machina is a column on technology, law and everything in between. His Twitter handle is @matthan.
Comments are welcome at views@livemint.com