Generative artificial intelligence (AI) has redefined what artificial intelligence can do, revolutionising industries, driving innovation and transforming how we work and play. Generative AI uses algorithms to produce novel textual, audio and visual content in response to user prompts. While generative AI has tremendous potential in a variety of applications, such as video game design, marketing campaigns and software code generation, it is not limited to the creation of unique content.
The use of generative AI in the medical sector is also being explored: it can improve the accuracy of diagnostic imaging, or summarize key data points, such as family history and lifestyle, ahead of medical consultations. In the legal sector, generative AI can help draft contract provisions. Most of us will admit to having sheepishly googled whether generative AI is likely to put us out of our jobs!
Though generative AI broadly delivers "human-like intelligence" at a significantly reduced cost, it has unlocked concerns around IP ownership, cyber security and privacy, and is known to perpetuate bias.
Since generative AI models are trained on existing content from the internet, they are likely to inherit its stereotypical biases if left unchecked. For instance, Stable Diffusion, which generates images using AI, is known to take racial and social biases to extremes: in its output, women are seldom engineers, lawyers or doctors, dark-skinned men are depicted as criminals, and brown-skinned people are depicted as immigrants. Given that various creative giants are already looking to harness generative AI for content creation, careless use of AI is likely to perpetuate bias and stall any progress towards rightful representation.
An ongoing battle with generative AI concerns deepfakes and impersonation attacks. Generative AI makes it easier to mount phishing scams and hoaxes, identity theft, election manipulation and financial fraud. You may have heard of calls impersonating a friend in a crisis who needs help only in the form of an immediate fund transfer. What is unnerving is that these hoaxes are no longer limited to voice calls and now include video calls using face-swap technology.
Generative AI is also prone to errors when answering questions about the practical functioning of the world. When generative AI misinterprets a question, it may make up an answer riddled with factual errors; these are commonly known as “hallucinations”. Despite being erroneous, these responses read as quite convincing and authentic. For instance, generative AI tools have produced convincing but incorrect responses to medical questions, and in another instance, one fabricated stories of sexual harassment against a US professor. All of this raises pressing questions about the reliability of generative AI.
Another risk associated with generative AI is the breach of intellectual property rights. Since generative AI relies on vast volumes of data, content generated by AI could be the result of infringing artists' intellectual property. The artist community is increasingly anxious about generative AI tools like Midjourney and Dall-E, which are not only capable of generating high-quality images suitable for commercial use, but can also mimic the expression and style of individual artists. An ongoing debate is whether the current legal framework on intellectual property addresses the concerns around AI-generated content, both in terms of works being used in AI training and the ownership of AI-generated content.
Recently, in the case of Thaler v. Perlmutter, a US district judge upheld the decision of the US Copyright Office to deny a copyright application for an artwork generated by AI. The artwork was autonomously generated by AI, with no human involvement. The Copyright Office denied the application on the ground that the work lacked human authorship, reiterating its earlier position that human creativity is an essential element of a valid copyright claim.
Earlier too, images generated by Midjourney were refused registration by the US Copyright Office. In a guideline, the office said that copyright protection depends on whether the AI’s contributions are “the result of mechanical reproduction”, such as in response to text prompts, or whether they reflect the author’s “own mental conception”. The office added: “The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.” It reiterated that copyright protection also depends on the level of human involvement in the creation of the work.
Endless questions arise from the ambiguity around the copyrightability of AI-generated content. The most pertinent, and most actively debated, question is who owns the IP in AI-generated output, which leads us back to the basic question of whether AI-generated content even qualifies as intellectual property. Given the growing popularity of generative AI and the increasing number of people invested in this sector, the uncertainties and debates around copyrightability need to be settled soon.
The future likely involves humans and AI working collaboratively, with generative AI assisting in tasks that require creativity, problem-solving, and pattern recognition. While generative AI offers incredible possibilities, the associated risks should not be overlooked. Ongoing research and development will drive advancements in generative AI, resulting in more sophisticated models, improved understanding of human creativity, and refined algorithms.
Like any other emerging technology, the AI space is extremely dynamic and will always be ahead of existing laws. While countries and organizations are coming up with discussion papers to tackle the issues associated with generative AI and build a regulatory framework, the obscurity and inscrutability of AI make it challenging to draft an all-encompassing law. Therefore, existing legislation should be modified to build guard rails around the different AI risks. To begin with, content generated by AI models should be moderated, with a mechanism to suspend violative or illegal content. Data used for training should be sourced lawfully, and safety nets should be built in for publicly available sensitive personal information. The regulatory approach should lean towards the trusted and responsible adoption of generative AI, shaped through open consultations with stakeholders, without hindering innovation.
Vivek Kathpalia is MD and CEO – Singapore, Cyril Amarchand Mangaldas; and Shambhawi Mishra is principal associate designate at Cyril Amarchand Mangaldas.