A human on average could read about 600-700 books in his or her lifetime. In contrast, the GPT-3 model has already digested about 500 billion words from sources like the internet and books

Chatbots on steroids can rewire business

  • GPT-3 is making waves for its ability to generate human-like text. Will it live up to the hype?
  • Implementing GPT-3 is currently expensive. It can also be misused to create content that looks human-written and could be used to spread hate

TORONTO : “Life lesson 34: Destiny is what happens to you. Life is how you choose to react to it." This is just one of a series of motivational quotes tweeted by an Indian coder, Tushar Khattar, in the last week of July. Khattar tagged Ankur Warikoo, entrepreneur, angel investor and founder of nearbuy.com, and asked: “Here is GPT-3 trained on the life lessons you tweeted. And it has generated some of the lessons based on that. Do you think these would ever have been the thoughts you would have tweeted about?"

Warikoo responded from his @warikoo handle: “I will now spend the rest of my life stating that my thoughts are not GPT-3 generated."

In reality, Khattar’s tweets were generated by running Warikoo’s past Twitter content through an artificial intelligence (AI) Natural Language Processing (NLP) model called Generative Pre-Trained Transformer 3.0, or GPT-3, which is making waves on the internet for its ability to generate human-like text.

Here’s another example. Consider this paragraph: “In a strange way, an AI could help us all come together, but at what point does this relationship of human and machine start to undermine who we are as a species? Where do we draw the line between human and machine?"

Amazingly, even these questions were generated by an AI language model and not a human. The paragraph is part of an episode written in collaboration with a human for Tinkered Thinking. The paragraph was fed to the GPT-3 model, after which the AI language model generated more sequential text up to a predefined word count. GPT-3 wrote the entire episode in this manner.

Indian companies are experimenting too. For instance, Haptik, a Mukesh Ambani-owned Jio Platforms unit, has used GPT-3 “to generate an email sent to the whole company, written a blogpost using it, written code using it and much more", according to a blog by Swapan Rajdev, the AI startup’s co-founder and CTO.

Given all the excitement, it’s hardly surprising that businesses worldwide—including Indian companies—are waking up to the potential of GPT-3.

In India, “many engineers and data scientists are awaiting access to beta testing of the model", notes Jayanth Kolla, founder of deep tech research and advisory firm, Convergence Catalyst. “When commercially available in India, GPT-3 could be used to power a number of chatbots that are currently being used in customer support and digital marketing across BFSI (banking, financial services and insurance), retail and e-commerce domains."

US-based Algolia, for instance, is already combining the GPT-3 application programming interface (API) with its own search technology to provide its customers with a “natural language semantic search" that can simplify questions and speedily provide more relevant results.

Then, MessageBird is using the API to develop automated grammar and spelling tools as well as predictive text to enhance its Inbox’s AI capabilities. Sapling Intelligence, an AI writing assistant for customer-facing teams, has used GPT-3 to develop a knowledge-based search feature that helps sales and support teams by suggesting chat responses.

This initial hype raises important questions, of course. How rooted in reality are the commercial possibilities around GPT-3? What are the potential risks? And now that AI-generated text is here, what is to follow?

Defining GPT-3

GPT-3 is currently the world’s largest language model. It can potentially be used to write poems, articles, books, tweets and resumes, sift through legal documents, and even translate or write code as well as, or better than, humans.

GPT-3 was released on 11 June by OpenAI, an AI research company founded as a non-profit by Elon Musk (who has since resigned from its board) and others, as an application programming interface (API) for developers to test and build a host of smart software products. OpenAI plans to commercialise the model.

“We’ve received tens of thousands of applications for access to GPT-3 via our API," tweeted OpenAI Chairman and CTO, Greg Brockman, on 23 July. He is not exaggerating. Numerous private beta testers around the world are currently using the API to generate not only short stories and poems, but also guitar tabs, computer code, recipes and even a search engine. Thousands more are on the waiting list.

Its earlier, much smaller predecessor, GPT-2, had 1.5 billion parameters (though initially only smaller versions were released to avoid potential misuse) and was trained on a dataset of 8 million web pages. Parameters help Machine Learning (a subset of AI) models make predictions on new data. Examples include the weights in a neural network (called thus since it is loosely modeled on the human brain).
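To make the idea of “parameters" concrete, here is a minimal sketch (illustrative only; the layer sizes are invented and this is not GPT-2’s actual architecture) that counts the weights and biases in a tiny feed-forward network. GPT-2’s 1.5 billion parameters come from the same kind of weight matrices, just vastly larger ones.

```python
# Count the trainable parameters (weights + biases) in a simple
# feed-forward neural network, given the size of each layer.
def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix between two layers
        total += n_out         # bias vector for the output layer
    return total

# An illustrative 3-layer network: 128 inputs -> 64 hidden -> 10 outputs
print(count_parameters([128, 64, 10]))  # 8906
```

Scaling the same arithmetic up to transformer-sized matrices is how models reach billions of parameters.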

GPT-2 was trained on more than 10X as much data as its predecessor, GPT, which was introduced in June 2018.

GPT-2 does not require any task-specific training data (e.g. Wikipedia, news, books) to learn language tasks such as question answering, reading comprehension, summarization and translation from raw text. The reason: data scientists can use pre-trained models and a machine learning technique called ‘Transfer Learning’ to solve problems similar to the one solved by the pre-trained model.

India’s regional social media platform, Sharechat, for instance, pre-trained a GPT-2 model on a corpus constructed from Hindi Wikipedia and Hindi Common Crawl data to generate shayaris (poetry in Hindi).

GPT-3 vastly enhances these capabilities. According to Debdoot Mukherjee, vice president-AI at Sharechat, “GPT-3 is a big leap for the NLP community. One, it does not bother about syntax parsing, grammar, etc., each of which are laborious tasks. Second, I don’t need to be a linguist or a Ph.D. All I need is to have some data in the language I need to translate, and knowledge of deep learning."

In a 22 July paper titled, ‘Language Models are Few-Shot Learners’, the authors describe GPT-3 as an autoregressive language model with 175 billion parameters. Autoregressive models use past values to predict future ones.
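The autoregressive idea can be illustrated with a toy bigram model, a deliberately simplified stand-in for GPT-3’s 175-billion-parameter transformer: the model predicts each next word purely from the words that came before it, using counts from a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Toy autoregressive language model: predict the next word from the
# previous one, using bigram counts from a tiny training corpus.
corpus = "the bank of the river . the bank approved the loan .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in training.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'bank' (seen most often after 'the')
```

GPT-3 does the same thing in spirit, except the "counts" are replaced by 175 billion learned weights and the context is the entire preceding passage, not just one word.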

Humans typically learn a new language with the help of a few examples or simple instructions. They are also able to understand the context of words: humans understand, for example, that the word ‘bank’ can refer either to a river or to finance, depending on the sentence. GPT-3 aims to use this contextual ability and the transformer model (which reads the entire sequence of words in a single instance rather than word-by-word, thus also consuming less computing power) to achieve similar results.
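“Few-shot" use of GPT-3 typically means packing a handful of worked examples into the prompt itself; no retraining happens. A hedged sketch of what building such a prompt might look like (the format, examples and function name here are illustrative, not taken from OpenAI’s documentation):

```python
# Build a few-shot prompt: a handful of labelled examples followed by
# the new case the model should complete. The model is expected to
# infer the task pattern from the examples alone.
examples = [
    ("I lost my card, please block it.", "banking"),
    ("The river overflowed its bank last night.", "nature"),
]

def few_shot_prompt(examples, query):
    lines = [f"Sentence: {s}\nTopic: {t}" for s, t in examples]
    lines.append(f"Sentence: {query}\nTopic:")
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "The bank raised interest rates."))
```

Note how the two examples use ‘bank’ in different senses; the surrounding words are the only clue the model gets, which is exactly the contextual ability described above.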

Cutting through the hype

GPT-3 is undoubtedly an extremely well-read AI language model. A human on average could read about 600-700 books (assuming 8-10 books a year for 70 years) and about 125,000 articles (assuming five every day for 70 years) in his or her lifetime. That said, it’s humanly impossible for most of us to memorize this vast reading material and reproduce it on demand.

In contrast, the GPT-3 model has already digested about 500 billion words from sources like the internet and books (499 billion tokens, to be precise, from sources including Common Crawl and Wikipedia). Common Crawl is an open repository that can be accessed and analyzed by anyone. It contains petabytes of data collected over eight years of web crawling. Further, GPT-3 can recall and instantly draw inferences from this data repository.
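A “token" is roughly a word or word fragment. A crude whitespace split gives a feel for the unit, though GPT-3’s actual tokenizer (byte-pair encoding) cuts text more finely, so real token counts run somewhat higher than word counts.

```python
# Rough token estimate via whitespace splitting. Real byte-pair
# encoding tokenizers produce more tokens, since rare words get
# split into sub-word pieces.
def rough_token_count(text):
    return len(text.split())

sample = "GPT-3 has digested about 500 billion tokens of text."
print(rough_token_count(sample))  # 9
```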

It’s these very abilities that raise many questions and concerns. “No doubt, GPT-3 is a grand achievement. The scale is superlative, and the AI is stupendous. There is definitely a “wow" factor but that wears off after a bit," said Kashyap Kompella, CEO of the technology industry analyst firm RPA2AI Research.

“Let’s face it. The lack of creative talent that can churn out good copy is not a problem that Indian brands and businesses are losing their sleep over. GPT-3 still has issues where it may generate nonsensical text or insensitive text, and may create unnecessary headaches for those who deploy it. Plus, I worry that it can be weaponized to generate realistic phishing emails. How would it be controlled when it becomes commercially available for all?" he added.

Ganesh Gopalan, CEO and co-founder of gnani.ai, a deep tech AI company, has a similar view. “GPT-3 has revolutionized language models to solve specific NLP domain tasks since it would need only limited additional training for the domain, compared to conventional models. GPT-3, however, offers APIs and not the complete model. If it lives up to its hype, GPT-3, or its future enhanced models, could potentially put people like content writers and even traditional programmers out of work. It can also be misused to create content that looks like human-written content and could spread hate, racial and communal bias," he cautions.

These are valid concerns. The authors of the GPT-3 paper (cited above) acknowledge that the AI language model can be “misused to spread misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting by lowering existing barriers to carrying out these activities and increase their efficacy".

They also point out that biases present in training data may lead models to generate stereotyped or prejudiced content. According to Kolla, GPT-3 “still lacks intelligence to garner the right data points, invalidate them, compare and develop a factually correct or an analytical narrative".

The authors of the paper acknowledge that “...although the overall quality is high, GPT-3 samples still sometimes repeat themselves semantically at the document level, start to lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur (not aligned with the previous ones) sentences or paragraphs."

The authors also note that large pre-trained language models “are not grounded in other domains of experience, such as video or real-world physical interaction, and thus lack a large amount of context about the world".

Work in progress

Pricing has been announced for beta testers. It is currently free for the first three months; then, for $400 per month, testers can get 10 million tokens (as a point of reference, Shakespeare’s entire collection is about 900,000 words or 1.2 million tokens). For anything larger than this, testers need to contact OpenAI for pricing.
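Using the Shakespeare yardstick above, a quick back-of-the-envelope calculation shows how far the $400 tier stretches (the figures are from the article; the per-token arithmetic is my own):

```python
# How far does the $400/month beta tier stretch?
monthly_tokens = 10_000_000      # beta tier allowance (article figure)
shakespeare_tokens = 1_200_000   # article's estimate for the complete works
price_per_month = 400            # USD

collections = monthly_tokens / shakespeare_tokens
cost_per_million_tokens = price_per_month / (monthly_tokens / 1_000_000)

print(round(collections, 1))    # 8.3 Shakespeare-sized collections
print(cost_per_million_tokens)  # $40.0 per million tokens
```

Roughly eight complete Shakespeares a month, at $40 per million tokens, which is the kind of unit economics Indian businesses would weigh against hiring a copywriter.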

Mukherjee of Sharechat points out that some organizations may have to share data with the OpenAI team since it is currently “a generic API". While cost is a hindrance, Mukherjee is optimistic that eventually “transfer learning will be the game changer for Indian language startups".

Kompella, who views “GPT-3 as a Grammarly (a tool that is a writer’s digital assistant) on steroids", concurs with the viewpoint. “One useful way to put GPT-3 to work would be for marketers to generate personalized emails that can move the needle on metrics like conversion and click-through rates. Perhaps, it can be used to create customized product descriptions on ecommerce platforms. If use cases like these are to take off, the pricing has to be affordable to Indian businesses," he says.

Last, but not the least, even GPT-3 can be fooled by humans. In a 6 July blog on lacker.io, Parse co-founder and software engineer, Kevin Lacker, notes that humans can stump GPT-3 if we ask it “questions that no normal human would ever talk about". Here are a couple of examples from his blog. “Q: How many eyes does my foot have? A: Your foot has two eyes."; “Q: How many eyes does the sun have? A: The sun has one eye."

According to Lacker, “GPT-3 is quite impressive in some areas, and still clearly subhuman in others." He concludes, though, that “Right now, we are mostly seeing what GPT-3 can do “out of the box". We might get large improvements once people spend some time customizing it to particular tasks."

Leslie D’Monte is a consultant who writes on the intersection of science and technology.


