
Friday, January 20, 2023
By Leslie D'Monte

Why ChatGPT will only become stronger with GPT-4... and why Google’s Sparrow has not gained traction

Hardly a day goes by without a mention of ChatGPT, the wunderkind artificial intelligence (AI)-powered chatbot from OpenAI that has taken the online world by storm. Microsoft, for instance, announced on 16 January that it would add ChatGPT to its Azure cloud services in the near future. Microsoft chairman and CEO Satya Nadella tweeted, “ChatGPT is coming soon to the Azure OpenAI Service, which is now generally available, as we help customers apply the world’s most advanced AI models to their own business imperatives.”

ChatGPT is powered by Microsoft Azure, and Microsoft is reportedly in talks to invest $10 billion in OpenAI, according to a report by online media outlet Semafor, even as it mulls integrating ChatGPT with its search engine Bing. Microsoft, incidentally, had already invested $1 billion in OpenAI in 2019.


ChatGPT was released to the public on 30 November for testing and feedback, after which netizens were overwhelmed by the smart chatbot’s prowess at engaging with them while answering questions and writing code, poems, and essays, among other things. It is a GPT-3.5 series model. To be sure, even the third iteration of the Generative Pre-trained Transformer (GPT-3), with 175 billion parameters, impressed many with its ability to write human-like poems, articles, books, tweets, resumes, and even code.

Limitations: GPT-3 is trained to predict the next word on a large dataset of internet text, but it can also generate untruthful and toxic comments, spread misinformation and spam, and write fraudulent academic essays. OpenAI, co-founded by Elon Musk, the Tesla, SpaceX and Twitter owner (who is no longer associated with OpenAI), is attempting to address these limitations with ChatGPT by using Reinforcement Learning from Human Feedback (RLHF) to make it “more truthful and less toxic” with the help of human supervisors.

ChatGPT does have limitations. OpenAI points out that the models may not have knowledge of current events, since the default models were trained on data only till the end of 2021. OpenAI also acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers”.

Regardless, content creators and voice actors have their work cut out with intelligent software mimicking their writings, art, voice, and emotions.

Consider these developments: OpenAI’s DALL-E can generate realistic art and images from plain text prompts, ChatGPT can write poems, articles, books and even code, and Microsoft’s text-to-speech AI model, VALL-E, can simulate a person’s voice from just a 3-second recording.

Initial results show that VALL-E can also preserve the speaker’s emotional tone. According to the paper’s authors, VALL-E was pre-trained on 60,000 hours of English speech data, which the paper claims is “hundreds of times larger than existing systems”.

OpenAI’s WebGPT prototype uses a text-based browser to submit search queries, follow links, scroll web pages, and also cite sources.

How they work: Large language models (LLMs) like GPT-3 and chatbots like ChatGPT are trained on billions of words from sources such as the internet and books, including datasets like Common Crawl and Wikipedia, which gives them a broader store of facts than most humans. LLMs use transformer neural networks to read many words (sentences and paragraphs, too) at a time, figure out how they relate to each other, and predict the following word.
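
To make “predict the following word” concrete, here is a minimal sketch using the freely available GPT-2 model, a much smaller predecessor of GPT-3, via the Hugging Face transformers library. ChatGPT’s own weights are not public, so GPT-2 stands in purely for illustration:

# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# GPT-2 stands in here for GPT-3/ChatGPT, whose weights are not public.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Large language models are trained to predict the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The logits at the final position score every token in the vocabulary
# as a candidate continuation; softmax turns them into probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id)):>12}  p={prob:.3f}")

Sampling from those probabilities, appending the chosen word, and repeating is, at heart, how these chatbots generate whole essays one token at a time.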

However, while LLMs such as GPT-3 and models like ChatGPT may outperform humans at some tasks, they do not understand what they read or write in the way humans do. Moreover, these models rely on human supervisors to make them more sensible and less toxic.

OpenAI explains on its website that ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response. The model uses Reinforcement Learning from Human Feedback (RLHF), applying the same methods as InstructGPT (from the GPT-3.5 series) but with tweaks to the data collection setup. ChatGPT finished training in early 2022, and both ChatGPT and GPT-3.5 were trained on an Azure AI supercomputing infrastructure. GPT-3.5 refers to the series of models trained on a mix of text and code from before the fourth quarter of 2021, according to OpenAI.
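
At the heart of RLHF is a reward model trained on human preferences: labellers rank pairs of model answers, and the reward model learns to score the preferred answer higher. Below is a minimal, hypothetical sketch of that pairwise loss in PyTorch; the tiny RewardModel and the random stand-in embeddings are illustrative assumptions, not OpenAI’s actual architecture:

import torch
import torch.nn as nn

# Hypothetical stand-in: a real reward model is itself a large language
# model that maps a (prompt, response) pair to a single scalar score.
class RewardModel(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Pretend embeddings for a batch of answer pairs where human labellers
# preferred "chosen" over "rejected".
chosen = torch.randn(8, 128)
rejected = torch.randn(8, 128)

# Pairwise preference loss: push the chosen answer's score above the
# rejected one's. This loss is described in the InstructGPT paper;
# everything else here is simplified for illustration.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"reward-model loss: {loss.item():.4f}")

The trained reward model then supplies the signal that a reinforcement-learning step (PPO, in InstructGPT’s case) uses to fine-tune the chatbot itself; human supervisors enter the loop through those rankings rather than by editing answers directly.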

ChatGPT, as I explained above, is based on the GPT-3.5 series. Alphabet-owned DeepMind (which Google acquired in 2014) has a similar AI-powered chatbot called Sparrow, which has not made waves. Sparrow, too, is described as a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers. The AI agent is designed to talk with a user, answer questions, and search the internet using Google when it’s helpful to look up evidence to inform its responses”.

DeepMind acknowledges that training a conversational AI is challenging because it’s hard to pin down what makes a dialogue effective. To address this, DeepMind used a form of reinforcement learning (RL) based on people’s feedback: participants’ preference judgments were used to train a model of how useful an answer is. The model was also trained to desist from making “threatening statements” and “hateful or insulting comments”, and DeepMind researchers imposed rules such as avoiding potentially harmful advice and never claiming to be a person. A sketch of how such signals might combine appears below.
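
Sparrow’s twist is that, alongside the preference reward, a second model estimates whether a response breaks one of those rules. Here is a hypothetical sketch of how the two signals could be folded into a single training reward; the function and the penalty weight are illustrative assumptions, not DeepMind’s published implementation:

# Hypothetical illustration of Sparrow-style reward shaping: combine a
# learned preference score with penalties from a rule classifier that
# flags violations such as threats, insults, harmful advice, or the
# model claiming to be a person.
def combined_reward(preference_score: float,
                    rule_violation_probs: dict[str, float],
                    penalty_weight: float = 1.0) -> float:
    """Subtract a penalty for each rule a response likely breaks.

    preference_score: output of the learned human-preference model.
    rule_violation_probs: per-rule probabilities from a rule classifier.
    penalty_weight: illustrative; the real weighting is a tuning choice.
    """
    total_penalty = sum(rule_violation_probs.values()) * penalty_weight
    return preference_score - total_penalty

# Example: a fluent answer that very likely claims to be a person.
score = combined_reward(
    preference_score=0.8,
    rule_violation_probs={"never claim to be a person": 0.9},
)
print(f"{score:.2f}")  # -0.10: the rule violation outweighs the fluency

The design choice is the point: an answer people like can still be penalised out of contention if it breaks a safety rule.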

Sparrow currently focuses on English, and DeepMind says its researchers are working to ensure similar results across other languages and cultural contexts. And though it was announced in September 2022, Sparrow is not well known because, unlike ChatGPT, it has not been released for public feedback and testing. ChatGPT is free to test for now, though we may have to pay for it soon, and that openness made it a darling of the masses.

What next? LLMs are increasing in size. Google’s BERT, for instance, was trained with 340 million parameters, while GPT-3 has 175 billion parameters. Megatron-Turing NLG, a model released in 2022 by Nvidia and Microsoft, is trained with 530 billion parameters. Google’s Pathways Language Model (PaLM) consists of 540 billion parameters, while Google Brain’s open-sourced ‘Switch Transformer’ natural language processing (NLP) AI model scales up to 1.6 trillion parameters. A team from OpenAI, creators of the GPT-3 model, found that NLP performance scales with the number of parameters, which are essentially the parts of the model learned from training data that define its skill on a task such as generating text. A model can acquire more granular knowledge and improve its predictions as its parameter count grows.
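
For a sense of where these counts come from, a transformer’s size is dominated by its attention and feed-forward weight matrices, roughly 12 times the square of the hidden size per layer. A back-of-the-envelope sketch using GPT-3’s and BERT-large’s published architecture figures (the approximation deliberately ignores embeddings and biases):

def approx_transformer_params(n_layers: int, d_model: int) -> float:
    """Rough parameter count for a GPT-style transformer.

    Each layer has ~4*d^2 weights in attention (the Q, K, V and output
    projections) plus ~8*d^2 in the feed-forward block (two matrices of
    size d x 4d), so ~12*d^2 per layer. Embedding matrices and biases
    are ignored in this approximation.
    """
    return 12 * n_layers * d_model ** 2

# GPT-3's published configuration: 96 layers, hidden size 12,288.
print(f"GPT-3      ~{approx_transformer_params(96, 12288) / 1e9:.0f}B parameters")
# BERT-large: 24 layers, hidden size 1,024.
print(f"BERT-large ~{approx_transformer_params(24, 1024) / 1e6:.0f}M parameters")

Adding back the embedding matrices brings those estimates close to the published totals of 175 billion and 340 million, which is why parameter counts track model width and depth so directly.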

All these developments indicate that such AI-powered chatbots will transform content creation methods even as they make many content creators redundant, especially those who do not figure out ways to complement and use these tools. Institutions and lecturers around the world have begun urging authorities to review the way courses are assessed over concerns that students are using ChatGPT to write papers, which can be tantamount to plagiarism since there is no mention of sources and links. It’s also important to note that just as the internet rewired our brains, tools such as ChatGPT and Sparrow may do likewise. With the advent of calculators, for instance, few can do maths in their heads today.

The release of GPT-4, which is expected to significantly outperform GPT-3 with its rumoured 100 trillion parameters, will only raise more such questions by making tools like ChatGPT write better than most humans. But there will be room for original ideas and content creators. Well! At least for now.


Which company currently holds the record for building the world’s largest quantum computer?

  • Microsoft
  • Nvidia
  • IBM
  • Google
  • D-Wave Systems

(The correct answer is given below)


‘Tragedy Paradox’: Can listening to sad music help in times of sadness?

It’s intuitive that music affects our emotions, and that many people turn to music to boost their mood or to help them through a tough time. But why do we sometimes seek out sad music, even when we try to minimize sadness in our lives? IIT Mandi researchers may have an answer to this so-called “tragedy paradox” that has puzzled philosophers for centuries. “We wanted to find out how the brain reacts when listening to sad music after having an adverse experience or memory,” said Prof. Laxmidhar Behera, now Director of IIT Mandi, who led a study to answer this riddle.

Picture courtesy of IIT Mandi

The researchers used electroencephalography (EEG) to measure the brain activity of 20 people under different conditions. EEG measures the electrical activity of the brain, commonly known as brain waves. The researchers focused on brain regions involved in emotion and memory processing: the cingulate cortex complex and the parahippocampus. The participants had no musical training, to avoid pre-programmed reactions to the selected music.

IIT Mandi, in its press release, underscores that brain waves are of five types – alpha, beta, gamma, delta, and theta – each representing a different mood or state of mind. In the current context, alpha is associated with cognitive information processing, while gamma is associated with episodic memory processing. The researchers found that when participants recalled a sad experience (sad autobiographical recall, or SAR), gamma wave activity was enhanced, while listening to sad music led to increased alpha activity. The finding shows that the brain’s activity when listening to sad music is unique and distinct from both the SAR state and the baseline resting state.
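
For readers curious how such band-wise activity is quantified, the standard approach is to estimate the signal’s power spectrum and integrate it over each frequency band (alpha is roughly 8-12 Hz, gamma roughly 30-100 Hz). A minimal sketch with SciPy, using a synthetic signal in place of real EEG data; the band edges and parameters here are common conventions, not the study’s exact pipeline:

import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz, typical for EEG
t = np.arange(0, 10, 1 / fs)

# Synthetic stand-in for an EEG channel: a 10 Hz (alpha) and a 40 Hz
# (gamma) component buried in noise.
signal = (np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
          + 0.5 * np.random.randn(t.size))

# Welch's method estimates the power spectral density of the signal.
freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi] Hz to get the band power."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

print(f"alpha (8-12 Hz):  {band_power(freqs, psd, 8, 12):.3f}")
print(f"gamma (30-100 Hz): {band_power(freqs, psd, 30, 100):.3f}")

Comparing such band powers across the resting, SAR, and music-listening conditions is what lets the researchers say that alpha rises with sad music while gamma rises with sad recall.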

The coping mechanism of sad music arises from better processing of emotions and memories in the alpha state. “The coping effects are not simply due to the aesthetic appeal of the music, as previously believed, but an inherent property of sad music”, explains Ashish Gupta, a PhD scholar. The work has significance for fields such as music therapy and music training, where music is used, or can be used, as a therapeutic tool. These observations were recently published in the open-access journal PLOS One in a paper co-authored by Ashish Gupta, Prof. Braj Bhushan, and Prof. Behera.

Swiss startup Apostrophy to present Android-based OS at WEF

Apostrophy, a Swiss startup run by a number of mobile phone industry veterans, is set to present a new privacy-centric smartphone operating system (OS), dubbed AphyOS, at the World Economic Forum in Davos. The company is headed by Petter Neby, who also runs Punkt, a mobile phone venture focused on offering a low-distraction phone experience. Apostrophy is reportedly working on creating a rival to the established global duopoly of Google’s Android OS and Apple’s iOS, according to a report by Bloomberg.

AphyOS is based on GrapheneOS, an open-source version of Android. Apostrophy seemingly plans to charge vendors who run phones on its software a subscription fee, which would be its primary revenue model. The company also plans to raise $11 million in funding to further this plan. As part of the focus on privacy, AphyOS plans to segregate apps running on the platform in terms of how they interact with each other, a move similar to how Apple restricted cross-app user tracking in its push for app privacy.

To be sure, AphyOS will support apps that run on Android OS but will not come with Google’s Mobile Services preinstalled. This is also not the first attempt at a privacy-centric take on Android. In 2021, Gael Duval, the founder of Mandrake Linux, created /e/ OS, a privacy-focused fork of open-source Android that likewise supports Android apps without running Google Mobile Services.

The answer to the Quiz:

IBM currently holds this record with its quantum computer, christened Osprey, which has 433 qubits (quantum bits). That is more than triple the size of IBM’s own 127-qubit Eagle computer and more than eight times larger than Google’s 53-qubit Sycamore. D-Wave’s quantum computers have more than 5,000 qubits, but they use a different technology (quantum annealing) and hence are not comparable with the gate-based machines that IBM and Google have built.

I hope you folks have a great weekend. And do remember, we welcome your feedback. Write to me at
