
Friday, January 27, 2023
techtalk
By Leslie D'Monte

Will ChatGPT rewire our brains? And the danger of a ChatGPT Winter

I am bullish on online delivery platforms but prefer buying milk in person from my nearby kirana (mom-and-pop) shop. Call me old-fashioned, but it gives me a good reason to step out of the house while also providing me with the opportunity to chat with the owner--a middle-aged gentleman who is unassuming but can talk about practically any subject under the sun, be it politics, climate change, cricket, or even spirituality.

My estimate is that he has more than 500 regular customers. And what amazes me is that he remembers each one of them by face, knows their preferences, and can even offer alternatives. For instance, I like full-cream milk of a specific brand, and he typically keeps it ready well before I reach the counter if he spots me from a distance. At times, he also recommends that I sample some bread, cakes, or biscuits. While I don’t always accept his suggestions, I like the personal attention. I believe he has an outstanding photographic memory for faces and personal choices, and the whole experience makes me want to return to his shop.


Today, advanced data analytics and artificial intelligence (AI)-powered software yield better results for retail supermarkets with hundreds of thousands of customers. You may also soon have kiosks and androids greeting you by name when you enter a shop, knowing your every choice and move (not exactly comforting). But I always wonder whether machines will ever be able to express the range of emotions that my milk vendor can, or give me the comfort level that he does.

As a bonus, if I ever forget my phone and wallet at home (highly unlikely, but possible), my milk vendor will never deny me the goods; he will simply ask me to pay the next time, all with a smile on his face. I have yet to see an analytics-driven supermarket do so (here’s a business opportunity for someone to allow this, based on one’s buying patterns and credit standing). It all boils down to trust.

That said, my milk vendor cannot afford analytics or AI-powered software, and given the size of his shop, he may not even require it (he does not run a franchise). But he can remember names and faces just like our parents (and those born before the internet era) remembered phone numbers and many other details.

Double-edged smart tools: they can make us smart or dumb

If my milk vendor decides to use AI tools to do the analytics for him, it will certainly ease his work, but there is also a good likelihood of his brain no longer wanting to clutter itself with recalling faces and preferences. Likewise, will AI-powered tools like ChatGPT rewire our brains and weaken our reading, writing, comprehension, and communication skills?


People always raise such questions when new technologies enter the public domain. Nicholas Carr, author of ‘The Shallows – What the Internet Is Doing to Our Brains’, a Pulitzer Prize finalist, is one such person. He is a visiting professor of sociology at Williams College in Massachusetts and a former executive editor of the Harvard Business Review. When writing the book more than a decade ago, he argued that even if people get better at hopping from page to page (referring to the multiple internet links), they would still be losing their ability to employ a “slower, more contemplative mode of thought.” Carr updated the book in 2020, but his views on the internet have not changed.

Clay Shirky offers another perspective. In his book ‘Here Comes Everybody: The Power of Organizing Without Organizations’, he writes: “When we change the way we communicate, we change society.”

The fact is that new technologies--right from the wheel, the car, the steam engine, aeroplanes, submarines, radar, guns, nuclear power, space telescopes, satellites, rockets, and smartphones, to name a few--have progressively changed our worldview. While some educators and parents may see smartphones as distractions and as devices that corrupt the youth with fake news and pornography, the fact is that millions of young people have also learnt new skills by watching videos on YouTube, such as dancing, playing instruments, singing, editing, and making movies. So, while our brains forget certain skills like multiplication or addition due to lack of practice, they learn new ones.

Thinking aloud

In the communication space, the transformations are more sweeping since they have a bigger impact on how we think, organize, and reproduce ideas--hence, they may alarm people much more.

Consider these developments. Faculty at various colleges across the world, including in India, are discouraging first-time coders and school and college students from using AI-based platforms that can generate code or text on their own. GitHub Copilot, for example, is a tool co-developed by Microsoft-owned code repository GitHub and AI research firm OpenAI; it assists programmers by generating computer code from plain English prompts or auto-completing code blocks. Similarly, Blackbox allows developers to copy code from videos or turn any question into code. ChatGPT (and GPT-3 before it), meanwhile, can write prose, college essays, poetry, books, and even code without any human intervention.
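For the technically curious, here is a minimal sketch of the pattern behind such tools: send a plain-English prompt to a large language model and get code back. It assumes OpenAI’s openai Python library and an API key of your own; the model name and prompt are purely illustrative, and this is not how Copilot itself is built.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

    # Ask a GPT-3 model to turn a plain-English request into code.
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model available at the time of writing
        prompt="Write a Python function that checks whether a string is a palindrome.",
        max_tokens=150,
        temperature=0,  # keep the output deterministic, which suits code
    )
    print(response["choices"][0]["text"])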

According to faculty, the use of such tools can hamper how much students actually learn. For instance, coders need to develop logical thinking to become good coders, which is difficult if an automated tool is generating the code for them. The tricky part is that ChatGPT can replicate the style of any popular writer, such as P.G. Wodehouse or Ernest Hemingway, giving students a shortcut to mindlessly copying such work without comprehending its depth. This can also result in plagiarism.

Software tools, including Moss, Turnitin, Urkund and GPTZero, can be used to keep plagiarism in check. GPTZero is already being used by the likes of Harvard University, Yale, and the University of Rhode Island to track ChatGPT-driven plagiarism, according to a 16 January report by the New York Times. But these methods are not foolproof. Rapid advances in AI text generation will eventually make it very tough for tools like GPTZero to detect ChatGPT-generated content.
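Detectors like GPTZero reportedly lean on signals such as “perplexity”--how predictable a language model finds a piece of text, with suspiciously low perplexity hinting at machine-generated prose. Here is a bare-bones sketch of that idea, assuming the Hugging Face transformers and torch Python packages and using GPT-2 as a stand-in scoring model; real detectors combine many more signals and thresholds.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # How "surprised" GPT-2 is by the text; lower means more predictable.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    # Illustrative use only; no single score proves authorship either way.
    print(perplexity("The quick brown fox jumps over the lazy dog."))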

Instead of thinking of AI tools as the enemy, we must ask: Why are students comfortable presenting a ChatGPT-generated paper as their own when they would perhaps squirm at having a more intelligent student write their paper for them? It’s another matter that many students do outsource their papers to writing outlets globally.

As an example, Grammarly says on its website that it “...supports streamlined and effective writing. Our suggestions help identify and replace complicated sentences with more efficient ones, refresh repetitive language, and uphold accurate spelling, punctuation, and grammar”. I have noticed that those who push for search engine optimization (SEO)-friendly articles and headlines like to use such tools. These tools can certainly help thousands of people write better and faster in English, but does that mean grammar should no longer be taught in schools (remember Wren & Martin)?

That said, parents and educators must impress upon students that even if their ChatGPT-generated papers escape scrutiny, they will not have learnt how to think and be creative--lessons that will stand them in good stead all their lives. To paraphrase a famous proverb: “Give a person a fish, and you feed them for a day. Teach them to fish, and you feed them for a lifetime.”

ChatGPT, for instance, can complement writing skills if used judiciously--say, to understand the background of a particular subject, refine points for a presentation, or draft a letter or even a judicial document.

Consider these cases in point. According to research by a professor at the University of Pennsylvania’s Wharton School, ChatGPT was able to pass the final exam for the school’s Master of Business Administration (MBA) programme. Prof. Christian Terwiesch, who authored the research paper ‘Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course’, said the bot scored between a B- and B on the exam.

He summarized the “academic performance” of ChatGPT in his paper thus:

“First, it does an amazing job at basic operations management and process analysis questions, including those that are based on case studies. Not only are the answers correct, but the explanations are excellent.

Second, Chat GPT3, at times, makes surprising mistakes in relatively simple calculations at the level of sixth-grade Math. These mistakes can be massive in magnitude.

Third, the present version of Chat GPT is not capable of handling more advanced process analysis questions, even when they are based on fairly standard templates. This includes process flows with multiple products and problems with stochastic effects such as demand variability.

Finally, ChatGPT3 is remarkably good at modifying its answers in response to human hints. In other words, in the instances where it initially failed to match the problem with the right solution method, Chat GPT3 was able to correct itself after receiving an appropriate hint from a human expert.”

ChatGPT’s performance here, according to Prof. Terwiesch, has “important implications for business school education, including the need for exam policies, curriculum design focusing on collaboration between human and AI, opportunities to simulate real-world decision-making processes, the need to teach creative problem solving, improved teaching productivity, and more”.

Likewise, researchers who evaluated ChatGPT’s ability to perform clinical reasoning by testing it on questions from the US Medical Licensing Examination (the USMLE is a very tough exam, and Step 3 is taken by physicians who typically have completed at least six months to a year of postgraduate medical education) were surprised to discover that “ChatGPT is able to perform several intricate tasks relevant to handling complex medical and clinical information”. You may read the whole paper here.

Inspired by this performance of ChatGPT on the USMLE, clinicians at AnsibleHealth--a virtual chronic pulmonary disease clinic--have begun to experiment with ChatGPT as part of their workflows. “Inputting queries in a secure and de-identified manner, our clinicians request ChatGPT to assist with traditionally onerous writing tasks such as composing appeal letters to payors, simplifying radiology reports (and other jargon-dense records) to facilitate patient comprehension, and even brainstorming freely in a bid to kindle insight when faced with nebulous and diagnostically challenging cases. Overall, our clinicians reported a 33% decrease (future publication) in the time required to complete documentation and indirect patient care tasks,” the authors say.

Voices of Reason: Please don’t kill ChatGPT by over-hyping it

“To be clear: I’m not criticizing OpenAI’s work nor their claims. I’m trying to correct a perception by the public and the media who see chatGPT as this incredibly new, innovative, and unique technological breakthrough that is far ahead of everyone else. It’s just not,” Yann LeCun, Professor at NYU and Chief AI Scientist at Meta, tweeted on Tuesday. Some, of course, may argue that a rival is trying to underplay ChatGPT.

Critics, if any, should remember that LeCun pioneered convolutional neural networks (CNNs), which, along with recurrent neural networks (RNNs), laid the foundation for myriad AI applications in today’s world. CNNs, for instance, are used when dealing with large amounts of complex data, such as images. As an example, CNNs can help doctors examine thousands of pathology images to detect the presence or absence of cancer cells.
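For readers who want to see what a CNN actually is, here is a bare-bones sketch in PyTorch (my choice of framework, not anything from LeCun’s own work): a stack of learned filters that repeatedly detect local patterns and downsample, followed by a classifier, sized here for 28x28 grayscale images such as handwritten digits.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learn 16 local filters
                nn.ReLU(),
                nn.MaxPool2d(2),                              # downsample 28x28 -> 14x14
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn 32 deeper filters
                nn.ReLU(),
                nn.MaxPool2d(2),                              # downsample 14x14 -> 7x7
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # One dummy image through the network: a batch of 1, 1 channel, 28x28 pixels.
    logits = TinyCNN()(torch.randn(1, 1, 28, 28))
    print(logits.shape)  # torch.Size([1, 10])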

A 20 December note on ‘Generative AI’ by McKinsey rightly cautions that the “...awe-inspiring results of generative AI might make it seem like a ready-set-go technology, but that’s not the case. Its nascency requires executives to proceed with an abundance of caution. Technologists are still working out the kinks, and plenty of practical and ethical issues remain open.” The kinks, according to McKinsey, include the fact that ChatGPT can generate “entirely inaccurate information in response to a user question and has no built-in mechanism to signal this to the user or challenge the result”; its filters are not yet effective enough to catch inappropriate content; and intellectual property (IP) questions are up for debate. For instance, the McKinsey note asks: “When a generative AI model brings forward a new product design or idea based on a user prompt, who can lay claim to it? What happens when it plagiarizes a source based on its training data?”

In an interview, Sam Altman, too, expressed surprise at the excitement around ChatGPT. One of the goals, he insisted, is to put out AI tools like ChatGPT in a more responsible way, giving people, institutions, and policymakers more time to understand the implications, and giving society time to “update to massive changes”.

GPT-4, according to Altman, “will come out at some point...when we are, like, confident that we can do it safely, responsibly... In general, we will release technology much more slowly than people will like”. When asked by the interviewer whether GPT-4 will have 100 trillion parameters, as rumoured, Altman called it “complete bullshit”. He added, “I don’t know where it all comes from... people are begging to be disappointed...We don’t have an AGI (artificial general intelligence), and people seem to be expecting it.”

I had earlier written about ‘What does GPT-4 have in common with Rajnikant?’ Not much has changed since then. My simple point is that if the hype around ChatGPT continues, there’s a danger of an impending AI Winter, which occurs when general interest in the AI industry cools--both in terms of funding and public attention--simply because expectations get out of sync with ground realities.

That’s clearly not a desirable outcome. Further, regardless of whether ChatGPT becomes stronger or not with GPT-4, institutions and policymakers have to step in to devise the right frameworks for these tools.

Just for laughs (at least, for now)

I asked ChatGPT: When will chimpanzees start using ChatGPT? Are there experiments going on?

Its response: “It is currently unlikely that chimpanzees or any other non-human primates will be able to use ChatGPT or any similar technology in the near future. While chimpanzees and other primates are intelligent animals with advanced cognitive abilities, they do not possess the level of linguistic or technological proficiency required to operate such technology. There are ongoing research and experiments in the field of animal communication and language, and some studies have shown that certain primates are capable of understanding and using simple forms of symbolic communication, such as American Sign Language (ASL) or lexigrams. However, these forms of communication are very different from the type of language that ChatGPT is designed to understand and generate.”

DID YOU KNOW?

What is Helion Energy known for (Hint: Has something to do with ChatGPT)?

  • Carbon sequestering
  • Weather forecasting
  • Nuclear fusion
  • Space travel
  • Wind energy

(The correct answer is given below)

CUTTING EDGE

Can GPT-3 help to detect early signs of Alzheimer’s disease?

GPT-3 might someday be able to help doctors detect Alzheimer’s Disease in its early stages, according to research from Drexel University’s School of Biomedical Engineering, Science and Health Systems, which recently demonstrated that OpenAI’s GPT-3 program could identify clues from spontaneous speech that predict the early stages of dementia with 80% accuracy. Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing (NLP) programs for early prediction of Alzheimer’s. “We know from ongoing research that the cognitive effects of Alzheimer’s Disease can manifest themselves in language production,” said Hualou Liang, PhD, a professor in Drexel’s School of Biomedical Engineering, Science and Health Systems and a co-author of the research.

The researchers tested their theory by training the program with transcripts from a portion of a dataset of speech recordings compiled with the support of the National Institutes of Health, specifically for the purpose of testing NLP programs’ ability to predict dementia. The program captured meaningful characteristics of the word use, sentence structure, and meaning of the text to produce what researchers call an “embedding”--a characteristic profile of Alzheimer’s speech. They then used the embedding to retrain the program by asking it to review dozens of transcripts from the dataset and decide whether or not each one was produced by someone who was developing Alzheimer’s.

“Our results demonstrate that the text embedding, generated by GPT-3, can be reliably used to not only detect individuals with Alzheimer’s Disease from healthy controls but also infer the subject’s cognitive testing score, both solely based on speech data,” the researchers wrote.
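In code, the embedding-plus-classifier recipe the study describes looks roughly like the sketch below: turn each transcript into a GPT-3 text embedding, then fit a small classifier on the resulting vectors. This assumes the openai and scikit-learn Python packages and an API key; the model name, toy transcripts, and labels are my own illustrative stand-ins, not the study’s actual setup.

    import openai
    from sklearn.linear_model import LogisticRegression

    openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

    def embed(text: str) -> list:
        # Map a transcript to a fixed-length vector via a GPT-3-era embedding model.
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return resp["data"][0]["embedding"]

    # Toy data only (1 = dementia group, 0 = healthy control); the study used
    # NIH-compiled speech recordings transcribed to text.
    transcripts = [
        "uh... the boy, the boy is... taking, um, the cookies",
        "The boy is taking cookies from the jar while the stool tips over.",
    ]
    labels = [1, 0]

    X = [embed(t) for t in transcripts]
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict([embed("She reaches for the cookie jar.")]))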

Apple patent for crack-resistant foldable display

Tech major Apple secured a patent on Monday for a new foldable display that is designed to be crack-resistant. The patent, granted by the United States Patent and Trademark Office (USPTO) under patent number US-20230011092-A1, is said to address one of the key concerns with foldable devices: their fragility. The information was first reported by PatentlyApple, which pointed to a possible foldable iPhone that could be in the very early stages of development. In September 2022, Apple obtained another patent for a foldable self-healing display, a technology reportedly designed to help a foldable display recover from scratches as well as dents. Apple’s latest patent for a crack-resistant foldable display uses multiple layers, a flexible substrate, and a protective layer that prevents this type of display from cracking in day-to-day use. You can read more about this here.

Microsoft invests billions of dollars in OpenAI

Microsoft has announced, “the third phase of our long-term partnership with OpenAI through a multi-year, multi-billion dollar investment to accelerate AI breakthroughs to ensure these benefits are broadly shared with the world”.

The answer to the Quiz:

c) Helion is building the world’s first fusion generators. Fusion occurs when two atomic nuclei combine to form a single, heavier nucleus, releasing energy. It is the process our sun and other stars use to make energy. Fusion power can be used to make electricity and will be a critical tool in the fight against climate change. Fusion does not produce a chain reaction, so fusion itself is not weaponizable. In November 2021, Helion Energy announced a $500 million funding round led by Sam Altman (the OpenAI CEO now famous for ChatGPT), who put $375 million into Helion, his largest investment in a startup ever. Helion Energy plans to use the $500 million to complete the construction of Polaris, its 7th-generation fusion facility, which it broke ground on in July, and which it aims to use to demonstrate net electricity production in 2024.

I hope you folks have a great weekend. And do remember, we welcome your feedback.
