
Friday, June 02, 2023
techtalk
By Leslie D'Monte

What boardrooms should know about Generative AI; Can AI make humans extinct?

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a new statement published by the Center for AI Safety. The statement was signed by leading industry officials, including OpenAI CEO Sam Altman; the “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.

On 28 April, more than a thousand people, including Elon Musk, Yoshua Bengio, Stuart Russell, Gary Marcus and Andrew Yang, called for a six-month moratorium on training systems “more powerful than GPT-4”, arguing that such systems should be developed only once the world is confident it can contain the risks. You may want to read: We must rein in the precocious Generative AI children. But how?


What boardrooms should know about Generative AI

Love it or hate it, generative artificial intelligence (AI) tools like OpenAI’s ChatGPT are impossible to ignore. These tools have captivated millions of individuals and small companies with their ability to write blogs, draw images, make short films and videos, generate software code, and even provide templates for marketing campaigns without human intervention, even as bigger companies tread this ground with a lot of caution.

The alarm bells notwithstanding, Generative AI is finding mention in global boardrooms. AI was discussed by 17% of CEOs in the March quarter, spurred by the release of ChatGPT and the discussions around its potential use cases; Generative AI specifically came up in 2.7% of all earnings calls, and conversational AI in 0.5% -- up from zero mentions in the October-December quarter, according to the latest ‘What CEOs talked about’ report by IoT Analytics, a Germany-based market insights provider.

ChatGPT application programming interfaces, or APIs (which let applications talk to each other), can help employees use their company’s search engine to access data in a more natural language format, analyze the context, and learn from the user’s search history to provide better results.
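In practice, this means calling the API from a few lines of code. Below is a minimal sketch using OpenAI’s Python library (version 0.x, current as of mid-2023); the model name, system message and query are illustrative assumptions, not taken from this newsletter:

    import openai

    openai.api_key = "YOUR_API_KEY"  # in practice, load this from an environment variable

    # Ask the model to answer an employee's question; the "system" message
    # frames it as an internal-search assistant (a hypothetical use case).
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You answer employee questions using our company's document index."},
            {"role": "user", "content": "Summarise last quarter's onboarding policy changes."},
        ],
        temperature=0.2,  # a low temperature keeps answers more deterministic
    )
    print(response.choices[0].message.content)

The same pattern extends to the HR and finance use cases below; only the system message and the data the model is grounded in change.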

Human resources (HR) tasks like onboarding, training, performance management, and employee queries and complaints can also be automated using ChatGPT. AI can also help with compliance, credit risk management, investment research, and legal document processing in the financial sector.

Tang Yu, an AI-powered virtual humanoid robot, works as the rotating CEO of NetDragon Websoft Holdings Ltd, a Hong Kong-listed mobile and online gaming developer. (Photo: China Daily)

But integrating these APIs with the business workflows of other units poses its own set of challenges for companies. Models have to be continuously monitored, re-trained, and fine-tuned to ensure they keep producing accurate output and stay up to date (do read ‘Are you familiar with these terms?’ below).

You may read more about how Indian companies cautiously embrace ChatGPT here.

According to Goldman Sachs estimates, Generative AI could raise global GDP by 7% ($7 trillion) and lift productivity growth by 1.5 percentage points over the next decade. PwC estimates that the openness to tap into the power of Generative AI will likely only continue to grow, with AI making an estimated $15.7 trillion potential contribution to the global economy by 2030.

Who’s using ChatGPT?

More Asian, Black and Hispanic adults than White adults: Only about six-in-ten US adults are familiar with OpenAI’s ChatGPT, and relatively few have tried it themselves, according to a Pew Research Center survey released this month. The survey reveals that 78% of Asian adults have heard at least a little about it, while roughly half of Hispanic or Black adults are familiar with ChatGPT. Asian adults are also more than twice as likely as adults of other races to say they have heard a lot about the chatbot, according to the survey.

More men than women: Further, men are more likely than women to have heard at least a little about ChatGPT, as are adults under 30 when compared with those 30 and older.

Whites have less fun with ChatGPT: Interestingly, a mere 14% of all US adults say they have used ChatGPT for entertainment, to learn something new, or for their work. The findings are in line with a Pew Research Center survey from 2021, which found that Americans were more likely to express concern than excitement about the increased use of AI in daily life. White adults who have heard of ChatGPT are consistently less likely than their Asian, Hispanic or Black counterparts to have used the chatbot for fun, education or work.

Age does matter: The use of ChatGPT for these purposes is also closely related to age. For example, adults under 30 who have heard of ChatGPT are far more likely than those 65 and older to have used the chatbot for entertainment (31% vs. 4%). In addition, younger adults tend to find ChatGPT more useful than older adults. About four-in-ten adults under 50 who have used it (38%) say it was extremely or very useful, whereas only about a quarter of users 50 and older (24%) say the same.

You may read the full report here.

How tech enables policy and policy enables tech

When the internet was introduced to the Indian public on 15 August 1995 (it was available only to academic institutions and research bodies from 1986 till 1995), there was a lot of excitement as individuals and businesses woke up to the potential of an unregulated online world. The government did catch up and introduce regulations, but the Indian IT Act was notified only in October 2000, when there were just 5.5 million netizens and just one type of intermediary. Times have changed radically. Today, there are multiple intermediaries: e-commerce, digital media, social media, OTT, gaming, AI, etc. India is estimated to have 900 million online users by 2025 (a good portion of them in the 10-29 age bracket) and 1 billion by 2030.

And there are, as MeitY itself acknowledges, new and more complex forms of online harm, including catfishing (luring someone into a relationship by means of a fictional online persona), doxxing (searching for and publishing private information about an individual, typically with malicious intent), cyberstalking, cyber-trolling and online gaslighting (manipulating another person into doubting their perceptions, experiences or understanding of events), coupled with the spread of hate speech, disinformation and fake news -- not to forget deepfakes, real-time voice cloning and the like with the advent of Generative AI.


The fact is that tech advancements such as Web3, blockchain, the metaverse, AI, generative AI, quantum computing and gene editing can drive growth, but they can also rattle individuals, companies and governments with concerns around privacy, security, job losses and AGI (artificial general intelligence). On the privacy front, for instance, India is yet to enact a data protection law for its 1.4 billion people in this digital age.

As Stewart Brand, an author and editor of the Whole Earth Catalog, put it: “Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.”

In this context, Mint conducted a panel aptly titled ‘How can tech enable policy and policy enable tech’, which focused on how emerging technologies keep pushing governments to tweak existing policies or even introduce new ones -- and on how good policy frameworks, in turn, help newer technologies blossom while preventing them from having an unbridled run: remaining inclusive, carrying fewer biases, using interoperable standards, and so on.

Technology regulation is hard, and governments struggle to stay abreast of new technology. As a result, the regulations governing specific sectors often appear to be a patchwork of rules, guidelines and notifications rather than a coherent framework with a vision. Such policies end up being more reactive than proactive, which sometimes results in shunning or banning technologies, ostensibly to protect individuals -- bitcoin, cryptocurrencies and online gaming being some cases in point.

On the positive side, governments over the years have built the foundation for a true Digital India: we have an exemplary India Stack (Aadhaar-enabled services, UPI, the Account Aggregator framework and ONDC, to name a few), with the Health Stack in the works. India is also introducing the Digital India Act to supersede the IT Act, 2000, the Digital Personal Data Protection Bill may soon see the light of day, and the country is close to finalizing its AI policy.

During the recent Mint Public Policy Summit in New Delhi, panellists discussed four broad areas: How effectively has India framed a good tech policy? What challenges does a country like India face in this context? What can we learn from countries like the US, the European Union, Singapore, etc.? Are India’s policymakers implementing concepts like Design Thinking, Systems Thinking, Game Theory and Theory of Change?

You may watch the entire discussion here.

Regulating AI in particular: Governments across the world are moving to regulate AI. Efforts include the European Union’s AI Act, Canada’s Artificial Intelligence and Data Act (AIDA), the United States’ Blueprint for an AI Bill of Rights along with various state initiatives, and China’s algorithm transparency rules and measures to promote its AI industry.

Are you familiar with these terms?

Fine-tuning is the process of adapting a pre-trained foundation model to perform better in a specific task. This entails a relatively short training period on a labelled data set, which is much smaller than the data set the model was initially trained on. This additional training allows the model to learn and adapt to the nuances, terminology, and specific patterns found in the smaller data set.
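To make the idea concrete, here is a minimal sketch of fine-tuning using the Hugging Face transformers library -- an assumption for illustration, since the source names no tool; the model and data set are likewise illustrative:

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    # Start from a pre-trained foundation model...
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    # ...and a small labelled data set, far smaller than the pre-training corpus.
    dataset = load_dataset("imdb", split="train[:2000]")
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
        batched=True)

    # A relatively short additional training run adapts the model to the task.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
        train_dataset=dataset,
    )
    trainer.train()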

Foundation models (FM) are deep learning models trained on vast quantities of unstructured, unlabeled data that can be used for a wide range of tasks out of the box or adapted to specific tasks through fine-tuning. These models include GPT-4, PaLM, DALL·E 2, and Stable Diffusion.

Large language models (LLMs) comprise a class of foundation models that can process massive amounts of unstructured text and learn the relationships between words or portions of words, known as tokens. This enables LLMs to generate natural language text, performing tasks such as summarization or knowledge extraction. GPT-4 (which underlies ChatGPT) and LaMDA (the model behind Bard) are examples of LLMs.
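A quick way to see tokens in action is OpenAI’s tiktoken library (an illustrative choice, not mentioned in the source):

    import tiktoken

    # cl100k_base is the encoding used by GPT-4 and other ChatGPT-era models.
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("Generative AI is finding mention in global boardrooms.")
    print(ids)                             # the token ids the model actually sees
    print([enc.decode([i]) for i in ids])  # the corresponding words and sub-words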

MLOps: It refers to the engineering patterns and practices to scale and sustain AI and ML. It encompasses a set of practices that span the full ML life cycle (data management, development, deployment, and live operations). Many of these practices are now enabled or optimized by supporting software (tools that help to standardize, streamline, or automate tasks).

Prompt engineering: It refers to the process of designing, refining, and optimizing input prompts to guide a generative AI model toward producing desired (that is, accurate) outputs.
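As a toy illustration (the wording is hypothetical), compare a vague prompt with a refined one that pins down role, audience, length and format:

    # A vague prompt leaves the model to guess audience, tone and length.
    vague_prompt = "Write about our product."

    # A refined prompt constrains the output, guiding the model toward the
    # desired result -- the essence of prompt engineering.
    refined_prompt = (
        "You are a marketing copywriter. Write a 50-word blurb for a budget "
        "smartphone aimed at students. Use a friendly tone, avoid jargon, "
        "and end with a call to action."
    )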

Structured data: Tabular data (for example, organized in tables, databases, or spreadsheets) that can be used to train some machine learning models effectively.

Unstructured data: Data that lack a consistent format or structure (for example, text, images, and audio files) and typically require more advanced techniques to extract insights.
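A small sketch of the contrast, using pandas with made-up sample data:

    import pandas as pd

    # Structured: tabular rows and columns, usable by classic ML models as-is.
    table = pd.DataFrame({"age": [34, 41], "salary": [52000, 67000], "churned": [0, 1]})

    # Unstructured: free text with no fixed schema; it needs techniques such as
    # tokenization or embeddings before a model can learn from it.
    reviews = ["Loved the battery life!", "The screen cracked within a week."]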

Source: ‘What every CEO should know about generative AI’, McKinsey

Hope you folks have a great weekend. Your feedback will be much appreciated.
