Honey, I shrunk AI: the latest in tech takeover

Small language models can be as small as 0.1% of the size of LLMs.

Summary

  • Turns out, shrinking AI models is what Big Tech is doing to reach every user in the world.

This month, Meta unveiled its new AI model, Llama-3, and Microsoft introduced Phi-3. The base version of each is tiny compared with large language models (LLMs). Turns out, shrinking AI models is what Big Tech is doing to reach every user in the world.

Why are firms going small with AI models?

Small language models (SLMs) can be as small as 0.1% of the size of LLMs. For instance, Google's Gemini Nano-1 uses 1.8 billion parameters, compared with the 1.75 trillion parameters reportedly used by OpenAI's GPT-4. This has many advantages: companies building such models need smaller, more specific datasets, which are easier to obtain, and the models can be trained on less powerful computers, bringing costs down. For users, an average smartphone can run and process such models locally, saving the cost of cloud computing. This makes SLMs more marketable, and thus more directly monetizable, than LLMs.
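Taking the parameter counts above at face value (GPT-4's figure is an unconfirmed industry estimate), the size gap can be sketched with a few lines of arithmetic. The memory figures assume 16-bit (2-byte) weights, which is an assumption on our part; deployed on-device models are often quantized further, to 8-bit or 4-bit.

```python
# Rough size comparison using the parameter counts cited above.
GEMINI_NANO_1 = 1.8e9   # parameters (Google's Gemini Nano-1, as reported)
GPT_4 = 1.75e12         # parameters (OpenAI's GPT-4, unconfirmed estimate)

ratio = GEMINI_NANO_1 / GPT_4
print(f"SLM/LLM size ratio: {ratio:.2%}")  # ~0.10%, the "0.1%" figure above

# Assumed 16-bit weights: 2 bytes per parameter.
bytes_per_param = 2
print(f"Gemini Nano-1 weights: ~{GEMINI_NANO_1 * bytes_per_param / 1e9:.1f} GB")
print(f"GPT-4 weights: ~{GPT_4 * bytes_per_param / 1e12:.1f} TB")
```

At roughly 3.6 GB under this assumption, the smaller model fits in a phone's memory; the larger one, at several terabytes, plainly does not.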

Why do we need large models?

The biggest advantage of LLMs is their breadth, which makes them general-purpose in nature. The difference is similar to that between a general search engine like Google and a search feature built into a single webpage. SLMs cannot be as versatile, which means each topic or category would need its own model to operate efficiently. LLMs are also typically more capable, since they are built and trained over longer periods and have more data to cross-reference for complex queries. While SLMs are good for purpose-built use cases, LLMs remain crucial for foundational purposes, such as high-performance research.


Are companies already using small models?

Apart from Meta and Microsoft, Salesforce has introduced XGen-7b, which uses 7 billion parameters and is used to generate code. US-based tech firm Hugging Face also has an SLM, Zephyr, designed specifically for conversations. Google's Gemini Nano models power transcription and summary features in smartphones.

How would this change our gadgets?

The greatest benefit of SLMs is that they need far less computing power, which means smartphones, laptops and devices such as smart speakers and appliances will be able to run them. That could change the average mobile application: transcription of voice calls and recordings can become mainstream, while voice interfaces in smart speakers and other devices can become better at conversing. Even photo editing can use AI to create backgrounds or remove objects, features Samsung and Google already offer on their phones.


Does this matter to companies?

Yes. Big Tech, which builds most of these AI models, can now create niche models for a wide variety of applications and industries, and thereby monetize generative AI, which has so far been a very expensive field. Developers will gain access to a wider range of features, potentially sparking a race to be early movers in bringing generative AI to the masses. Companies such as Nvidia may suffer, since running AI models locally on phones and laptops could reduce the mainstream need for cloud and GPU infrastructure.
