GenAI has a killer app. It's coding, says Databricks AI head Naveen Rao
Summary
- Coding is a sub-class of a much larger class of uses around true design automation, Rao said in an interview.
- With many coding tasks getting automated, the ability to innovate in product design and create unique user experiences will become even more valuable.
Unlike many experts who are still awaiting a killer app in the field of artificial intelligence (AI), Naveen Rao, vice-president of generative artificial intelligence (GenAI) at Databricks, says the world already has one -- it’s coding.
"We're seeing a lot of developers actually latch onto this, especially younger developers. LLMs (large language models) have also been trained on code (besides text, images, audio, video, etc.), and we're already seeing design automation happening through coding assistance. It (coding) is a sub-class of a much larger class of uses around true design automation," Rao told Mint in a recent video interview from his San Diego office.
Rao asserts that with many coding tasks getting automated, the ability to innovate in product design and create unique user experiences will become even more valuable.
"You can now create apps simply by describing them in English. As a result, the value of translating a design idea into an app has diminished because much of that process is now automated," he says.
Being a developer will mean using AI tools effectively, says Rao, adding that the focus will shift to understanding why certain apps succeed while others don’t.
Rao underscores, though, that “it will take 3-5 years to engineer these systems to be reliable and deterministic enough to be able to use them in core engineering." The reason is that GenAI still struggles with “hallucinations" (incorrect or misleading results) and lacks true reasoning capabilities despite claims to the contrary by some big tech companies.
"Current LLMs primarily pattern-match based on probabilities, not reason. Unlike humans or even animals that learn through a cycle of action and feedback, LLMs don’t engage in causal, real-world learning. While advancements, like the step-by-step reasoning introduced by OpenAI, are promising, there’s a long way to go," elaborates Rao, who has a degree from Duke University in computer science and a PhD in computational neuroscience from Brown University.
He is also a serial entrepreneur, having founded AI companies Nervana Systems, which Intel Corp. acquired in 2016 for about $400 million, and MosaicML, which Databricks acquired last July for $1.3 billion.
What keeps Rao up at night is the gap between AI’s potential and its current limitations, particularly in reasoning. According to him, AI models like DeepMind's AlphaZero, which hypothesize and learn through self-play in structured environments, provide a glimpse of what's possible, “but we’re far from applying that in the real world."
“We need to better understand how to enable models to reason and learn in dynamic environments," says Rao.
This push for reasoning, however, doesn’t necessarily demand greater computing power, according to him. Instead, the shift is towards smaller, high-quality datasets and more targeted tuning, which may not require the massive compute resources associated with training larger models.
Rao says this transition from batch offline training to online learning could transform how we approach AI development going forward.
Why software won’t eat the AI world
Rao also insists that software will not eat the AI world, a nod to Marc Andreessen, co-founder of US VC firm Andreessen Horowitz, and his famous 2011 essay, "Why software is eating the world". In a 30 August thread on X (formerly Twitter), Rao argued that the "...fundamental balance of compute and software is different with AI...In light of @nvidia's strong growth and recent earnings, clearly hardware is an essential ingredient to the current wave of AI…"
Rao says he's more “excited about the ongoing evolution of hardware, which continues to drive efficiency and lower costs." He cites the example of the human brain, which operates on just 20 watts of energy, “showcasing how far we still are from creating AI systems as efficient and advanced." He adds that there's significant progress being made in how AI systems interact with hardware, improving latency, cost, and accuracy.
Databricks, for instance, is exploring “several exciting applications of AI and GenAI, particularly in task automation and creative processes," according to Rao. One area of focus is automating tasks like answering human resources (HR) queries or searching through company manuals, where “you can tolerate some error."
Another innovative use involves training LLMs to replicate a specific writing style, aiding news organisations by speeding up content creation. Databricks also supports the use of LLMs in scientific research, such as drug discovery, where AI assists in analysing protein interactions and advancing new drug development.
Databricks has been recognised as a leader in the 2024 Gartner Magic Quadrant for data science and machine learning platforms, "The Forrester Wave: Cloud Data Pipelines, Q4 2023", and the 2024 IDC MarketScape for worldwide analytic stream processing.
Key customers using Databricks' data intelligence platform to streamline data, AI, and analytics processes include Adobe, Aditya Birla Fashion & Retail, Mercedes-Benz Tech Innovation, Nasdaq, Air India, Parle, MakeMyTrip, Meesho, Tata Steel, and Shell.
Data is the AI oil
Databricks' DBRX model, for instance, can turn raw data into a fully trained, fine-tuned model. Databricks, says Rao, is now focusing on "compound AI systems," which combine multiple AI models, including open-source and proprietary ones, to create advanced solutions. Using this approach, Databricks helped FactSet, a financial software provider, improve its query accuracy and efficiency.
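In practice, a compound AI system of the kind Rao describes often starts as a router that sends each query to whichever model suits it best. A minimal sketch in Python (the model functions and routing heuristic here are hypothetical placeholders, not Databricks or FactSet APIs):

```python
# Minimal sketch of a "compound AI system": route each query to the
# model best suited for it. All functions here are stand-ins.

def open_source_model(query: str) -> str:
    # Stand-in for a cheap open-source LLM call
    return f"[open-source answer to: {query}]"

def proprietary_model(query: str) -> str:
    # Stand-in for a more capable, costlier proprietary LLM call
    return f"[proprietary answer to: {query}]"

def route(query: str) -> str:
    # Toy heuristic router: reserve the expensive model for queries
    # that look finance-specific; send everything else to the cheap one.
    if any(tok in query.lower() for tok in ("revenue", "price", "ratio")):
        return proprietary_model(query)
    return open_source_model(query)

print(route("What was the revenue last quarter?"))
print(route("Summarise this filing in plain English."))
```

Real systems replace the keyword heuristic with a classifier or a lightweight LLM, but the shape, several models behind one entry point, is the same.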
Data is an essential ingredient in making AI truly add economic value. So maybe now the mantra is: “Data is eating the world through AI", says Rao.
That said, many companies struggle to bridge the gap between data readiness and effective AI implementation. A key challenge, says Rao, is to understand the economics of AI, which differs significantly from traditional software models like Software-as-a-Service (SaaS), where many applications can run on a single piece of hardware.
AI models, on the other hand, require dedicated physical infrastructure for each additional user. This, according to Rao, implies that scaling AI would involve costly hardware investments, resulting in lower gross margins (often below 20%) as compared to SaaS. On the development side, rising costs make profitability difficult, particularly as larger companies continue to raise capital and operate at a loss.
In this context, the key metric for CXOs when integrating AI is “defining clear success criteria." In AI, especially with custom LLMs, this involves creating an evaluation system, much like an exam, to measure the performance of the AI system, says Rao.
He adds that Databricks offers various tools for optimisation, such as re-ranking and fine-tuning models, which help improve performance. Establishing and refining these metrics is crucial for successful AI deployment.
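The "exam-like" evaluation Rao describes can be as simple as a labelled test set scored against the system's answers. A minimal sketch (the model function and questions are illustrative placeholders; the grading rule is a toy exact-match check, not a Databricks tool):

```python
# Toy "exam" for an AI system: a labelled test set and an accuracy score.
# model_answer is a hypothetical stand-in for the deployed system.

def model_answer(question: str) -> str:
    canned = {
        "What is the baggage allowance?": "23 kg",
        "How long do refunds take?": "7 days",
    }
    return canned.get(question, "I don't know")

# Each entry pairs a question with the expected answer, like an answer key.
exam = [
    ("What is the baggage allowance?", "23 kg"),
    ("How long do refunds take?", "7 days"),
    ("Can I change my seat online?", "Yes"),
]

correct = sum(model_answer(q) == expected for q, expected in exam)
accuracy = correct / len(exam)
print(f"accuracy: {accuracy:.0%}")  # → accuracy: 67%
```

Production evaluation typically swaps exact match for fuzzier grading (semantic similarity, or an LLM acting as judge), but the structure of the "exam" stays the same.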
He cites the example of Ola's Krutrim, a Databricks customer that built its own AI model on the platform rather than relying on pre-existing models. Other customers, such as Freshworks and Air India, are using custom LLMs and compound AI systems to automate tasks such as chatbots that answer queries about baggage policies or refunds, according to Rao.
Ola's model is particularly notable, according to him, because of India's linguistic diversity, where Hindi, English, and regional languages are mixed in daily communication.
New skill sets
That said, Rao agrees that AI will have a deep impact on jobs, especially given the increasing use of copilots and fully autonomous AI agentic systems in enterprises. Autonomous AI agents, or so-called 'agentic AI' systems, are AI models that can achieve specific goals without human intervention.
It's clear we must decide how humans remain integral to the process, says Rao. Databricks is working on this by developing tools that help customers build high-quality AI agents.
These agents go beyond simple tasks, automating complex processes using customer data to drive value. However, enterprise customers demand transparency, auditability, and security, making this a challenging area, according to Rao.
He adds, "AI will significantly change jobs, much like other technological advances have in the past. As AI advances, the key skills will include data engineering, system orchestration, product design and AI tool usage."
Balancing innovation with responsible AI is also crucial, particularly regarding data privacy, bias, and transparency, Rao acknowledges. Databricks, he adds, excels in governance, with its Unity Catalog ensuring strict security across data and models.
He elaborates that while bias is application-specific and difficult to automate, Databricks offers filtering tools that allow companies to manage inputs and outputs, ensuring privacy, safety, and content moderation. However, these tools are designed to be flexible and customizable to suit various customer needs.
Do we need a Chief AI Officer?
Given the complexity of the field, do enterprises need a Chief AI Officer, especially when many large organisations already have senior executive roles, such as chief information officer (CIO), chief technology officer (CTO), chief data officer, chief digital officer, and even chief marketing officer (CMO), that oversee many AI functions?
Rao agrees that a separate role may be redundant, pointing out that companies reacted to the AI boom by rushing to integrate AI and hiring for this role without clear boundaries, leading to conflicts over budgets and responsibilities. According to him, a more practical solution may be combining data and AI into a Chief Data and AI Officer role, separating these duties from the broader productivity-focused remit of a CIO.
Last, but not least, Rao says that Artificial General Intelligence (AGI) is often misinterpreted. Some define it by how much human productivity it can replace, but true AGI would need to interact with the world, learn from its actions, and adapt, something far beyond current technology, he explains. He suggests that while AGI remains a distant goal, companies should focus on how AI can improve business processes and customer experiences today.