
2024 is the year to scale up beyond pilots, advance GenAI projects: IBM's Candy

IBM Consulting's Generative AI strategy involves leveraging AI in customer care, digital labour, and application modernization to deliver significant business outcomes.

Matthew Candy, global managing partner for Generative AI, IBM Consulting.

“Generative AI was my hobby job last year. It has become my day job since last August,” says IBM Consulting's global managing partner for Generative AI, Matthew (Matt) Candy. In an interview during his recent visit to India, Candy discussed how companies can leverage Generative AI responsibly, and the significance of scaling up pilots. Edited excerpts:

What is IBM Consulting's broad Generative AI (GenAI) strategy?

IBM Consulting comprises 160,000 people including 21,000 data scientists and AI professionals, and a generative AI CoE (centre of excellence) of over 1,000 people. We’re working in three big areas. One of them is customer care and customer experience—applying generative AI for contact centre agents to improve call handling times and deliver better customer experience.

The second big area is around digital labour—in areas such as HR (human resources) and talent acquisition. About 94% of the interactions that the 275,000-odd IBM employees have with HR take place without human intervention. That's digital first—all through an AI chat interface, with AI that sits across more than 79 different systems and 4,700 policy documents, and supports about 2,500 processes. The third area is application modernization, where we use generative AI across the development lifecycle—from business analysis, design, wireframing, coding, development, and testing to synthetic test data generation.

What’s the synergy with IBM’s broader AI vision?

Our strategy ties in with IBM's AI approach that is based on four core beliefs. One, that open technologies are very important. Second, that AI needs to be trusted, which is why it’s important to responsibly introduce this technology into organizations and govern it. Third, AI needs to be targeted at enterprises and business domains. And fourth, we believe AI is for value creators, and not just users.

Please share some examples of meaningful business outcomes for clients in these areas.

In the customer service space, we're seeing customer queries being answered with about 95% accuracy. In the marketing space, we're seeing the ability to reduce content creation costs by up to 40%. In the supply chain area, we've seen up to a 50% reduction in the cost per invoice. When you think about automating processes and improving cycle times, we see up to a 45% improvement in these areas. Also, asset-intensive companies can reduce unplanned downtime by up to about 43%.

What should CXOs consider when building a responsible generative AI framework?

About 74% of Indian companies have already made some significant investments in AI over the last couple of years, according to our recent global AI adoption study. But the barriers included lack of strategy, lack of company guidelines, and lack of AI governance and management tools. That's preventing the scaling up of pilots. So, it's very important for any organization to put in place appropriate guardrails for the AI models they intend to use. The data foundation that's put in place must address bias.

What should enterprises in India do to scale their pilots and proof-of-concepts?

The year 2024 is for scaling up generative AI projects. We have got to move beyond pilots. Scaling up is going to come from having the right strategy and roadmap in place—mapping everything back to business value and having a multi-cloud strategy. One of the biggest barriers to scaling is skills and access to skills. That's an area that IBM and other tech companies have got to continue to focus on. A lot of the work we're doing is helping create Centres of Excellence (CoEs) for our clients, which helps them infuse new skills and introduce change in their organizations. This is a 70% human challenge, a 20% technology challenge, and a 10% algorithm issue. Fundamentally, this is about change management, adoption, skills, and enablement within enterprises.

What should CXOs keep in mind when leveraging Generative AI, given that some enterprises are already mature users of AI?

Foundation models are the next evolution of a data strategy for a company. Enterprises will have to combine their unique data with these foundation models to create new sources of value. Certainly, some clients and some industries are more mature with traditional AI; banking, for instance, already has a good understanding of models, model governance, and what needs to be done. Other organizations have a lot to think about, because Generative AI is being infused into all enterprise applications. By the end of 2024, about 40% of enterprise applications will have infused generative AI, according to Gartner. So, there are decisions that enterprises will have to take, such as: “Am I going to use the chat and natural-language conversational interface within this platform, or do I layer a conversational interface and workflow atop the platform?”

How does it work in practice? Please share some examples.

I could use GPT or access Gemini from Google, etc., and access these proprietary large language models (LLMs) through an API (application programming interface) and embed them into an application. That may be fine for a set of use cases—others may need a smaller, domain-specific model that’s low on compute and cost because I want to bring my enterprise data together with that model for this use case.
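The first pattern Candy describes, embedding a proprietary LLM into an application through an API, can be sketched as below. This is an illustrative assumption, not any particular vendor's API: the endpoint URL, model name, payload schema, and response shape are placeholders, and each provider (OpenAI, Google, etc.) defines its own.

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only; real providers each
# publish their own URL, authentication scheme, and request schema.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(prompt: str, model: str = "example-llm",
                  temperature: float = 0.2) -> dict:
    """Assemble a chat-style payload for a (hypothetical) LLM API."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_llm(prompt: str, api_key: str) -> str:
    """POST the payload and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape, modelled on common chat-completion APIs.
    return body["choices"][0]["message"]["content"]
```

The same application code can later be pointed at a smaller, domain-specific model by swapping the endpoint and model name, which is what makes the API-embedding pattern attractive for the mixed use cases Candy mentions.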

For example, some banks are taking an open-source LLM and training and fine-tuning it around the regulatory complaints framework in that market, which allows the model to classify complaints. This helps complaint handlers generate responses that they can then tweak to meet regulatory requirements and frameworks. Sometimes this model may run on a public cloud. At other times, you may need an on-prem (on the premises of a company) data centre because the data and workloads need to stay there for regulatory reasons.
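A minimal sketch of the complaint-classification idea, with a stub standing in for the fine-tuned open-source model. The category labels and prompt wording are assumptions for illustration, not any bank's actual regulatory framework:

```python
# Illustrative label set; a real deployment would use the regulator's
# own complaint taxonomy for that market.
CATEGORIES = ["fees and charges", "mis-selling", "service delay", "other"]

def classification_prompt(complaint: str) -> str:
    """Build the prompt sent to the fine-tuned model."""
    return (
        "Classify the following banking complaint into exactly one of "
        f"these categories: {', '.join(CATEGORIES)}.\n"
        f"Complaint: {complaint}\n"
        "Category:"
    )

def classify(complaint: str, llm) -> str:
    """`llm` is any callable mapping a prompt string to reply text.

    The reply is normalised, and anything outside the known label set
    falls back to "other" so downstream handlers get a valid category.
    """
    answer = llm(classification_prompt(complaint)).strip().lower()
    return answer if answer in CATEGORIES else "other"
```

Keeping the model behind a plain callable also makes the cloud-versus-on-prem choice Candy raises an infrastructure decision rather than an application change.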

In this context, do you feel the need for a Chief AI officer? 

If we do appoint one, the functions of the chief AI officer could overlap with those of the chief information officer (CIO), chief technology officer (CTO), chief digital officer (CDO), chief data officer, chief marketing officer (CMO), and even the CEO.

At times, I see the CIO or CTO leading and driving (the AI function), and sometimes it’s the chief data officer performing this task. But I do think we're going to see more companies appointing chief AI officers, and the role may be like that of the CDO who typically tries to unify business and IT (information technology) to drive change and help companies adopt a digital mindset. However, organizations that already have mature business or IT functions will be perfectly capable of leading and driving that change themselves (without appointing a chief AI officer).

How should companies view the return on investment (ROI) for generative AI projects, given that these are emerging technologies?

At IBM, we use a component business model (CBM). We've got detailed sets of use cases for each domain in different industries. And we have a lot of benchmarking data too. Further, using the IBM Garage methodology allows us to map all business pain points, and build a strategy to address those pain points.

We're starting to develop some good benchmarking data from the many generative AI pilots and proof-of-concepts being implemented. But you've got to be able to identify the value pools to chase with Generative AI, and then you've got to be able to measure the ROI to go beyond the pilot to scaling it. The benchmarks, KPIs (key performance indicators) and metrics around that are going to be important in areas like customer service, which has seen a lot of traction and application of Generative AI, and where we can measure, for instance, the average handling time for contact centre agents. We can also measure NPS (net promoter score) and containment rates in bots. So, there are some mature frameworks to measure outcomes.
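The contact-centre metrics mentioned above can be computed with their conventional definitions (these are the standard formulas, not IBM-specific benchmarks):

```python
def nps(scores):
    """Net promoter score from 0-10 survey ratings.

    Promoters rate 9-10, detractors rate 0-6; NPS is the percentage of
    promoters minus the percentage of detractors.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def containment_rate(bot_resolved: int, total_contacts: int) -> float:
    """Share of contacts fully handled by the bot, with no human handover."""
    return bot_resolved / total_contacts

def avg_handling_time(seconds_per_call):
    """Mean contact-centre handling time across a list of call durations."""
    return sum(seconds_per_call) / len(seconds_per_call)
```

Tracking these before and after a pilot gives the kind of concrete baseline needed to justify scaling a generative AI deployment.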


ABOUT THE AUTHOR
Leslie D'Monte
Leslie D'Monte specialises in technology and science writing. He is passionate about digital transformation and deeptech topics including artificial intelligence (AI), big data analytics, the Internet of Things (IoT), blockchain, crypto, metaverses, quantum computing, genetics, fintech, electric vehicles, solar power and autonomous vehicles. Leslie is a Massachusetts Institute of Technology (MIT) Knight Science Journalism Fellow (2010-11), author of 'AI Rising: India's Artificial Intelligence Growth Story', co-host of the 'AI Rising' podcast, and runs the 'Tech Talk' newsletter. In his other avatar, he curates tech events and moderates panels.
Published: 03 Mar 2024, 04:34 PM IST