How managers and AI achieve better decisions together
True success requires understanding exactly where human insight enhances AI outputs; replacement is far too simple a solution
Consider the following statistics about AI’s growing influence: Microsoft is forecast to invest $88 billion in capital expenditure this year, more than India’s entire defence budget of $79 billion. If the Mag 7 companies (Microsoft, Tesla, Amazon, Alphabet, Nvidia, Meta and Apple) formed a country, their combined market value would rival the GDP of the world’s third-largest economy. To call AI’s impact “profound” would be an understatement.
AI is shaping how organisations operate and compete. For managers, this shift is not just about mastering new tools; it’s about redefining their role in decision-making. When an MBA student once asked me what keeps me awake at night, my answer was simple: the way AI will change management. Business leaders, educators and policymakers alike must now grapple with a core challenge—how to prepare managers to lead in an AI-enabled world.
Managers may need to transition from being sole decision-makers to becoming decision-augmenters, with AI as the backbone. This transformation, however, is easier said than done. Two major challenges stand in the way. The first is psychological. Many managers are reluctant to surrender decision-making authority to technology. In one large US department store I worked with, the CEO invested heavily in AI software to optimise labour scheduling across hundreds of locations. Yet store managers overrode nearly half of the tool’s recommendations, reverting to familiar methods. Millions of dollars in potential value were lost—not because the AI was ineffective, but because its outputs were never fully embraced. This phenomenon is so common that academics have coined a term for it—algorithm aversion, or the tendency to distrust, avoid or reject algorithmic decisions, even when those algorithms consistently outperform human judgement.
The second challenge is structural: even when managers are willing to work with AI, it is often unclear how they should do so. Simply stating that humans should be “in the loop” is incomplete. The real question is how to blend human judgement with algorithmic precision in a way that consistently delivers better outcomes than either could achieve alone.
Favouring AI over human judgement without safeguards can backfire. Consider an Indian retail chain that introduced an AI-driven promotion system. A loyal customer, enticed by a voucher, purchased goods worth ₹19,998—just ₹2 short of the ₹20,000 needed for a discount. Bound by the system’s rules, the sales associate couldn’t override it and suggested the customer purchase another item. The customer walked away, vowing never to return. Here, the rigid application of technology—and the absence of empowered human intervention—turned a loyalist into a lost customer.
Even high-profile AI-first companies have stumbled. Take Stitch Fix, the US-based clothing startup lauded for combining AI with human stylists to deliver personalised designs. Initially, its AI-human model fuelled rapid growth and a soaring stock price. But when management mandated that stylists follow AI recommendations, customer satisfaction dropped, the company’s market value fell, and accountability blurred. Eventually, Stitch Fix clarified that human stylists—not algorithms—were ultimately responsible for the final decisions, even featuring their photos to reassure customers.
My research over the last decade has focused on the human-AI interface: how to integrate human judgement with AI precision. At a large spare-parts retailer I worked with, merchandising managers overrode more than 70% of the recommendations generated by a machine-learning tool. The data science team insisted the tool’s outputs were superior; the managers were equally convinced the system was flawed. The vice-president of merchandising, caught between these opposing views, lacked evidence to decide whether overrides helped or hurt performance. We ran a nine-month experiment to find out. The results were revealing: overall, manager overrides reduced profits by nearly 5%. Yet for new products, those without historical demand data, managerial overrides improved profitability by 20%, even as they hurt profits for older products. The reason was intuitive: older products had a long demand history, so the AI algorithms could accurately predict future sales from historical trends, while new products lacked the data the algorithms needed to identify any trend, leaving room for managers’ judgement to add value. The lesson is clear: the issue is not whether managers should override AI, but under what conditions their judgement adds value.
In a follow-up project, to test where human judgement makes the most difference, we allowed managers to adjust only the machine-learning tool’s forecasts, not the decisions the algorithms subsequently derived from them. Profits rose across the board, for both new and old products. Why was human judgement valuable for improving forecasts but not for the decisions themselves? Because human judgement is critical for interpreting context beyond the data. Humans are particularly adept at judging how qualitative factors such as emerging technologies, new product introductions, and mergers and acquisitions will affect future demand. Machines, on the other hand, are better at crunching quantitative data with sophisticated algorithms to turn a forecast into an optimal decision. Our finding echoes what Demis Hassabis, CEO of Google DeepMind and 2024 Nobel Prize winner, has noted: replacing human judgement with AI is far from simple. The key lies in understanding the conditions where human insight enhances AI outputs and designing systems to enable that synergy.
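The division of labour found in the follow-up study can be sketched in a few lines of code. This is purely illustrative: the function names, the toy newsvendor-style decision rule, and all numbers are hypothetical assumptions, not the actual system used in the research.

```python
# Sketch of the protocol: the manager adjusts the *forecast*,
# while the ordering *decision* stays fully algorithmic.
# All names and numbers are hypothetical, for illustration only.

def algorithmic_decision(forecast: float, margin: float, cost: float) -> float:
    """Toy newsvendor-style rule: scale the forecast up by the
    critical fractile, margin / (margin + cost)."""
    critical_fractile = margin / (margin + cost)
    return forecast * (1 + critical_fractile)

def human_ai_protocol(ai_forecast: float, manager_multiplier: float,
                      margin: float, cost: float) -> float:
    # Step 1: the manager overlays qualitative context the data cannot
    # capture (a rival's launch, an upcoming merger) as a forecast
    # adjustment.
    adjusted_forecast = ai_forecast * manager_multiplier
    # Step 2: the decision itself stays algorithmic; the manager does
    # not override the order quantity directly.
    return algorithmic_decision(adjusted_forecast, margin, cost)

# Example: AI forecasts 100 units for a new product; the manager expects
# a 20% lift from a marketing push the model has not seen.
order = human_ai_protocol(ai_forecast=100.0, manager_multiplier=1.2,
                          margin=5.0, cost=5.0)
print(order)  # 120 * 1.5 = 180.0
```

The design choice the sketch highlights is the clean boundary: human judgement enters exactly once, at the forecast stage, so accountability for the final decision remains with the algorithm and the protocol, not with ad hoc overrides.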
For Indian corporate leaders, the lesson is urgent. AI will not “slot in” neatly to existing structures: it demands new protocols for when and how humans can overrule machines, as well as targeted training so managers can collaborate with AI rather than compete against it. Ignore these challenges, and organisations risk losing both value and trust. Address them thoughtfully, and India’s managers can harness AI’s immense power without sacrificing the uniquely human judgement that still defines great leadership.
Defining the human-AI protocol
1. ‘Algorithm aversion’ is costly: Many managers are reluctant to trust AI. This psychological resistance causes organisations to lose millions by overriding effective system recommendations.
2. Humans augment: Managers must shift from being sole decision-makers to “decision-augmenters”, using AI as a sophisticated tool.
3. Rigidity destroys trust: Over-relying on rigid AI rules without human safeguards can alienate customers.
4. The new protocol: Leaders must develop new protocols for when and how humans should overrule machines.
Saravanan Kesavan is the dean and professor of operations, BITSoM (BITS School of Management), Mumbai.
