CCI’s AI risk report: Why India should run pilot tests before firming up regulation
AI is transforming markets faster than our laws can keep up. Overly strict rules could stifle innovation, while no oversight risks empowering AI gatekeepers. The CCI has called for self-audits, but it should try out an RBI-style sandbox to finesse its regulatory approach.
Artificial Intelligence (AI) has moved from the laboratory into the heart of commerce, with algorithms setting cab fares, recommending films to watch and increasingly determining prices in online marketplaces. For antitrust agencies, this raises a key question: Are existing tools enough, or do we need a new AI-specific competition law?
Earlier this week, the Competition Commission of India (CCI) released the report of a study designed to identify harms that AI may cause and evaluate whether the Competition Act of 2002 can address them. Its findings may shape our regulatory path.
Unlike the EU, which has enacted the world’s first comprehensive AI law, India does not have an AI statute. Instead, authorities are approaching AI in a sectoral and thematic way. Niti Aayog has issued a National Strategy for Artificial Intelligence, ‘#AIforAll,’ as well as a paper on ‘Responsible AI,’ while the ministry of electronics and information technology (MeitY) is discussing a broader AI framework, with a role in policy coordination for the IndiaAI platform.
Meanwhile, the Digital Personal Data Protection (DPDP) Act of 2023 introduces obligations that intersect with AI-driven markets. Together, these suggest that India prefers an incremental, evidence-based approach.
The CCI has focused specifically on AI’s risks to fair competition. The key is to strike a balance: mitigate risks without stifling innovation or overreaching into areas better handled by privacy, intellectual property (IP) or sectoral regulators. So, what are the risks that AI poses to fair competition?
Algorithmic collusion: Unlike classic cartels, which require humans to coordinate, pricing algorithms can tacitly align on prices without any human intervention. Traditional antitrust law hinges on proving an ‘agreement.’ If machines independently converge on higher prices, the legal concept of conspiracy becomes harder to apply.
Data concentration: Control over large data-sets is akin to controlling an essential facility, making it near-impossible for small players to compete. A few global tech firms have amassed vast troves of user data, creating ‘AI gatekeepers’ whose positions are reinforced by network effects, raising entry barriers and deepening entrenchment risks.
Regulatory overlap: AI cuts across competition, privacy, consumer protection, IP and cybersecurity. Proving collusion might require access to user-level data or algorithmic logs, which are subject to restrictions under the DPDP Act. Without cooperation mechanisms among regulators, enforcement risks inconsistency and legal conflicts.
India should test, not legislate: The EU’s AI Act is a bold law and provides a useful playbook. Its risk-tiered approach, which mandates technical documentation and logbooks for high-risk AI systems, is conceptually similar to the periodic audits that India’s DPDP Act requires of ‘significant data fiduciaries.’
India’s regulatory philosophy has long been pragmatic. Sandbox models used by the Reserve Bank of India (RBI), for instance, illustrate how regulators can allow innovation while learning from controlled experimentation. By running targeted pilots and tests, the CCI too can build its technical expertise, signal industry direction and gather evidence to shape future reforms.
CCI recommendations: The CCI’s market study broadly suggests two areas for intervention. First, it proposes rigorous AI audits. These would mandate documenting algorithms, testing for collusive behaviour and reviewing pricing practices to prevent unfair market outcomes. Second, it stresses the need to remove entry barriers and calls for improved access to essential AI infrastructure, data and computing power.
The aim is to level the playing field so that startups and smaller players can compete effectively with market incumbents. Overall, the CCI hopes to foster fair competition and greater market transparency while protecting innovation incentives, thereby ensuring that AI in India grows both responsibly and competitively.
For this, it plans to launch advocacy workshops, establish a think-tank on AI markets, strengthen its technical capacity and coordinate with other regulators.
Diversify the solution toolkit: While the CCI has made a good start, its report could have offered a more holistic set of solutions. For example, it could have proposed a competition sandbox modelled on the RBI’s fintech sandbox, under which firms would submit their algorithms for simulated market tests. This would give firms a safe environment in which to demonstrate compliance with competition law, while giving the regulator visibility into potential anti-competitive dynamics.
The regulator could also scrutinize AI acquisitions more closely. India’s deal-value threshold for CCI clearance is a step forward, but it needs to be supplemented with qualitative triggers, such as for acquisitions involving unique data-sets, critical AI intellectual property or top-tier research talent.
Heavy-handed regulation could deter investment and stifle startups, while too little oversight risks favouring global AI gatekeepers. The CCI study makes it clear that intervention is needed, but its scope and form must remain flexible. Given the rapid pace of AI innovation, rigid rules risk addressing outdated problems. Instead, proposals such as a self-audit framework highlight the need for experimentation.
India should run pilots to test audit models and early-warning systems. This will reveal what works, where risks lie and how governance can support innovation. The key takeaway from the CCI’s study is clear: for a technology as dynamic as AI, India must test first, learn fast and legislate later.
The author is a partner at Khaitan & Co. These are the author’s personal views.
