During his visit to India, Sam Altman once again called for governments worldwide to regulate artificial intelligence (AI). At first glance, this proposition may seem paradoxical. Corporations generally prefer a laissez-faire approach from governments, favouring minimal regulations that allow businesses maximum freedom. After all, more regulation often translates to more bureaucratic oversight, increased compliance costs and potential constraints on innovation. Innovators might be discouraged from exploring uncharted territories for fear of non-compliance or burdensome regulatory processes.
On the other hand, bureaucrats might welcome such advocacy from influential tech-industry figures like Altman. As Max Weber detailed in Economy and Society: An Outline of Interpretive Sociology, bureaucracy at its core seeks to create order, uniformity and predictability. A regulated AI landscape may be exactly the rational, systematic environment that traditional bureaucratic systems are comfortable operating within.
As we stand at the frontier of the AI revolution, emerging technology ventures, AI startups among them, grapple with the ‘liability of newness.’ The hurdles these ventures face are formidable: securing the resources needed to stay afloat and establishing legitimacy among diverse audiences, including consumers, regulators and government bodies. Carving out a distinctive identity within a convoluted industry structure opens another front.
Moreover, the ambiguity of the market structure, the underdog’s fight against large established companies for a voice in policymaking, the looming threat of competition from powerful incumbents and resistance from public-policy stakeholders all combine into a veritable storm.
However, Altman’s stance, which could easily be misread as counter-intuitive, may instead be a forward-thinking one. Regulation, even if it slows the pace of innovation and raises barriers to entry, confers legitimacy on the industry and quells uncertainty.
The powerful and unpredictable societal impact of AI demands greater regulatory involvement, and governance systems must adapt swiftly to the pace of AI development to mitigate unforeseen consequences. AI’s capacity to manipulate language and alter beliefs is growing, with influence that stretches from politics to personal relationships. Given its potential misuse for misinformation and its risk of perpetuating bias, effective regulation is crucial. Designing such regulation, however, is a complex and time-consuming task that faces five inherent challenges.
First, AI’s fast-paced, constantly evolving nature often outstrips legislative processes, and the rapid advance of generative AI systems in particular leaves regulatory bodies scrambling to catch up. These systems’ capacity to generate convincing yet fictional text and images introduces new layers of complexity that make regulation even harder.
Second, even defining AI remains elusive. The term spans a broad spectrum, from straightforward automation algorithms to complex machine-learning models, and the lack of a universally accepted definition complicates efforts to set clear regulatory parameters, hindering the development of precise and relevant rules.
Third, AI’s diverse nature defies a blanket regulatory approach, risking over-regulation in some areas and under-regulation in others. Generative language models, for instance, need a different regulatory touch than systems that could imperil infrastructure security or human lives.
Fourth, effective AI regulation needs international consensus, and achieving global standardization is no small feat; varied regulatory approaches and differing stances among countries stand in the way. The EU and the US, for example, take contrasting approaches. The EU is proactive, aiming to create a strong, enduring framework that protects individual rights and data privacy, mitigates risks and promotes AI adoption; its approach blends specific and general regulations, addresses cybersecurity concerns and encourages innovation through regulatory sandboxes. The US, meanwhile, favours a decentralized approach, assigning responsibilities to specific federal agencies to avoid overregulation and promote innovation.
Fifth, we cannot let AI players regulate themselves or become pseudo-regulators; AI startups involved in the creation of deepfakes are a case in point. Self-regulation raises issues of accountability and transparency. Though it may foster flexibility and innovation, it is not bulletproof: studies show it can breed oversight gaps that allow unethical practices or biases to take hold.
India should consider a balanced approach to AI regulation, learning from both the EU and US models. Like the EU, India needs a robust framework that safeguards individual rights, mitigates risks and ensures data protection without impeding AI development. Embracing aspects of the decentralized US model can help avoid overregulation and encourage innovation, letting specific sectors address AI-related issues according to their unique needs.