China’s ‘socialist’ AI chatbots: Is this a doomed project?

Tech companies in the US have spent years trying to control the output of AI models and ensure they don’t hallucinate or spew offensive responses.

Summary

  • AI chatbots ready to spout a political ideology? Beijing's attempt to brainwash its homegrown AI models reveals a profound misunderstanding of artificial intelligence. One can’t anthropomorphize AI models beyond a point.

Beijing’s rigorous push for chatbots with core socialist values is the latest roadblock in its effort to catch up with the US in a race for artificial intelligence (AI) supremacy. It’s also a timely reminder for the world that a chatbot cannot have its own political beliefs, the same way it cannot make human decisions.

It’s easy for finger-wagging Western observers to seize on recent reporting that China is forcing companies to put their AI models through intensive political testing as further evidence that AI development will be kneecapped by the government’s censorship regime.

The arduous process adds a painstaking layer of work for tech firms, and restricting the freedom to experiment can impede innovation. The difficulty of creating AI models infused with specific values will likely hurt China’s efforts to create chatbots as sophisticated as those in the US in the short term. 

But it also exposes a broader misunderstanding of the realities of AI, despite a global arms race and a mountain of industry hype propelling its growth.

Also read: China puts power of state behind AI—and risks strangling it

Since the launch of OpenAI’s ChatGPT in late 2022 kicked off a global generative AI frenzy, there has been a tendency everywhere from the US to China to anthropomorphize this emerging technology. But treating AI models like humans, and expecting them to act that way, is a dangerous path for a technology still in its infancy. China’s misguided approach should serve as a wake-up call.

Beijing’s AI ambitions are already under severe threat from all-out US efforts to bar access to advanced semiconductors and chip-making equipment. 

But Chinese internet regulators are also trying to impose political restrictions on the outputs from homegrown AI models, ensuring their responses don’t go against Communist Party ideals or speak ill of leaders like Xi Jinping. Companies are restricting certain phrases in the training data, which can limit overall performance and the ability to spit out accurate responses.
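In rough terms, such filtering often amounts to discarding any training document that contains a banned phrase. The sketch below is purely illustrative; the blocklist and corpus are hypothetical, not any company’s actual pipeline:

```python
# Minimal sketch of blocklist-based training-data filtering.
# BLOCKLIST and the sample corpus are hypothetical illustrations,
# not any company's real pipeline.
BLOCKLIST = {"example banned phrase", "another banned phrase"}

def filter_corpus(documents):
    """Yield only documents that contain no blocklisted phrase."""
    for doc in documents:
        text = doc.lower()
        if not any(phrase in text for phrase in BLOCKLIST):
            yield doc

corpus = ["A harmless sentence.", "This has another banned phrase in it."]
print(list(filter_corpus(corpus)))  # -> ['A harmless sentence.']
```

The crudeness is the point: a whole document vanishes over a single phrase, which is one reason heavy filtering can shrink and skew the data a model learns from.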

Moreover, Chinese AI developers are already at a disadvantage. There is far more English-language text online than Chinese that can be used for training data, not even counting what is already cut off by the Great Firewall. 

The black-box nature of large language models also makes censoring outputs inherently challenging. Some Chinese AI companies are now building a separate layer onto their chatbots to replace problematic responses in real time.
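Conceptually, that layer sits between the model and the user, screening each draft reply before it is displayed. A minimal sketch, assuming a hypothetical `generate` function and a simple keyword check standing in for whatever classifiers real systems use:

```python
# Minimal sketch of a real-time output-moderation layer.
# `generate` and the keyword check are hypothetical stand-ins,
# not any vendor's actual API.
FALLBACK = "Let's talk about something else."

def is_problematic(reply: str) -> bool:
    # Real systems likely use trained classifiers; keywords stand in here.
    banned = {"forbidden topic"}
    return any(term in reply.lower() for term in banned)

def moderated_chat(generate, prompt: str) -> str:
    """Swap out a problematic reply before the user ever sees it."""
    reply = generate(prompt)
    return FALLBACK if is_problematic(reply) else reply

# Toy usage with a stub generator:
print(moderated_chat(lambda p: "Here is a forbidden topic.", "hello"))
# -> "Let's talk about something else."
```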

But it would be unwise to dismiss all this as something that will simply hobble China’s tech prowess in the long run. Beijing wants to be the global AI leader by 2030, and is throwing the entire might of the state and private sector behind this effort. The government reiterated its commitment to developing the high-tech industry during its recent Third Plenum.

And in racing to create AI their own way, Chinese developers are also forced to approach LLMs in novel ways. Their research could potentially sharpen AI tools for the harder tasks that such models have traditionally struggled with.

Also read: The global AI race for supremacy is intensifying: India must define its role

Tech companies in the US have spent years trying to control the output of AI models and ensure they don’t hallucinate or spew offensive responses (or, in the case of Elon Musk, that they aren’t too “woke”). Many tech giants are still figuring out how to implement and control these types of guardrails.

Earlier this year, Alphabet Inc’s Google paused its AI image generator after it produced historically inaccurate images that depicted people of colour in place of white historical figures. An early Microsoft AI chatbot dubbed Tay was infamously shut down in 2016 after it was exploited on Twitter and started spitting out racist and hateful comments.

Because AI models are trained on gargantuan amounts of text scraped from the internet, their responses risk perpetuating the racism, sexism and myriad other dark features baked into discourse there.

Companies like OpenAI have since made great strides in reducing inaccuracies, limiting biases and improving the overall output of chatbots—but these tools are still just machines trained on the work of humans. 

They can be re-engineered and tinkered with, or programmed not to use racial slurs or talk politics, but it’s impossible for them to grasp morals or form their own political ideologies.

China’s push to ensure chatbots toe the party line may be more extreme than the restrictions US companies are imposing on their AI tools. But these efforts from different sides of the globe reveal a profound misunderstanding of how we should collectively approach AI. 

Also read: AI can predict tipping points before they happen

The world is pouring vast sums of money and immense amounts of energy into creating conversational chatbots. Instead of trying to assign human values to bots and pouring yet more resources into making them sound human, we should start asking how they can be used to help humans. ©Bloomberg
