
Tech Titans Look to Lobby Washington on AI—In Different Directions


Elon Musk and Mark Zuckerberg may not ever hold their cage match, but they’ll take center stage on Capitol Hill in a debate over the future of artificial intelligence.

Sam Altman, chief executive of OpenAI.

WASHINGTON—Titans of Silicon Valley are descending on Washington Wednesday to brief U.S. senators on artificial intelligence, in a meeting aimed at advancing nascent efforts to regulate the new technology.

The closed-door all-senators’ session, organized by Senate Majority Leader Chuck Schumer (D., N.Y.), will feature Elon Musk, Mark Zuckerberg, Bill Gates and more than a dozen other executives and civil-society leaders.

These tech bosses are already locked in a market-driven race to roll out sophisticated artificial-intelligence systems, especially generative tools that can quickly produce humanlike outputs. The Capitol Hill gathering gives them a chance to shape the priorities of lawmakers, some of whom are racing just as fast to place guardrails on AI development.

Wednesday also might be the first time Musk and Zuckerberg find themselves in the same room following their called-off cage match.

Assuming the tech leaders keep their fisticuffs in check, their previous comments suggest they will try to pull senators in different directions.

Here is where some of the key players stand:

Sam Altman: Regulate Us

Altman is the chief executive of OpenAI, the company that kicked off an industry arms race last year when it launched the viral chatbot ChatGPT. He is among the tech leaders driven by pursuit of so-called artificial general intelligence, or AGI, a computer program that could match human reasoning.

Altman says his goal is to ensure AI benefits all of humanity, helping us become more productive and creative. He has also argued that the government must intervene to make sure AI doesn’t cause economic or geopolitical catastrophe.

In 2021, Altman proposed that Congress implement a new tax system that would impose levies on big companies and landholders. The proceeds would be paid out to Americans, ensuring citizens a minimum income in a world where AI is capable of performing more and more jobs.

More recently, Altman has asked lawmakers to consider creating a new agency that would impose safety standards on AI systems that have certain high-level capabilities, such as the potential to develop bioweapons. This idea has also been backed by Microsoft, whose views will be represented Wednesday by CEO Satya Nadella and Gates, currently an adviser to the company he founded.

Mark Zuckerberg: Open Up AI Development

Zuckerberg, who has led the social-media company Meta Platforms for nearly two decades, is trying to catch up to rivals by asserting Meta as a champion of the “open-source” approach to AI. Open-source software typically is made widely available for use, modification and sharing by the public—and there is some debate about whether Meta’s AI models live up to that spirit.

In July, Meta debuted a new model, dubbed Llama 2, which would be free for commercial use. This week, The Wall Street Journal reported that Meta was looking to build an even larger AI model that would be as capable as the most advanced system currently offered by OpenAI. Zuckerberg, the Journal reported, is personally insisting the new software be open sourced as well.

Meta’s methods cut against the approach of other companies at the Senate meeting, including OpenAI and Google, which are keeping their models under much tighter control. The companies say this allows them to put in place stronger guardrails against misuse—for instance, by programming chatbots to avoid parroting racist language or providing instructions on how to make a bomb.

Some tech-industry watchdogs have even called on Congress to stop the practice of making large, powerful AI systems open source. That hasn’t stopped Zuckerberg from embracing them.

Elon Musk: Focus on Existential Risk

For at least a decade now, Musk has tried to accomplish two seemingly opposite goals: steer the development of artificial intelligence and warn others that AI could be humanity’s greatest threat yet.

Musk was one of the earliest investors in AI research company DeepMind, and even launched a last-minute bid to purchase the company before losing out to Google. He then helped start OpenAI, but left after losing a power struggle to Altman.

Yet Musk also brings to the Senate meeting a more ominous outlook than many of his peers. This year, as generative AI tools boomed in popularity, Musk began sounding the alarm about the race to develop new AI systems. He was one of the first signatories of an open letter calling for a six-month pause in the breakneck development of powerful new AI tools, partly because the technology was advancing more quickly than experts expected and could eventually outsmart humans.

Companies “will not heed this warning, but at least it was said,” he wrote in a post on X, the social-media platform previously known as Twitter, which he owns.

His solution: to start a new company, xAI, with a goal of understanding “the true nature of the universe.”

Inioluwa Deborah Raji: It’s the Bias, Stupid

Many of the executives who will attend Schumer’s forum Wednesday will argue that AI presents a future threat that could imperil humanity. But Deb Raji has her feet firmly planted in the present.

Researchers have found that AI systems trained on historical data can perpetuate past discriminatory practices into future decisions around housing, hiring or criminal sentencing. Research also has shown that generative AI systems can produce biased images.

Raji’s work at the University of California, Berkeley, the Mozilla Foundation and elsewhere has focused on evaluating modern-day AI systems and holding their developers accountable for their harms. In 2020, she published a paper with Google’s Ethical AI team outlining how companies can better internally assess AI systems.

Meredith Stiehm: Look Out for Workers

Leaders of three unions are attending Wednesday’s meeting, representing writers and other professions that view rapid consumer adoption of AI tools as a potential threat to their livelihood—and are calling for Congress to intervene.

Stiehm is president of the Writers Guild of America West, whose members have been on strike since May. She and fellow writers have demanded that studios not use AI to replace them, forming one sticking point in the labor talks. Authors have also joined other artists in lobbying Congress to pass a law saying their work can’t be used to train large AI systems without consent or compensation.

Other union leaders at the meeting include Randi Weingarten, president of the American Federation of Teachers, and Liz Shuler, president of the AFL-CIO. Both organizations have called on Congress to ensure AI benefits workers rather than displacing them.

Weingarten has said that while the use of AI chatbots for plagiarism is top of mind for many teachers, they are also concerned that AI systems could be used to spread false information or violate users’ privacy. “The technology and the innovation will race beyond the responsibility for it—unless there are some guardrails,” she said in an interview earlier this year.

Sundar Pichai: Let Industry Lead

Pichai is CEO of Google, which has released its own line of powerful artificial-intelligence systems to rival OpenAI, Microsoft, and Meta. Google also owns DeepMind, a research lab racing OpenAI to create artificial general intelligence.

Pichai has described AI as “too important not to regulate," but Google has distanced itself from OpenAI and Microsoft by arguing against the idea of creating a new agency to license the technology. It has suggested instead that Congress look to existing agencies to oversee the application of AI in particular sectors, such as healthcare.

Google and other companies have also touted their voluntary efforts to address some of AI’s potential harms, for example by watermarking AI-generated content and testing their systems for security risks. Those were among the commitments that more than a dozen AI companies recently made to the White House.

Write to Ryan Tracy and Deepa Seetharaman.
