To keep on top of AI, focus on the points where it touches the outside world, writes Martin Chavez
The Alphabet director suggests drawing inspiration from the way financial markets and railways are policed
ARTIFICIAL INTELLIGENCE advances relentlessly, presenting both immense potential and profound risks. How do we promote AI innovation while effectively managing those risks?
Current thinking on how to regulate AI largely emphasises governing the models themselves. California’s now-vetoed AI Safety Bill, for example, would have held developers liable for system misuse and required a “kill switch”. Developing safe, ethical and efficient AI systems inevitably demands standards and testing. As with any tool, we need guidelines for what qualifies as acceptable.
But as Alan Turing proved mathematically nearly a century ago, there is no general method for guaranteeing the correctness of an arbitrary program. Furthermore, the advent of DeepSeek, from China, suggests the world has arrived at a tipping point where open-source models proliferate beyond any governance.
Governance must begin with a practical approach that addresses the junctions where these models interact with—and influence—the outside world.
History offers lessons. Consider the 19th-century railway boom. Society achieved safety not by regulating each train, but by managing track junctions where accidents most often occurred.
Financial markets adopted this approach more recently. The Securities and Exchange Commission’s Market Access Rule doesn’t constrain the internal workings of traders. Instead, regulators emphasise the points of contact with the market, mandating capital-adequacy checks before algorithms enter each order into the stockmarket. Similarly, the Federal Reserve doesn’t micromanage banks. Rather, it requires them to simulate their cashflow, income statement and balance-sheet nine quarters into the future, demonstrating sufficient capital to lend and make markets in severely adverse scenarios of the regulator’s choosing. In each case, governing the points of interaction proved more effective than attempting to control the internal complexity of each system.
Apply the same logic to AI. Policymakers and regulators must shift their attention to the interfaces through which AI systems and agents connect with critical infrastructure, financial markets, health care and other sensitive domains. They should design and set clear, testable standards for how AI systems interact with the real world, including stress tests, audit trails, attestations and certifications. We want as little regulation as possible, and no less.
Of course, with so many potential uses, no single approach handles all risk. For example, a chatbot might manipulate humans, or an AI system might help design a novel bioweapon. Governance at the system-interface level alone cannot manage such risks. We must also build resilience through simulation and scenario planning, much like the stress tests used to monitor banks.
Standards must sit on a continuum that matches the complexity and risk of AI systems. Rather than fixing rules around arbitrary thresholds, regulators should adjust those thresholds dynamically to reflect the probability and severity of harmful interactions with the outside world.
To truly harness the power of AI, governance must also align corporate incentives with public safety. Collaborative governance and open communication can build trust and pave the way for both safety and innovation, with fines, penalties and other mechanisms encouraging developers to prioritise safety.
The European Union’s Artificial Intelligence Act offers some ideas for interface-level standards by tying obligations to the level of risk. But the act also leaves important questions unanswered even as it overreaches. Whereas systems posing unacceptable risk are banned outright and high-risk systems face strict obligations, those deemed low-risk, generally systems meant for personal use, remain minimally regulated despite their very real influence.
As the demand for AI tools outstrips supply, America has the opportunity to lead in establishing global AI standards that promise security without stifling innovation. But it must act quickly and decisively if it is to remain competitive.
Adopting standards at the system-interface level constitutes an important first step. Over-regulation remains a big concern, but the inconsistent approach America has taken so far has hampered developers and users alike, slowing innovation with incompatible state-by-state rules. With clear, consistent and comprehensive governance, however, America could accelerate domestic innovation and become the global hub for AI.
A public-private partnership will be necessary to build a flexible, forward-thinking and resilient framework. This partnership can develop system-interface standards for AI systems and guarantee that foreign-developed technology adheres to domestic safety standards when interacting with sensitive systems.
An interface-centric approach offers a first step towards a broader plan for AI safety. In “Genesis”, a book published last year, Henry Kissinger, Craig Mundie and Eric Schmidt proposed developing an “AI Book of Laws”, informed by legal precedents, jurisprudence, scholarly commentary and human norms, and made the case for encoding the concept of human dignity into AI systems to ensure they operate according to ethical principles. By establishing practical mechanisms for controlling AI’s interaction with the real world, we can manage its immediate risks while also working in the long term to align AI with human values.
Governing AI at the point of contact with the real world offers an effective and pragmatic starting point. We must move beyond the futile desire for a perfect solution and embrace a framework that can adapt as quickly as the technology it seeks to govern. By doing so, we can harness the power of AI while managing its risks, delivering innovation together with safety.
R. Martin Chavez is a board member at Alphabet, vice-chairman of Sixth Street Partners and a former CFO of Goldman Sachs.
