We Want to Regulate AI, But Do Not Know How

The setting will be stunning: a 19th-century estate north of London that, during World War II, was the wartime home of Alan Turing and his code-breaking colleagues, as well as the birthplace of the first programmable digital computer. The participants will include a select group of 100 global leaders and technology executives. Their mission will revolve around a momentous question: how to guard against the unchecked misuse of artificial intelligence and prevent it from becoming a threat to humanity.

The “AI Safety Summit,” hosted by the British government on November 1st and 2nd at Bletchley Park, is poised to leave its mark in the annals of history. It may well be remembered as the first instance when influential figures from around the world gathered to earnestly deliberate on the future of a technology that has the potential to reshape our world. As Jonathan Black, one of the organizers, pointed out, unlike other major policy debates such as climate change, this one comes with a strong sense of goodwill but, as yet, no agreement on what the best solutions would be.

Numerous efforts to regulate AI are underway. In Brussels, negotiations reached a critical stage on October 25th as officials worked on finalizing the European Union’s ambitious AI act by year-end. In the days leading up to the British summit or shortly thereafter, the White House is expected to issue an executive order regarding AI. This autumn, the G7, a group of wealthy democracies, will begin drafting a code of conduct for AI companies. On October 18th, China unveiled a “Global AI Governance Initiative.”

The momentum behind these efforts is fueled by an unusual political economy. The incentives to take action, and to do so collectively, are substantial. For starters, AI is a truly global technology. Large language models (LLMs), which power incredibly human-like services such as ChatGPT, are easily accessible and can be run on standard laptops. Tightening AI regulations in some countries while allowing them to remain lax in others serves little purpose. Public opinion appears to support such measures as well: more than half of Americans are more concerned than excited about the use of AI, according to polling by the Pew Research Center.

The Beijing effect

Competition among regulators is intensifying the sense of urgency. Europe’s AI act, in part, aims to solidify the EU’s role as a leader in setting global digital standards. The White House is keen on preventing what is often referred to as the “Brussels effect.” Neither the EU nor the US wants to be outdone by China, which has already enacted several AI laws. The British government’s decision to invite China to the summit drew criticism, even though without China’s involvement any regulatory framework would lack true global reach. (China may well participate, even if its interest lies more in protecting the Communist Party than in protecting humanity.)

Surprisingly, another driving force behind AI rule-making diplomacy is the AI model developers themselves. Historically, the technology industry largely resisted regulation, but now tech giants like Alphabet and Microsoft, along with AI innovators such as Anthropic and OpenAI, the creator of ChatGPT, are advocating for it. These companies worry that unchecked competition could push firms to act recklessly, for example by releasing models that could easily be abused or that begin to develop minds of their own, creating trouble for everyone.

In essence, the willingness to act is present, but what’s missing is a consensus on the problems that need governance, let alone how to govern them. As Henry Farrell of Johns Hopkins University pointed out, three critical debates stand out: What should the world be concerned about? What should the rules target? How should they be enforced?

Start with the objectives of regulation. Setting them is challenging because AI is evolving rapidly. Innovations occur almost daily, and even the developers of LLMs cannot definitively predict what their models will be capable of. This underscores the importance of establishing evaluation methods (“evals”) to assess a model’s potential risks, which remains more of an art than a science. Without such evals, it would be difficult to determine whether a model complies with any rules.

Tech companies may support regulation, but they prefer it to be specific and focused on extreme risks. At a Senate hearing in Washington in July, Dario Amodei, CEO of Anthropic, warned that AI models could soon provide all the information necessary to construct bioweapons, making large-scale biological attacks more accessible to various actors. Similar dire predictions are made about cyber weapons. Earlier this month, Gary Gensler, Chairman of America’s Securities and Exchange Commission, stated that an AI-engineered financial crisis is “almost inevitable” without prompt intervention.

Others argue that focusing on these speculative risks can divert attention from other significant threats, such as undermining the democratic process. During an earlier Senate hearing, Gary Marcus, a notable AI skeptic, began his testimony with a passage generated by GPT-4, OpenAI’s top model, which convincingly alleged that parts of Congress were “secretly manipulated by extraterrestrial entities.” Mr. Marcus contended that there should be deep concern about systems capable of generating such persuasive fabrications.

The debate over what exactly to regulate will be equally challenging to resolve. Tech companies typically propose limiting scrutiny to the most powerful “frontier” models. Microsoft, among others, has called for a licensing system requiring companies to register models that surpass specific performance thresholds. Other suggestions include controlling the sale of powerful chips used to train LLMs and mandating that cloud computing firms inform authorities when customers train frontier models.

Most firms agree that regulation should focus on the applications of these models rather than the models themselves. For office software, a light regulatory touch may be sufficient. In contrast, healthcare AI may require stringent rules, and facial recognition in public spaces might be deemed unacceptable. This application-based approach has the advantage of relying on existing laws for the most part. AI developers caution that broader and more intrusive rules could stifle innovation.

Until last year, the US, UK, and EU seemed to align with this risk-based approach. However, the rapid proliferation of LLMs since ChatGPT’s launch a year ago has prompted a rethink. The EU is now contemplating whether the models themselves need oversight: the European Parliament wants model developers to test LLMs for their potential impact on human health and human rights, and to disclose information about the data on which the models were trained. Canada is working on a more rigorous “Artificial Intelligence and Data Act,” and Brazil is discussing a similar approach. In the US, President Joe Biden’s forthcoming executive order is also expected to include stricter regulations. Even the UK may reevaluate its hands-off approach.

These stricter regulations would mark a departure from the voluntary codes of conduct that have been favored until now. Over the summer, the White House negotiated a set of “voluntary commitments,” which 15 model developers have since signed. These companies have agreed to subject their models to internal and external testing before release and to share information about how they manage AI risks.

Then comes the question of who should handle the regulation. The US and UK believe that existing government agencies can oversee most of the work. In contrast, the EU is pushing for the creation of a new regulatory body. On the international stage, a few tech executives are now advocating for an AI equivalent of the Intergovernmental Panel on Climate Change (IPCC), the body charged with keeping abreast of research on global warming and developing methods for assessing its impact.

With these questions unresolved, it is no surprise that the summit’s organizers are cautiously optimistic. As Mr. Black puts it, the event should primarily serve as “a conversation.” Nevertheless, there is hope that it might yield concrete outcomes, especially on the second day, when only around 20 of the most influential corporate and world leaders remain in attendance. They could potentially endorse the White House’s voluntary commitments and recommend the establishment of an IPCC-like body for AI, or the global expansion of the UK’s existing “Frontier AI Taskforce.”

Such an outcome would be seen as a success for the UK government and could expedite the official global efforts in AI governance, such as the G7’s code of conduct. This would be a valuable initial step, but the journey is far from over.
