How will AI be regulated?
Why is regulation of AI needed?
Regulators around the world have found no shortage of issues to worry about with the rise of artificial intelligence.
Should they intervene in algorithms that could bias or distort decisions that affect the everyday lives of billions? What about the risk that chatbots, such as ChatGPT, will supercharge the production of online misinformation, or lead to the misuse of vast amounts of personal data? And what should they do about warnings that computers could soon reach such a level of intelligence that they escape the control of their makers — with potentially dire consequences for humanity?
The technology is moving so fast that there has been little agreement yet on a regulatory agenda, though the focus on both sides of the Atlantic has increasingly fallen on the most powerful, general-purpose AI systems. Developed by companies like OpenAI, Google and Anthropic, and known as foundation models, these systems support a wide range of applications from different companies, giving them broad impact across society and the economy.
What AI issues are regulators looking at first?
The European Union was well on its way to finalising a first-of-its-kind AI Act that would control, or even ban, supposedly “high-risk” uses of AI — such as its use to make decisions on job or loan applications, or health treatments. Then ChatGPT mania exploded — the huge public interest in OpenAI’s freely available generative AI chatbot.
Lawmakers quickly adjusted their plans, setting new rules that will force companies to disclose what data foundation models like the one behind ChatGPT have been trained on. Creators of the most powerful models, which Brussels believes could pose systemic risks, face extra requirements, such as assessing and mitigating risks in their models and reporting any serious incidents. The Act, which was adopted in May 2024 and begins to come into force a year later, also established a powerful new AI Office charged with setting the standards that advanced AI systems have to meet.
However, Patrick Van Eecke, co-chair of law firm Cooley’s global cyber, data and privacy practice, believes Brussels has moved too soon to try to regulate a technology that is still “a moving target,” reflecting a cultural bias towards knee-jerk regulation. “We like to regulate reality even before it becomes reality,” he says — echoing a view widely held in the AI world.
Many US tech executives have a different explanation. They see Brussels’ haste to directly regulate the most powerful AI systems as a deliberate protectionist move by the EU, slapping limitations on a group of mainly American companies that dominate the AI industry.
Will the EU’s AI regulations become a model for the rest of the world?
That is what happened with the bloc’s data protection legislation, and it is a potential development that US tech companies are concerned about. The EU Act’s backers say it will be enforced flexibly to reflect changing standards and technology advances. But critics say experience shows Brussels takes a more dogmatic approach — and that rules baked in now could limit the technology’s evolution.
Some European companies agree. In a letter to the European Commission in 2023, 150 large European concerns warned that the law could hamper the bloc’s economy by preventing companies there from freely using important AI technology.
Aren’t AI companies asking for regulation?
The AI industry has learned from the backlash against social media that it does not pay to duck regulation on technologies that can have significant social and political impact.
But that does not mean they like what’s planned by the EU. Sam Altman, head of OpenAI and a voluble supporter of AI regulation, told the FT that his company might have to pull out of the EU altogether if the final rules on AI are too stringent. The furore his words provoked led him to backtrack quickly but, behind the scenes, US companies’ concerns are undimmed.
The readiness of big tech companies to call for regulation has also provoked suspicions that they see it as a way to entrench their hold on the AI market. Higher costs and bureaucracy could make it harder for new competitors to break in.
What’s the alternative to the EU approach?
Before deciding on new laws, many countries are taking a close look at how their existing regulations apply to applications that are powered by AI.
In the US, for example, the Federal Trade Commission has opened an investigation into ChatGPT, using its existing powers. One of its concerns is that ChatGPT is sucking up personal data and sometimes using it to regurgitate false and damaging information about ordinary people. US lawmakers have also embarked on a broad review of AI that explicitly tries to balance the benefits of the technology against its potential harms.
US Senate majority leader Chuck Schumer has called for a series of expert briefings and forums for the most important Senate committees, to help them decide which aspects of AI might need regulating.
Holly Fechner, co-chair of the technology industry group at law firm Covington & Burling, has said: “Significant bipartisanship in Congress on US competition with China” makes Schumer’s approach “a winning message — and signals that the US is moving in a different direction than Europe.”
However, like Brussels, Washington has also started to impose requirements on the most powerful foundation models. In an executive order adopted late in 2023, the Biden White House required companies that create powerful, dual-use systems — that is, ones that also have a potential military use — to disclose their systems’ capabilities to officials, while also promoting ways to set standards and guidelines for how such models are trained, tested and monitored. Though less stringent than the EU’s new law, the order was the first comprehensive attempt in the US to address AI.
If governments don’t regulate now, won’t the AI race become a dangerous free-for-all?
Many tech companies say that the development of AI should mirror the early days of the internet: regulators held off then, letting innovation flourish, and only stepped in later, as needed.
There are already signs that new industry standards and agreements about best practices in AI are starting to take hold, even without explicit regulation. In the US, for example, the industry has been working with the National Institute of Standards and Technology on codifying the best ways to design, train and deploy AI systems. At the urging of the White House, a group of leading AI companies signed up last year to a set of voluntary commitments.
Yet critics warn that a flood of capital into the AI industry and soaring valuations for AI start-ups have led to unrestrained development of the technology, regardless of the risks. In one of the clearest signs of the stresses this has created inside tech companies, Sam Altman, CEO of OpenAI, was sacked by the company’s board in 2023 over worries about his leadership, before a staff revolt led to his reinstatement five days later. Others who have warned about the dangers of a headlong rush in AI — like Elon Musk — have also appeared to throw aside their concerns and join the race.
Some people developing AI say it could destroy humanity — is that not a reason to regulate immediately?
No one in the tech industry thinks today’s AI systems present an existential threat to humanity and there is no agreement on when — if ever — the technology could reach that point. But last year an open letter signed by many technologists called for a six-month moratorium on work on the most advanced systems, to allow time to come up with new safety protocols.
While governments have started to consider this issue, it would take new international agreements to try to control the spread of dangerous AI. Even then, such efforts might be impractical, given the wide availability of the computing resources and data sets needed to train AI systems.
For now, the same companies that are leading the charge into AI claim they are also at the forefront of trying to rein it in. OpenAI said in the middle of 2023 that it was creating an internal team to start researching ways to control “superintelligent” computers, which it believes could arrive this decade. But less than a year later, it disbanded the team — and one of the people who had led it accused the company of being more interested in building “shiny products” than creating a real culture around AI safety.