Regulate AI? How US, EU and China are going about it

WASHINGTON – Governments do not have a great track record of keeping up with emerging technology. But the complex, rapidly evolving field of artificial intelligence (AI) raises legal, national security and civil rights concerns that cannot be ignored.

The European Union has reached a tentative deal on a sweeping law that would put guardrails on the technology; in China, no company can produce an AI service without proper approvals. The United States is still working on its regulatory approach.

While Congress considers legislation, some American cities and states have already passed laws limiting use of AI in areas such as police investigations and hiring, and President Joe Biden has directed government agencies to vet future AI products for potential national or economic security risks.

1. Why does AI need regulating?

Already at work in products as diverse as toothbrushes and drones, systems based on AI have the potential to revolutionise industries from healthcare to logistics. But replacing human judgment with machine learning carries risks.

Even if the ultimate worry – fast-learning AI systems going rogue and trying to destroy humanity – remains in the realm of fiction, there are already concerns that bots doing the work of people can spread misinformation, amplify bias, compromise the integrity of tests and violate people’s privacy.

Reliance on facial recognition technology, which uses AI, has already led to people being falsely accused of crimes. A fake AI photo of an explosion near the Pentagon spread on social media, briefly pushing US stocks lower.

Alphabet’s Google, Microsoft, IBM and OpenAI have encouraged lawmakers to implement federal oversight of AI, which they say is necessary to guarantee safety. 

2. What’s been done in the US?

Mr Biden’s executive order on AI sets standards on security and privacy protections and builds on voluntary commitments adopted by more than a dozen companies. Members of Congress have shown intense interest in passing laws on AI, which would be more enforceable than the White House effort, but an overriding strategy has yet to emerge.

Two key senators said they would welcome legislation that establishes a licensing process for sophisticated AI models, an independent federal office to oversee AI, and legal liability for companies that violate privacy or civil rights.

Among more narrowly targeted Bills proposed so far, one would prohibit the US government from using an automated system to launch a nuclear weapon without human input; another would require that AI-generated images in political advertisements be clearly labelled.

At least 25 US states considered AI-related legislation in 2023, and 15 passed laws or resolutions, according to the National Conference of State Legislatures. Proposed legislation sought to limit use of AI in employment and insurance decisions, healthcare, ballot-counting and facial recognition in public settings, among other objectives. 

3. What has the EU done?

The EU reached a preliminary deal in December on what is poised to become the most comprehensive regulation of AI in the Western world. It would set safeguards on uses of AI seen as posing the greatest risk of manipulating the public, such as live scanning of faces.

Developers of general-purpose AI models would be required to report a detailed summary of the data used to train their models, according to an EU document seen by Bloomberg. Highly capable models, such as OpenAI’s GPT-4, would be subject to additional rules, including reporting their energy consumption and putting protections in place against hackers.

The draft legislation still needs to be formally approved by EU member states and the EU Parliament. Companies that violate the rules would face fines of up to €35 million (S$51 million) or 7 per cent of global revenue, depending on the infringement and size of the company. 

4. What has China done?

A set of 24 government-issued guidelines took effect on Aug 15, 2023, targeting generative AI services, such as ChatGPT, that create images, videos, text and other content. Under those guidelines, AI-generated content must be properly labelled and respect rules on data privacy and intellectual property.

A separate set of rules governing the AI-aided algorithms used by technology companies to recommend videos and other content took effect in 2022. 

5. What do the companies say?

Leading technology companies including Amazon.com, Alphabet, IBM and Salesforce pledged to follow the Biden administration’s voluntary transparency and security standards, including putting new AI products through internal and external tests before their release.

In September 2023, Congress summoned tech tycoons including Mr Elon Musk and Mr Bill Gates to advise on its efforts to create a regulatory regime.

One concern for companies is the degree to which US rules could apply to the developers of AI products, not just to their users. That mirrors a debate in Europe, where Microsoft contended in a position paper that it is crucial to focus on the actual use cases of AI, because companies can’t “anticipate the full range of deployment scenarios and their associated risks”.

6. Why is the US effort in focus?

Since American tech companies and specialised American-made microchips are at the forefront of AI innovation, US leaders wield particular sway over how the field is overseen.

Many of the participants in the Senate meetings stressed that the US should play a leading role in the shaping of global governance of AI, and some referenced China’s advancements in the field as a specific concern.

Critics have raised concern about the potential for tech executives to have too much influence over legislation, creating a form of regulatory capture that could enhance the power of a few large companies and hamper efforts by so-called open-source organisations to build competing AI platforms. BLOOMBERG
