China and Europe are leading the push to regulate AI

A robot plays the piano at the Apsara Conference, a cloud computing and artificial intelligence conference, in China, on Oct. 19, 2021.

STR | AFP | Getty Images

As China and Europe try to rein in artificial intelligence, a new front is opening up around who will set the standards for the burgeoning technology.

In March, China rolled out regulations governing how companies use algorithms to generate online recommendations, suggesting what users buy, watch or read.

It is the latest salvo in China’s tightening grip on the tech sector, and lays down an important marker in the way that AI is regulated.

“For some people it was a surprise that last year, China started drafting the AI regulation. It’s one of the first major economies to put it on the regulatory agenda,” Xiaomeng Lu, director of Eurasia Group’s geo-technology practice, told CNBC.

While China revamps its rulebook for tech, the European Union is thrashing out its own regulatory framework to rein in AI, but it has yet to pass the finish line.

With two of the world’s largest economies presenting AI regulations, the field for AI development and business globally could be about to undergo a significant change.

A global playbook from China?

At the core of China’s latest policy are online recommendation systems. Companies must inform users if an algorithm is being used to display certain information to them, and people can choose to opt out of being targeted.

Lu said that this is an important shift as it grants people a greater say over the digital services they use.  

Those rules come amid a changing environment in China for its biggest internet companies. Several of China’s homegrown tech giants — including Tencent, Alibaba and ByteDance — have found themselves in hot water with authorities, namely around antitrust.


“I think those trends shifted the government attitude on this quite a bit, to the extent that they start looking at other questionable market practices and algorithms promoting services and products,” Lu said.

China’s moves are noteworthy, given how quickly they were implemented, compared with the timeframes that other jurisdictions typically work with when it comes to regulation.

China’s approach could provide a playbook that influences other laws internationally, said Matt Sheehan, a fellow at the Asia program at the Carnegie Endowment for International Peace.

“I see China’s AI regulations and the fact that they’re moving first as essentially running some large-scale experiments that the rest of the world can watch and potentially learn something from,” he said.

Europe’s approach

The European Union is also hammering out its own rules.

The AI Act is the next major piece of tech legislation on the agenda in what has been a busy few years.

In recent weeks, the bloc closed negotiations on the Digital Markets Act and the Digital Services Act, two major regulations that will curtail Big Tech.

The AI law now seeks to impose an all-encompassing framework based on the level of risk, which will have far-reaching effects on what products a company brings to market. It defines four categories of risk in AI: minimal, limited, high and unacceptable.

France, which holds the rotating EU Council presidency, has floated new powers for national authorities to audit AI products before they hit the market.

Defining these risks and categories has proven fraught at times. Members of the European Parliament have called for a ban on facial recognition in public places to restrict its use by law enforcement, while the European Commission wants to ensure it can still be used in investigations. Privacy activists, meanwhile, fear the technology will expand surveillance and erode privacy.

Sheehan said that although the political system and motivations of China will be “totally anathema” to lawmakers in Europe, the technical objectives of both sides bear many similarities — and the West should pay attention to how China implements them. 

“We don’t want to mimic any of the ideological or speech controls that are deployed in China, but some of these problems on a more technical side are similar in different jurisdictions. And I think that the rest of the world should be watching what happens out of China from a technical perspective.”

China’s efforts are more prescriptive, he said, and they include algorithm recommendation rules that could rein in the influence of tech companies on public opinion. The AI Act, on the other hand, is a broad-brush effort that seeks to bring all of AI under one regulatory roof.

Lu said the European approach will be “more onerous” on companies as it will require premarket assessment.

“That’s a very restrictive system versus the Chinese version, they are basically testing products and services on the market, not doing that before those products or services are being introduced to consumers.”

‘Two different universes’

Seth Siegel, global head of AI at Infosys Consulting, said that as a result of these differences, a schism could form in the way AI develops on the global stage.

“If I’m trying to design mathematical models, machine learning and AI, I will take fundamentally different approaches in China versus the EU,” he said.

At some point, China and Europe will dominate the way AI is policed, creating “fundamentally different” pillars for the technology to develop on, he added.

“I think what we’re going to see is that the techniques, approaches and styles are going to start to diverge,” Siegel said.

Sheehan disagrees that the world’s AI landscape will splinter as a result of these differing approaches.

“Companies are getting much better at tailoring their products to work in different markets,” he said.

The greater risk, he added, is researchers being sequestered in different jurisdictions.

The research and development of AI crosses borders and all researchers have much to learn from one another, Sheehan said.

“If the two ecosystems cut ties between technologists, if we ban communication and dialog from a technical perspective, then I would say that poses a much greater threat, having two different universes of AI which could end up being quite dangerous in how they interact with each other.”

https://www.cnbc.com/2022/05/26/china-and-europe-are-leading-the-push-to-regulate-ai.html
