China, the West's AI Adversary, Joins Call for AI Oversight

Global AI leaders sign yet another open letter warning against the existential risk of AI. This time, China joins them

Deborah Yao, Editor, AI Business

June 7, 2023

2 Min Read

Global leaders and AI luminaries have signed yet another open letter to make sure the public and regulators are aware of the existential risks posed by AI.

Around the same time, China’s Xinhua news agency reported that Chinese President Xi Jinping is calling for an acceleration in the modernization of China’s national security system and capacity. This includes improving guardrails around the use of AI and online data.

China is widely considered the West's chief AI competitor and potential adversary. Xi's latest mandates are ostensibly aimed at China's own sovereign security rather than global safety.

Xi made the remarks at the first meeting of the National Security Commission under the 20th CPC Central Committee. At the meeting, members urged “dedicated efforts” to improve the security governance of internet data and artificial intelligence.

The mandates come as China’s national security issues are “considerably more complex and much more difficult to be resolved,” the group said. As such, China must be prepared to handle “high winds, choppy waters, and even dangerous storms.”

At the meeting, members called for proactively shaping a “favorable” external security environment for China to “better safeguard its opening up and push for a deep integration of development and security.”

Warning from Silicon Valley

In San Francisco, the nonprofit Center for AI Safety announced on May 30 that a “historic” collection of AI experts signed its open letter, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, as well as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Microsoft co-founder Bill Gates and many others.

The letter has one statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Notably, the nonprofit said Meta leaders did not sign the letter.

In late March, many of the same figures signed an open letter from the Future of Life Institute warning of similar existential risks posed by advanced AI. Altman and other AI leaders have since been on a world tour to urge stronger regulation of powerful AI above a certain capability threshold. And the U.S. has called for the public vetting of advanced AI models from OpenAI, Google, Microsoft and others.

This article first appeared on IoT World Today's sister site, AI Business.


About the Author(s)

Deborah Yao

Editor, AI Business

Deborah Yao is an award-winning journalist who has worked at The Associated Press, Amazon and the Wharton School. A graduate of Stanford University, she is a business and tech news veteran with particular expertise in finance. She loves writing stories at the intersection of AI and business.


