But Turing Award winner Yann LeCun disagrees

Deborah Yao, Editor, AI Business

March 30, 2023


Billionaire Elon Musk and Apple co-founder Steve Wozniak joined more than 1,000 people in signing an open letter calling for a six-month pause on the development of advanced AI systems, which they say “pose profound risks to society and humanity.”

Penned by the nonprofit Future of Life Institute, the letter said that in recent months AI labs have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

ChatGPT, OpenAI’s chatbot released to the public in November 2022, has become the fastest-growing app of all time. Its human-like responses to queries, or prompts, have led to increased investment from Microsoft and much closer collaboration between the two companies. ChatGPT’s capabilities are now being incorporated across the software giant’s products, sparking a competitive response from Google, which rushed to match these announcements.

But the Institute said such advanced tech needs to be “planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”

The letter has garnered nearly 1,200 signatures, with many more names on hold for verification “due to high demand,” the nonprofit said in a blog post.

Besides Musk and Wozniak, other notable signers include Turing Award winner Yoshua Bengio, AI luminary Stuart Russell, former presidential candidate Andrew Yang, and co-founders of Skype, Pinterest and Ripple, among many others.

Even Emad Mostaque, CEO of Stability AI, the maker of popular text-to-image generator Stable Diffusion, signed. There were a few signers from Google, DeepMind and Microsoft, but none from OpenAI.

One notable dissenter is Turing Award winner Yann LeCun, who is also Meta’s chief AI scientist. “Nope. I did not sign this letter. I disagree with its premise,” he tweeted. LeCun later deleted the tweet.

Fearing Generative AI

The Institute, headed by MIT physics professor Max Tegmark, said current AI systems are now becoming human-competitive at general tasks.

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks. “Such decisions must not be delegated to unelected tech leaders.”

The nonprofit even pointed to OpenAI’s own statement regarding artificial general intelligence: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

During the proposed pause, the letter says, AI labs and independent experts should come together to develop shared safety protocols for advanced AI design that would be audited by independent third parties.

Gary Marcus, professor emeritus at New York University, said he is not exactly sure “what counts as more powerful than GPT-4” but signed the letter nonetheless because he agrees with its spirit.

This article first appeared on IoT World Today's sister site, AI Business.

About the Author

Deborah Yao

Editor, AI Business

Deborah Yao is an award-winning journalist who has worked at The Associated Press, Amazon and the Wharton School. A graduate of Stanford University, she is a business and tech news veteran with particular expertise in finance. She loves writing stories at the intersection of AI and business.
