Elon Musk and others urge pause in AI research, citing potential risks to humanity

Elon Musk, Steve Wozniak and other tech experts have signed an open letter calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. They warn that such systems could represent a profound change in the history of life on Earth and pose serious risks to society and humanity.

Earlier this month, artificial intelligence company OpenAI released the fourth iteration of its GPT (Generative Pre-trained Transformer) model, the AI-powered language system that became an internet sensation in 2022.

GPT-4, the latest model, can accept images as input, meaning it can look at a photo and provide the user with broad information about it, and it can write code in all major programming languages, among other advances.

The letter was signed by more than 1,000 people, including Musk. Sam Altman, chief executive of OpenAI, was not among the signatories, nor were Sundar Pichai and Satya Nadella, the CEOs of Alphabet and Microsoft. The letter urged the establishment of shared safety protocols, audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter said.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter added.

The open letter called on AI developers to work with policymakers to improve oversight of artificial intelligence technology, and called on the industry to shift its priorities as it works to enhance AI.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter said.

The open letter, titled “Pause Giant AI Experiments,” poses several questions:

Should we let AI flood social media with propaganda and untruth?

Should we automate all the jobs?

Should we develop AI minds that might eventually outnumber, outsmart, obsolete, and replace us?

Should we risk loss of control of our civilization?

However, the letter did not detail the specific dangers posed by GPT-4.

The author is the Editor of Millat Times English and a founding member of Millat Times Group, which features stories and reports. Email: irshadayub5@gmail.com