
Citing a ‘risk to society,’ Elon Musk and experts urge AI pause


In an open letter citing possible risks to society and humanity, Elon Musk and a group of artificial intelligence (AI) experts and business executives are urging a six-month halt to the development of systems more powerful than OpenAI’s recently released GPT-4.

Microsoft-backed OpenAI unveiled GPT-4, the fourth version of its GPT (Generative Pre-trained Transformer) AI program, earlier this month. The system has wowed users with its wide range of applications, from engaging in human-like conversation to composing songs and summarizing documents.

More than 1,000 individuals, including Musk, signed the letter from the nonprofit Future of Life Institute, which urged a halt to the development of advanced artificial intelligence until standardized safety guidelines for such designs are created, put into place, and independently audited.

The letter stated that “powerful AI systems should only be developed once we are confident that their effects will be positive and their risks will be manageable.”

The letter also outlined the potential dangers that human-competitive AI systems pose to society and civilization, including political and economic turmoil, and urged developers to collaborate with regulators and lawmakers on governance and regulatory frameworks.

Musk, for his part, has been outspoken about his worries regarding AI, even as his own company, Tesla, relies on the technology in its Autopilot driver-assistance system.

Since its debut in late 2022, OpenAI’s ChatGPT has inspired competitors to accelerate development of similar large language models and prompted businesses to incorporate generative AI models into their products.

Critics asserted that claims regarding the technology’s current potential had been greatly exaggerated and accused the letter’s signatories of promoting “AI hype.”

According to Umeå University assistant professor and AI researcher Johanna Björklund, “These remarks aim to create excitement.” She added, “The purpose of it is to make people anxious. I don’t think there’s a need to pull the handbrake.”

Rather than pausing research, she suggested putting greater transparency requirements in place for AI researchers. “You should be very transparent about how you conduct AI research,” she advised.


Jaelyn Campbell
Jaelyn Campbell is a staff writer/reporter for CBT News. She is a recent honors cum laude graduate with a BFA in Mass Media from Valdosta State University. Jaelyn is an enthusiastic creator with more than four years of experience in corporate communications, editing, broadcasting, and writing. Her articles in The Spectator, her hometown newspaper, changed how people perceive virtual reality. She connects her readers to the facts while providing them a voice to understand the challenges of being an entrepreneur in the digital world.
