AI safety has taken a backseat to 'shiny products,' says a recently resigned OpenAI executive


How can we prevent AI from going out of control?

OpenAI, the $80 billion AI company behind ChatGPT, just disbanded the team working on that problem after the two executives leading the effort left the company.

The controversy over AI safety comes less than a week after OpenAI announced its new GPT-4o model, which has more features and a voice eerily similar to Scarlett Johansson's. The company paused the use of that particular voice on Monday.

Related: Scarlett Johansson says she told OpenAI 'no,' then was 'shocked' the company used a voice that was 'eerily similar' to hers

Sahil Agarwal, a Yale PhD in applied mathematics and the co-founder and CEO of Enkrypt AI, a startup focused on making AI a safer bet for businesses, told Entrepreneur that innovation and safety are not separate things to be balanced against each other, but should go hand in hand as a company grows.

“Trying to make these systems safer for society doesn’t stop innovation from happening,” Agarwal said.

OpenAI executives raise safety concerns

Last week, former OpenAI chief scientist and co-founder Ilya Sutskever and former OpenAI alignment lead Jan Leike resigned from the AI giant. The two had led the superalignment team, tasked with ensuring that AI remains under human control even as its capabilities increase.

Related: OpenAI chief scientist and co-founder Ilya Sutskever resigns

In his parting words, Sutskever said he is "confident" that OpenAI, under CEO Sam Altman's leadership, will build AI that is "safe and useful." Leike, meanwhile, said he left because he felt that OpenAI was not prioritizing AI safety.

"For the past few months, my team has been sailing against the wind," Leike wrote. "Building machines that are smarter than humans is an inherently risky endeavor."

Leike also said that "over the past few years, safety culture and processes have taken a backseat to shiny products" at OpenAI, and called on the maker of ChatGPT to prioritize safety.

But over the past few years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

OpenAI has disbanded the superalignment team led by Leike and Sutskever, the company confirmed to Wired on Friday.

Sam Altman, CEO of OpenAI. Photographer: Dustin Chambers/Bloomberg via Getty Images

Altman and OpenAI president and co-founder Greg Brockman released a statement responding to Leike on Saturday, saying that OpenAI has raised awareness of AI risks, helped the world prepare for them, and has been deploying AI systems safely.

We really appreciate everything Jan has done for OpenAI, and we know he will continue to contribute to this mission externally. In light of the questions raised by his resignation, I wanted to explain a little bit about how we are thinking about our overall strategy.

first,… https://t.co/djlcqEiLLN

— Greg Brockman (@gdb) May 18, 2024

How can we prevent AI from going out of control?

As OpenAI works to make ChatGPT more human-like, Agarwal says it is not necessarily superintelligent beings that pose the danger.

"Even systems like ChatGPT are never implicitly making inferences," Agarwal told Entrepreneur. "So I don't look at this risk from the perspective of superintelligent artificial organisms."

The problem, he explained, is that as AI becomes more powerful and multifaceted, the potential for implicit bias and harmful content grows, raising the risk of deploying it. Adding more ways to interact with ChatGPT, from images to video, forces OpenAI to consider safety from more angles.

Related: OpenAI launches new AI chatbot GPT-4o

Agarwal's company released a safety leaderboard earlier this month ranking the safety and security of AI models from Google, Anthropic, Cohere, OpenAI, and more.

It found that the new GPT-4o model may contain more bias and produce more toxic content than previous models.

“What ChatGPT has done is make AI a reality for everyone,” Agarwal said.
