OpenAI reportedly dissolves its existential AI risk team

OpenAI’s Superalignment team, tasked with controlling the existential dangers of superhuman AI systems, has reportedly been disbanded, Wired reported on Friday. The news comes just days after the team’s founders, Ilya Sutskever and Jan Leike, quit the company at the same time.

Wired reports that OpenAI’s Superalignment team, first launched in July 2023 to prevent superhuman AI systems of the future from going rogue, is no more. The report states that the group’s work will be absorbed into OpenAI’s other research efforts. According to Wired, research on the risks associated with more powerful AI models will now be led by OpenAI co-founder John Schulman. Sutskever and Leike were among OpenAI’s top scientists focused on AI risks.

Leike posted a long thread on X on Friday vaguely explaining why he left OpenAI. He said he had been disagreeing with OpenAI leadership about core values for some time, but reached a breaking point this week. Leike noted that the Superalignment team had been “facing headwinds” and struggling to get enough computing power for crucial research. He believes OpenAI needs to focus more on security, safety, and alignment.

OpenAI’s press team directed us to Sam Altman’s tweet when asked whether the Superalignment team had been disbanded. Altman said he plans to post something longer in the coming days and that OpenAI “still has a lot of work to do.”

Afterward, an OpenAI spokesperson clarified that “Superalignment will be more deeply ingrained across research in the future, which will allow us to better achieve the goals of Superalignment.” The company says the integration began “several weeks ago” and will ultimately move the Superalignment team’s members and projects to other teams.

“Currently, there is no solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue,” the Superalignment team said in the OpenAI blog post announcing its launch last July. “Current alignment techniques will not scale to superintelligence, because humans will not be able to reliably supervise AI systems much smarter than we are. We need new scientific and technical breakthroughs.”

It is unclear at this point whether that line of research will receive the same attention. OpenAI certainly has other teams focused on safety, and Schulman’s team, which is reportedly absorbing Superalignment’s responsibilities, currently focuses on fine-tuning AI models after training. However, Superalignment focused specifically on the most severe outcomes of rogue AI. As Gizmodo noted yesterday, several of OpenAI’s most outspoken AI safety advocates have resigned or been fired in the last few months.

Earlier this year, the group published a notable research paper on controlling large AI models with smaller AI models, considered a first step toward controlling superintelligent AI systems. It is unclear who at OpenAI will take the next steps on these projects.
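For a sense of what that paper explored: below is a minimal, illustrative sketch of the weak-to-strong supervision idea, where a small “weak supervisor” model labels data and a larger “strong student” model is trained on those imperfect labels. This toy uses off-the-shelf scikit-learn classifiers as stand-ins for the language models in the actual paper; everything here is an assumption for illustration, not OpenAI’s code.

```python
# Toy sketch of weak-to-strong supervision (assumed setup, not OpenAI's code).
# A small model trained on limited ground truth acts as the "weak supervisor";
# a larger model (the "strong student") learns only from the weak model's labels.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification data standing in for a real task.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# The weak supervisor sees a small slice of ground truth; the student pool is the rest.
X_weak, X_pool, y_weak, y_pool = train_test_split(X, y, test_size=0.8, random_state=0)
X_train, X_test, _, y_test = train_test_split(X_pool, y_pool, test_size=0.25, random_state=0)

# "Weak supervisor": a simple linear model trained on limited labeled data.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# The strong student never sees ground truth; it trains on the weak model's labels.
weak_labels = weak.predict(X_train)
strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500,
                       random_state=0).fit(X_train, weak_labels)

# The question the paper asks: can the student outperform its supervisor?
print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
```

In the paper’s setting, the strong student often recovers much of the gap between the weak supervisor and a fully supervised model, a phenomenon the authors call weak-to-strong generalization.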

Sam Altman’s AI startup kicked off this week by introducing GPT-4 Omni, the company’s latest frontier model, which features ultra-low-latency responses that sound more human than ever. Many OpenAI staffers remarked that the latest AI model feels closer than ever to something out of science fiction, specifically the movie Her.

Source: https://gizmodo.com/openai-reportedly-dissolves-its-existential-ai-risk-tea-1851484827
