
The OpenAI team in charge of mitigating the risks of super-intelligent AI has lost nearly half its members, says a former researcher

"It's not been like a coordinated thing. I think it's just people sort of individually giving up," former OpenAI governance researcher Daniel Kokotajlo (not pictured) told Fortune.
  • OpenAI initially had about 30 people working on AI safety, says ex-researcher Daniel Kokotajlo.
  • But Kokotajlo says multiple departures have since reduced its ranks to about 16 members.
  • Kokotajlo said people working on "AGI safety and preparedness are being increasingly marginalized."

OpenAI has lost nearly half of the company's team working on AI safety, according to one of its former governance researchers, Daniel Kokotajlo.

"It's not been like a coordinated thing. I think it's just people sort of individually giving up," Kokotajlo told Fortune in a report published Tuesday.

Kokotajlo, who left OpenAI in April 2024, said that the ChatGPT maker initially had about 30 people working on safety issues relating to artificial general intelligence.

But a string of departures over the past year has since whittled the safety team's ranks down to about 16 members, per Kokotajlo.

"People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized," Kokotajlo told the outlet.

Business Insider could not independently confirm Kokotajlo's claims about OpenAI's staffing numbers. When approached for comment, a spokesperson for OpenAI told Fortune that the company is "proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk."

OpenAI, the spokesperson added, will "continue to engage with governments, civil society and other communities around the world" on issues relating to AI risks and safety.

Earlier this month, the company's cofounder and head of its alignment science efforts, John Schulman, said he was leaving OpenAI to join rival Anthropic.

Schulman said in an X post on August 5 that his decision was a "personal one" and wasn't "due to lack of support for alignment research at OpenAI."

Schulman's departure came just months after another cofounder, chief scientist Ilya Sutskever, announced his resignation from OpenAI in May. Sutskever launched his own AI company, Safe Superintelligence Inc., in June.

Jan Leike, who co-led OpenAI's superalignment team with Sutskever, left the company in May. Like Schulman, he now works at Anthropic.

Leike and Sutskever's team was tasked with ensuring that any superintelligent AI systems OpenAI might build would remain aligned with humanity's interests.

"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote in an X post in May.

"But over the past years, safety culture and processes have taken a backseat to shiny products," he added.

OpenAI didn't immediately respond to a request for comment from Business Insider sent outside regular business hours.

Read the original article on Business Insider
