OpenAI Bosses: We Take Safety ‘Very Seriously’

OpenAI is trying to ease concerns following the departure of two safety executives.

Last week saw the resignation of Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, both of whom headed the company’s “superalignment team,” which was focused on the safety of future advanced artificial intelligence (AI) systems. With their departure, the company has effectively dissolved that team.

While Sutskever said his departure was to pursue other projects, Leike wrote on X Friday (May 17) that he had reached a “breaking point” with OpenAI’s leadership over the company’s central priorities.

He also wrote that the company did not give safety enough emphasis, especially in terms of artificial general intelligence (AGI), an as-yet-unrealized version of AI that can think and reason like humans.

“We are long overdue in getting incredibly serious about the implications of AGI,” Leike wrote. “OpenAI must become a safety-first AGI company.”

But Sam Altman and Greg Brockman, OpenAI’s CEO and president, wrote in a joint message on X Saturday (May 18) that they were aware of the risks and potential of AGI, saying the company had called for international AGI standards and helped “pioneer” the practice of examining AI systems for catastrophic threats.

“Second, we have been putting in place the foundations needed for safe deployment of increasingly capable systems,” the executives wrote.

“Figuring out how to make a new technology safe for the first time isn’t easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then have continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.”

A report by Bloomberg News last week said that other members of the superalignment team had also left OpenAI in recent months, compounding the company’s challenges. The company has appointed John Schulman, a co-founder specializing in large language models, as the scientific lead for the organization’s alignment work.

Aside from the superalignment team, that report said, OpenAI has other employees specializing in AI safety spread across a range of teams inside the company, as well as individual teams dedicated solely to safety. Among them is a preparedness team that analyzes and mitigates potential catastrophic risks connected to AI systems.

Meanwhile, Altman appeared on the “All-In” podcast earlier this month, expressing support for the creation of an international agency to regulate AI, citing concerns about the potential for “significant global harm.”

The post OpenAI Bosses: We Take Safety ‘Very Seriously’ appeared first on PYMNTS.com.
