What Elon Musk and Ilya Sutskever Feared About OpenAI Is Becoming Reality

OpenAI is reportedly working on A.I. models with advanced reasoning capabilities under the code-name "Strawberry."

As part of OpenAI’s path toward artificial general intelligence (A.G.I.), a term for technology matching the intelligence of humans, the company is reportedly attempting to enable A.I. models to perform advanced reasoning. The work is taking place under a secretive project code-named “Strawberry,” as reported by Reuters, which noted that the project was previously known as Q* (pronounced Q Star). While its name may have changed, the project isn’t exactly new. Researchers and co-founders of OpenAI have previously warned against the initiative, and concerns over it reportedly played a part in the brief ousting of Sam Altman as OpenAI’s CEO in November.

Strawberry uses a unique method of post-training A.I. models, a process that improves their performance after they have been trained on datasets, according to Reuters, which cited internal OpenAI documents and a person familiar with the project. With the help of “deep-research” datasets, the company aims to create models that display human-level reasoning. OpenAI is reportedly looking into how Strawberry can enable models to complete tasks over an extended period of time, search the web on their own and take action on their findings, and perform the work of engineers. OpenAI did not respond to requests for comment from Observer.

Elon Musk and Ilya Sutskever raised concerns about Q*

Altman, who has repeatedly stated OpenAI’s desire to create models able to reason, briefly lost control of his company last year when his board fired him for four days. Shortly before the ousting, several OpenAI employees had become concerned over breakthroughs presented by what was then known as Q*, a project spearheaded by Ilya Sutskever, OpenAI’s former chief scientist. Sutskever himself had reportedly begun to worry about the project’s technology, as had OpenAI employees working on A.I. safety at the time. After his reinstatement, Altman referred to news reports about Q* as an “unfortunate leak” in an interview with The Verge.

Elon Musk, another OpenAI co-founder, has also raised the alarm about Q* in the past. The billionaire, who severed ties with the company in 2018, referred to the project in a lawsuit filed against OpenAI and Altman that has since been dropped. While discussing OpenAI’s close partnership with Microsoft (MSFT), Musk’s suit claimed that the terms of the deal dictate that Microsoft only has rights to OpenAI’s pre-A.G.I. technology and that it is up to OpenAI’s board to determine when the company has achieved A.G.I.

Musk argued that OpenAI’s GPT-4 model constitutes A.G.I., which he believes “poses a grave threat to humanity,” according to the suit. Court filings stated that “OpenAI is currently developing a model known as Q* that has an even stronger claim to A.G.I.”

Recent internal meetings suggest that OpenAI is making rapid progress toward the type of human-level reasoning that Strawberry aims to achieve. At an OpenAI all-hands meeting held earlier this month, the company unveiled a five-tiered system to track its progress toward A.G.I., as reported by Bloomberg. While the company said it is currently on the first level, known as “chatbots,” it revealed that it has nearly reached the second level, “reasoners,” which involves technology that can display human-level problem-solving. The subsequent tiers consist of A.I. systems acting as “agents” that can take actions, “innovators” that aid in invention, and “organizations” that can do the work of an entire organization.