Safe Superintelligence’s Launch Spotlights OpenAI Roots

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched Safe Superintelligence Inc. (SSI), a new artificial intelligence (AI) company focused on creating a safe and powerful AI system, marking another evolution from OpenAI’s roots.

To understand Sutskever’s new venture, consider the goals of OpenAI, the company he co-founded in 2015. OpenAI aims to develop artificial general intelligence (AGI), a system that can rival human abilities across a wide range of tasks. Its mission is to ensure that AGI benefits all of humanity, not just a select few.

OpenAI comprises two entities: the non-profit OpenAI, Inc. and its for-profit subsidiary OpenAI Global, LLC. The organization has been at the forefront of the ongoing AI boom, developing technologies including the DALL·E image generation models and the chatbot ChatGPT, and is often credited with sparking the current wave of interest in AI.

Major Investors

OpenAI was founded by a group of prominent AI researchers and entrepreneurs, including Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata and Wojciech Zaremba. Sam Altman and Elon Musk served as the initial members of the board of directors.

OpenAI has received significant investments from Microsoft, totaling $11 billion as of 2023. These investments have allowed the organization to pursue its ambitious research goals and develop AI technology.

Despite its success, OpenAI has faced criticism for its shift toward a more commercial focus, with some experts arguing that the organization has strayed from its original mission of developing safe and beneficial AGI.

This criticism has been fueled by recent leadership upheaval at OpenAI, in which Altman was removed as CEO and Brockman resigned as president before both returned after negotiations with the board. The board now includes former Salesforce co-CEO Bret Taylor as chairman and retired U.S. Army general and former National Security Agency (NSA) head Paul Nakasone as a member, with Microsoft also obtaining a non-voting board seat.

Against this backdrop, Sutskever has launched SSI, which he claims will approach safety and capabilities in tandem, allowing the company to advance its AI system while prioritizing safety. The announcement emphasized the company’s commitment to avoiding distractions from management overhead or product cycles, which often pressure AI teams at companies like OpenAI, Google and Microsoft.

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement said. “This way, we can scale in peace.”

Shift in Priorities?

The launch of the new company follows a turbulent period at OpenAI. In late 2023, Sutskever was involved in an unsuccessful attempt to remove Altman as CEO. By May 2024, Sutskever had decided to leave the company entirely.

Sutskever’s exit is part of a broader trend. Shortly after his departure, two other notable OpenAI employees — AI researcher Jan Leike and policy researcher Gretchen Krueger — also announced their resignations. Both cited concerns that OpenAI was prioritizing product development over safety considerations.

These departures have sparked discussions within the AI community about how to balance rapid technological advancement with responsible development practices. Many interpret Sutskever’s decision to start SSI as a response to what he perceives as a shift in OpenAI’s focus.

As OpenAI continues to forge partnerships with tech giants like Apple and Microsoft, SSI is taking a different approach, focusing solely on developing safe superintelligence without the pressure of commercial interests.

This has reignited the debate over whether such a feat is achievable, with some experts questioning the feasibility of creating a superintelligent AI given the current limitations of AI systems and the difficulty of ensuring such a system’s safety.

Critics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common-sense reasoning and contextual understanding. They argue that the leap from narrow AI, which excels at specific tasks, to a general intelligence that surpasses human capabilities across all domains is not merely a matter of increasing computational power or data.

Even among those who believe superintelligence is possible, there are concerns about ensuring its safety. The development of a superintelligent AI would require not only advanced technical capabilities but also a deep understanding of ethics, values and the potential consequences of such a system’s actions. Skeptics argue that the challenges involved in creating a safe superintelligence may be insurmountable given our current understanding of AI and its limitations.

As the AI landscape continues to evolve, the debate over the potential and limitations of artificial intelligence is likely to intensify. While the goal of creating a safe and beneficial superintelligent AI remains a distant and controversial prospect, the work of researchers like Sutskever and his colleagues at SSI will likely shape the future of this rapidly advancing field, just as OpenAI’s achievements have done in recent years.
