Hacker accessed OpenAI's internal AI details in 2023 breach: report

OpenAI was reportedly breached last year by a hacker who accessed the company's internal discussions about its artificial intelligence (AI) technologies.

A report Thursday by the New York Times, citing two people familiar with the incident, said a hacker accessed an online forum where OpenAI employees discussed the company's latest technologies.

However, the hacker did not access the systems where OpenAI houses and builds its AI, including the chatbot ChatGPT.

OpenAI executives told employees and the company's board about the breach at an all-hands meeting in April 2023, according to the report, but executives opted against sharing the news publicly because no information related to customers or partners had been stolen.

The report said OpenAI executives didn't consider the incident a national security threat because they believed the hacker was a private individual with no known connections to a foreign government.

That assessment led OpenAI not to inform federal law enforcement agencies about the breach.

"As we shared with our board and employees last year, we identified and fixed the underlying security issue and continue to invest in strengthening our security," an OpenAI spokesperson told FOX Business.

The hack raised new fears among some OpenAI employees that an adversarial foreign government such as China could steal the company's AI technology, which could eventually pose a threat to U.S. national security, and it spurred questions about how the company handles its security, the Times reported.

In May, OpenAI said it disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet. The disclosure was the latest incident to raise concerns about potential misuse of AI technology.

OpenAI was among the 16 companies developing AI technologies that pledged at a global meeting in May to develop the technology safely and address concerns raised by regulators around the world.

In a separate announcement in May, OpenAI said it was forming a new safety and security committee to advise the board on how to approach those issues in the company's projects and operations.

The committee will evaluate and develop OpenAI's processes and safeguards and share its recommendations at the end of the 90-day assessment period, the company said. It will then share any adopted recommendations "in a manner that is consistent with safety and security."

OpenAI CEO Sam Altman and board Chair Bret Taylor, as well as board members Adam D'Angelo and Nicole Seligman, will serve on the safety committee along with four of the company's technical and policy experts.

FOX Business' Stephen Sorace and Reuters contributed to this report.
