Google Chief Scientist Jeff Dean Discusses A.I.’s Energy Crisis and Climate Goal

[Image: Google (GOOGL) chief scientist Jeff Dean. Caption: Jeff Dean leads A.I. efforts across Google DeepMind and Google Research. Credit: THOMAS SAMSON/AFP via Getty Images]

Across the tech industry, companies are lauding the potential of A.I. to transform productivity, business operations and society itself. But the race to develop the emerging technology is also consuming massive amounts of energy, much of it through the data centers that power A.I. models. Of the total electricity used by data centers, “A.I. is a very small portion right now but growing fast,” said Jeff Dean, chief scientist of Google DeepMind and Google Research, while speaking today (July 16) at the Fortune Brainstorm Tech 2024 conference in Park City, Utah.

Data centers are expected to double their power consumption, accounting for up to 9 percent of the nation’s electricity by 2030, according to the Electric Power Research Institute. Meanwhile, A.I. applications currently account for around 10 percent to 20 percent of data center electricity—a figure that is expected to grow rapidly.

Google, a leader in the A.I. race, has seen its emissions skyrocket as it builds more data centers, with its greenhouse gas emissions increasing by 48 percent between 2019 and 2023 despite the company’s pledge to achieve net-zero emissions by 2030. Google’s goal to power its data centers with clean energy isn’t “necessarily a linear thing,” according to Dean, who said that several of the company’s collaborations with clean energy providers aren’t expected to come to fruition for another three to five years.

In addition to using graphics processing units (GPUs) from Nvidia, Google produces its own in-house A.I. chips known as tensor processing units (TPUs), which it says are more energy efficient. “When we first introduced the [TPU] in 2016, that was quite a big advance in energy efficiency,” Dean said, noting that it was about 30 to 80 times more efficient than rival chips at the time. “We’ve now been through multiple successive generations of GPUs that have also made pretty significant improvements in energy efficiency.”

Dean, 55, has worked at Google for 25 years and currently oversees both Google DeepMind and Google Research. The two A.I. research teams merged last year to “have a better set of ideas to build on” and to “pool the compute,” he said.

Can Google solve A.I.’s hallucination issues?

Google’s current A.I. priorities include solving the hallucination problem—A.I. models producing false content. “I do think we’re making progress; it’s a difficult problem,” Dean admitted. However, Google’s latest Gemini models have shown promise in reducing hallucinations when responding to information directly provided by users, he said.

Dean’s team is additionally working on initiatives like Astra, a multimodal A.I. assistant it unveiled in May. The company hopes to test the product by the end of the year. “The ability to combine Gemini models with models that actually have agency and can perceive the world around you in a multimodal way is going to be quite powerful,” he said, adding that Google is taking care to ensure “this technology is ready and that it doesn’t have unforeseen consequences, which is why we’ll roll it out first to a smaller set of initial test users.”

Predictions on the unforeseen consequences of generative A.I. have ranged from a proliferation of misinformation to concerns regarding an existential threat to humanity. Dean said he is “somewhat in the middle” along a spectrum of optimism and pessimism regarding the technology’s risks, citing its potential to transform fields like education and healthcare.

“It’s obviously hard to predict the future in a very fast-moving field,” he said. “I think probably the most extreme viewpoints on either side are not the most likely outcomes.”
