How Do Big Tech’s A.I. Chips Stack Up Against Nvidia’s Dominance?

[Image: Nvidia CEO Jensen Huang is well aware of his company’s vulnerability at the top. Credit: Justin Sullivan/Getty Images]

In one of the strongest indicators yet of A.I.’s impact on the stock market, Nvidia earlier this week surpassed tech giants Microsoft (MSFT) and Apple (AAPL) to become the most valuable public company in the U.S. The chipmaker’s market cap of more than $3.3 trillion is just the latest notch in a rapid rise to success for the company, which was valued at around $420 billion only two years ago. Nvidia’s rally, driven by its dominance in graphics processing units (GPUs) that power numerous A.I. products, has also made the company’s CEO Jensen Huang the 11th wealthiest person in the world with an estimated net worth of $121.4 billion.

Nvidia’s chips are in such demand that the chipmaker has struggled to keep supply apace. It generated more than $14.8 billion in profit in the first quarter, a 628 percent increase from the year prior. The sustainability of its market dominance, however, faces threats from rivals, many of which are currently its largest customers.

Meta (META), Microsoft, Amazon (AMZN) and Google (GOOGL), all of which rely upon Nvidia’s H100 GPUs, reportedly account for nearly 40 percent of Nvidia’s revenue. But the four companies are also introducing their own A.I. chips in an attempt to decrease dependency and costs in the long term.

Meta’s Artemis

In April, Meta announced plans to develop an in-house A.I. chip that would serve as a second-generation version of its Meta Training and Inference Accelerator (MTIA), the first version of which launched last year. The new chip is internally known as “Artemis.”

Microsoft’s Maia 100

Microsoft, meanwhile, announced its first A.I. chip, the Azure Maia 100, in November. The chip is currently being tested on Microsoft’s Bing A.I. chatbot, GitHub Copilot and OpenAI’s GPT-3.5-Turbo large language model. The company said it is already developing a second-generation version of the chip.

Google’s Trillium

Google has for years been using in-house A.I. chips, called tensor processing units (TPUs), to power A.I. models. The tech company announced its most recent chip, named Trillium, in May. Slated to become available later this year, Trillium reportedly boasts computing performance nearly five times faster than that of its predecessor, the TPU v5e. Companies using Google’s TPUs to power A.I. features include Assembly AI, Hugging Face and Anthropic.

Amazon’s Trainium

Anthropic additionally plans to use chips created by Amazon, which in March concluded a $4 billion investment in the startup. Amazon initially launched its Inferentia A.I. chips in 2018 and, two years later, announced Trainium, a chip designed to train A.I. systems.

AMD and Intel are catching up

It isn’t just Big Tech companies—traditional chipmakers like AMD (AMD) and Intel are also putting pressure on Nvidia. AMD earlier this month revealed plans to make its next-generation A.I. chip available this year, with new chips rolled out on an annual basis thereafter. Intel CEO Pat Gelsinger recently described his company’s Gaudi accelerators as significantly more cost-effective than Nvidia’s offerings.

Huang appears to be well aware of his company’s vulnerability at the top. “There are no companies that are assured survival,” said the CEO in November. “If you don’t think you are in peril, that’s probably because you have your head in the sand,” he added.
