Major Tech Companies Are Quietly Warning of A.I. Risks in SEC Filings

From chipmakers like Nvidia to Big Tech behemoths like Meta and Microsoft, the largest players across A.I. are sounding the alarm on the technology’s risks in SEC filings.

Rising security concerns. Biased datasets. A surge in copyright infringement claims. These are just some of the scenarios major tech companies have put forward as potential risks of A.I. in recent filings with the U.S. Securities and Exchange Commission (SEC). Even as these companies race to capitalize on the A.I. revolution, they are also cautioning against its harms in the “risk factor” sections of SEC filings, which are often used to guard against shareholder lawsuits, as first reported by Bloomberg.

A.I. has undoubtedly been a boon for tech companies. Nvidia, which dominates the market for graphics processing units (GPUs) powering A.I. applications, has seen its market cap skyrocket past $3 trillion this year to become the world’s most valuable public company. On the software side, leaders like Microsoft (MSFT)—the largest corporate investor in OpenAI—have paved the way by integrating A.I. across their product offerings. But that doesn’t mean these companies aren’t aware of its many risks, which range from increased regulatory restrictions to the generation of offensive content.

From chipmakers like AMD (AMD) to Big Tech behemoths like Meta (META) and Google (GOOGL), the largest players across A.I. are taking time in quarterly and annual reports to warn investors about how A.I. could negatively impact their businesses. Here’s what major tech companies are saying about the risks presented by the nascent technology:

Meta: Misinformation and improper use from third parties

Meta’s A.I. ambitions include both large language models and an in-house A.I. chip. But the company has also warned of the technology’s capacity for disruption. In a recent SEC filing, the company said its A.I.-related efforts expose it to the possibility of misinformation and deepfakes, including those related to political elections. It also listed harmful or illegal content, bias, discrimination and toxicity among the concerning byproducts that could arise from its A.I. pursuits.

In addition, Meta noted that its development of A.I. made available to third parties means it may be unable to control how such tools are used. “We cannot guarantee that third parties will not use such A.I. technologies for improper purposes,” Meta said, noting the potential for the dissemination of illegal, inaccurate, defamatory or harmful content that could further discrimination, cybersecurity attacks and data privacy violations.

Microsoft: Copyright claims and security risks

Microsoft, which recently launched a suite of A.I. PCs, has also raised a number of potential risks related to the technology in recent SEC filings. Flawed A.I. algorithms or training methodologies, biased or insufficient datasets, offensive or illegal content generated by A.I., and ineffective A.I. deployment by Microsoft itself could all cause harm, according to the company.

The company additionally discussed the potential for copyright infringement claims related to A.I. training and output and possible impacts stemming from regulations like the European Union’s A.I. Act and, in the U.S., the Biden administration’s A.I. Executive Order, both of which aim to develop A.I. responsibly while mitigating harm.

Not to mention the security risks associated with generative A.I. features, which the company said “may be susceptible to unanticipated security threats from sophisticated adversaries.” Microsoft noted that users could use A.I. to impersonate other people or organizations or disseminate misleading information to manipulate the opinions of Microsoft customers.

Google: Societal harm and exploitation from bad actors

Google’s parent company, Alphabet (GOOGL), meanwhile, said in an SEC report that A.I. uses “will present ethical issues and may have broad effects on society.” While the company has long pursued an A.I. push—including acquiring A.I. startup DeepMind in 2014—it acknowledged that it may be unable to resolve A.I.-related issues affecting “human rights, privacy, employment or other social concerns” before they arise, which could lead to lawsuits or increased regulatory scrutiny.

The company also warned that new A.I. features could become susceptible to “unanticipated security risks” as Google continues to understand A.I. protection methods, adding that the rising prevalence of A.I. in its offerings could allow bad actors to pursue new methods of abuse.

Amazon: Negative public perceptions

Amazon (AMZN), too, has bet big on the new technology through a series of A.I. launches. However, public perceptions of social and ethical issues concerning its development and use of A.I. could affect the e-commerce and cloud giant’s finances, Amazon conceded in a recent SEC report. The company also said its use of A.I. could lead to a rise in infringement claims, noting the unauthorized use of “third-party technology or content” as a risk factor.

Adobe: Workforce disruption and rising competition

While Adobe (ADBE) may have embraced the power of A.I. when it comes to image-generation tools, the company has warned that A.I. offerings could harm its business if it fails to adapt. The technology could disrupt workforce needs, said Adobe, as A.I. opens new approaches to marketing, content creation and document interaction—changes that could reshape the industries in which Adobe operates.

The company is also expecting a boost of competition in the rapidly evolving A.I. market. “For example, we face increasing competition from companies offering text-to-image generative A.I. technology that may compete directly with our own creative offerings,” said Adobe in a filing, adding that it could experience reduced sales if it is not able to compete effectively.

Nvidia: Unknown long-term prospects and growing regulatory scrutiny

Despite the chipmaker’s dominance in the A.I. hardware market, Nvidia isn’t overly confident about maintaining this momentum. “Recent technologies, such as generative A.I. models, have emerged, and while they have driven increased demand for Data Center [products], the long-term trajectory is unknown,” said Nvidia in a recent SEC report.

Growing concern around A.I. risks has also led to a rise in regulatory scrutiny that could impact the company’s A.I. offerings, Nvidia said, noting that it has received requests for information from regulators across the E.U., the U.K. and China regarding its sales of GPUs and expects to receive additional requests in the future.

AMD: Inability to compete and talent poaching

AMD, another major player in the A.I. chipmaking industry, has also capitalized on the A.I. revolution with rising demand for its GPUs to power the data centers running A.I. models. But like Nvidia, it isn’t confident in the technology’s staying power, stating in SEC filings that “the long-term trajectory of such generative A.I. solutions is unknown.”

The company additionally noted the “intense” competition from A.I. accelerator competitors like Nvidia and said it simply may not be able to meet the adaptation capabilities of rivals, especially given the A.I. market’s inclination towards rapid change, product obsolescence, evolving trends and new product introductions.

AMD is also concerned about its ability to attract and retain A.I. talent. “Competition for highly skilled executives and employees in the technology industry, especially in the areas of A.I. and machine learning, is intense,” said the company, adding that its “competitors have targeted individuals in our organization that have desired skills and experience.”