Artificial Intelligence Narratives: A Global Voices Report

By situating AI in the context of data, we analyzed the narrative relationship between the two, their shared incentives, and the strategies necessary to protect the public interest

Originally published on Global Voices

Image made by Giovana Fleck, used with permission.

This report is part of Data Narratives, a Civic Media Observatory project that aims to identify and understand the discourse on data used for governance, control, and policy in El Salvador, Brazil, Turkey, Sudan, and India. Read more about the project here and see our public dataset.

Powerful actors, governments, and corporations are actively shaping narratives about artificial intelligence (AI) to advance competing visions of society and governance. These narratives help establish what publics believe and what should be considered normal or inevitable about AI deployment in their daily lives — from surveillance to automated decision-making. While public messaging frames AI systems as tools for progress and efficiency, these technologies are increasingly deployed to monitor populations and disempower citizens’ political participation in myriad ways. This AI narrative challenge is made more complex by the many different cultural values, agendas, and concepts that influence how AI is discussed globally. Considering these differences is critical in contexts in which data exacerbates inequities, injustice, or nondemocratic governance. As these systems continue to be adopted by governments with histories of repression, it becomes crucial for civil society organizations to understand and counter AI narratives that legitimize undemocratic applications of these tools.

We built on the groundwork laid by the Unfreedom Monitor to conduct our Data Narratives research into data discourse in five countries that face different threats to democracy: Sudan, El Salvador, India, Brazil, and Turkey. To better understand these countries’ relationships to AI, incentives, and public interest protection strategies, it is helpful to contextualize AI as a data narrative. AI governance inherently involves data governance and vice versa. AI systems rely on vast quantities of data for training and operation, and they gain legibility and value as they are integrated into everyday functions that, in turn, generate the data they require to work.

AI terminology as a narrative and an obstacle

The term artificial intelligence itself is controversial. Conventional definitions of artificial intelligence — such as the one established by the EU AI Act — attempt to provide policymakers with the necessary scope to establish effective governance policies; however, these attempts almost always fall short by inadvertently excluding some systems from oversight because of the variability in the forms and functions of so-called intelligent systems at large. Approaches to defining artificial intelligence further vary by field. This is illustrated by survey data in which computer scientists focus on technical system functions while policymakers focus instead on metaphoric connections between system aspects they believe to be like human thinking and behavior. This demonstrates how even the term artificial intelligence has a narrative. We begin to understand it by accepting the broad uses in popular media alongside government and research-based ones.

Global diversity in the narratives of AI terminology reflects international variance in cultural values and priorities. Many languages ascribe the word “intelligence” to these systems, but cultures vary in terms of their concepts about what intelligence entails. The mere use of the term “intelligence” ascribes human-like qualities to the technology, which can create a false sense of AI as autonomous and powerful rather than a tool developed and controlled by human actors. Anthropomorphic descriptors of AI can obscure the underlying infrastructure and business interests behind its development — and mislead the public by prompting them to generate incorrect inferences about how the technology works and whether it should be trusted in particular use cases. Consequently, AI narratives that utilize anthropomorphic descriptions may prevent public critique of AI systems.

Language that ascribes more intelligence to a system than it actually possesses can be understood as part of the corporate framing narratives that market these tools. Because advanced AI systems, particularly generative AI, require massive computational resources and infrastructure that only a handful of companies can afford, power over AI development has been concentrated in Big Tech companies like Google and Microsoft. This technical dominance gives these companies an advantage in shaping narratives about AI systems, allowing them to suggest that their privileged control over this technology serves the public good because of promises that it will “democratize AI” for everyone. In fact, these models are incapable of democratizing knowledge equally: they cannot distinguish fact from fiction in their outputs well enough to justify even a liberal definition of “knowledge,” and they provide different responses to different people due to the identity information encoded in users’ speech. The popularity of generative AI further expands US corporate influence in other countries, shapes the issues that are raised and the dialogue on social media platforms, and determines what research gets funded.

Framing AI systems as intelligent is further complicated by and intertwined with neighboring narratives. In the US, AI narratives often revolve around opposing themes such as hope and fear, bridging two strong emotions: existential fears and economic aspirations. In either case, they propose that the technology is powerful. These narratives contribute to the hype surrounding AI tools and their potential impact on society.

Many of these framings present AI as an unstoppable and accelerating force. While this narrative can generate excitement and investment in AI research, it can also contribute to a sense of technological determinism and a lack of critical engagement with the consequences of widespread AI adoption. Counter-narratives are many, expanding on the motifs of surveillance, erosion of trust, bias, job impacts, exploitation of labor, high-risk uses, the concentration of power, and environmental impacts, among others.

These narrative frames, combined with the metaphorical language and imagery used to describe AI, contribute to the confusion and lack of public knowledge about the technology. By positioning AI as a transformative, inevitable, and necessary tool for national success, these narratives can shape public opinion and policy decisions, often in ways that prioritize rapid adoption and commercialization.

Main narratives found in our research

The following sections describe the narratives we investigated in each country:

Brazil

The discussion around AI in Brazil is primarily centered on regulatory measures and has gained momentum because Brazil hosted the G20 this year (2024) and conducted a series of preparations and side events involving data governance issues, such as NetMundial+10. The key regulation discussed was Bill 2338/2023, which seeks to establish national standards for the development, implementation, and responsible use of AI systems. Besides this bill, there are also sector-specific legislative proposals, such as those criminalizing pornographic deepfakes, and targeted actions like the 2024 resolutions by the Electoral Superior Court on AI and elections.

In this context, our researcher mapped three essential narratives. The first narrative, Brazil needs to remain at the forefront of regulating new technologies, reflects the desire of Brazil's current left-wing government and members of civil society to return the country to the forefront of regulating new technologies, as it was in 2014 with the “Marco Civil da Internet” (Brazilian Civil Rights Framework for the Internet). The second narrative, Brazil needs to regulate AI to avoid falling behind, also conveys a sense of urgency about regulation; promoted by the conservative right-wing Liberal Party (PL), it prioritizes facilitating a secure business landscape that thrives on AI innovation. The third narrative, Regulating AI hampers innovation in Brazil, advanced by the neoliberal right wing of the country, promotes the idea that regulating AI will impede the development of the AI industry in Brazil.

India

In November 2023, an AI deepfake video of actor Rashmika Mandanna circulating on social media sparked a significant conversation in India about AI regulation. The confirmation of image manipulation led to discussions on the government's role in controlling AI deepfakes and mitigating potential harm. This debate extended to concerns about AI's impact on elections, prompting the government to warn platforms about AI deepfakes and issue an advisory for self-regulation and labeling AI-generated content. These measures followed Prime Minister Narendra Modi's public addresses, highlighting the threats posed by AI deepfakes and their potential harm to citizens.

During the first phase of the discussion, right after the AI deepfake video was published, our researcher mapped two main narratives. The first narrative, The Indian government is committed to making the internet safe for its citizens, advertised the government's general actions in the form of regulation to address disinformation, hate speech, and online harassment. The second narrative, The Indian government's response to AI deepfakes is confused and reactive, asserted by organizations like the Internet Freedom Foundation, questioned the government's reactive measures and considered them to be taken without informed assessment of the different issues involved.

Later, our researcher mapped three additional narratives when the focus switched to elections. The first two narratives, AI deepfakes will be used by India's anti-national actors, and The makers of AI deepfakes and platforms hosting them are the true drivers of election manipulation, proposed by the government, aim to blame opposition parties, social media platforms, and the creators of deepfake videos for AI disinformation, without enacting firm regulation to address it. The third narrative, The solution to AI dis/misinformation in India should go beyond banning and taking down content, also promoted by organizations like the Internet Freedom Foundation, demands a more efficient response from the government that considers political accountability and strengthens the media system.

Sudan

The civil war in Sudan, ongoing since April 2023, has deeply impacted all aspects of Sudanese society. In May 2023, the Sudanese Armed Forces (SAF) used the national TV station to spread disinformation, claiming that a video of Mohamed Hamdan Dagalo (Hemedti), the Rapid Support Forces (RSF) commander, was AI-generated and that Hemedti was dead. However, Hemedti reappeared in July 2023 in a recorded video with his forces, published by the RSF on X (formerly Twitter), which SAF supporters used to bolster their narrative of his death by claiming that “the RSF uses AI deepfakes for military deception.” In January 2024, SAF's credibility was damaged when Hemedti appeared publicly with several African leaders during a tour in the region.

In the middle of this weaponization of AI, our researcher mapped three key narratives. The first narrative, as mentioned above, The RSF uses AI deepfakes for military deception, promoted by SAF and its supporters, aims to discredit RSF by linking them to AI deepfakes and accusing them of deceiving people into thinking that their commander, Hemedti, is still alive. The second narrative, The United Arab Emirates and Israel use AI to pass their agenda in Sudan, also pushed by SAF supporters, argues that the United Arab Emirates used AI to pass its agenda in Sudan to boost RSF while damaging the reputation of the SAF. Israel and the UAE have enhanced cooperation in AI innovation, which has expanded following their peace agreement under the Abraham Accords. The RSF, which has strong ties with the UAE, has sought in the past to establish independent relations with Israel outside official state channels.

The third narrative, The RSF does not use AI deepfakes for military deception, advanced by RSF and its supporters, counters the SAF's accusations and denies using AI deepfakes for military deception in its online operations.

Turkey

In Turkey, hype is a dominant factor when discussing AI, yet AI is often misunderstood and treated as a magical solution. Many public and private sector entities turn to AI to project a tech-savvy, progressive, and objective image, using it as a defense against accusations of bias or backwardness. However, the optimism contrasts sharply with the frustrating or abusive experiences many Turkish people have with AI in their daily lives.

In this clash of perceptions, our researcher mapped two narratives. The first narrative, Artificial intelligence is unbiased and, by itself, can solve many of the Turkish people's problems, advanced by the ruling party, the opposition, and even the country's football federation, portrays AI as always unbiased, objective, and superior to human-made systems, regardless of why/how/where it is used. The second narrative, Artificial Intelligence causes more problems than it solves, is asserted implicitly and showcases the threats of using AI, including exacerbating existing societal issues and creating new ones.

The gap in the conversation

The narratives identified by our researchers also reveal what is absent from the conversation. The most evident case is El Salvador, where no key narratives discussing AI were found. Our research indicates that the data governance topics that thrive in El Salvador's social dialogue are mainly cryptocurrency policies and data protection issues.

Data protection issues are also factors in the discussion around data governance in India, Sudan, and Turkey. Yet in none of those countries are concerns about the leaking or misuse of people's data linked to AI. There seems to be no conversation about the sources of the data used to support AI systems and the implications this might have for people's privacy, even in countries where possible regulations are being discussed. The same happens with discussions around AI's potential impacts on employment: no local narratives debate the threats or opportunities AI can present for people in their workplaces.

The narratives mapped in our research also provide little optimism or clear positive associations with AI. Only in Turkey is AI presented as an effective tool, though it is mainly mystified and used in discourse and as a communication tactic to mask inefficiencies.

The work of our researchers also shows no influential narratives in the societies of the countries studied addressing the possible impacts AI can have on those countries’ military or police capabilities, unlike in the US. Only in Turkey and El Salvador did our researchers find active conversations about possible espionage by their governments. Yet in none of those cases do the mapped narratives connect the issue with AI. The same occurred with human rights violations: no discussions of how AI can prevent or facilitate them were registered.

Our research likewise reveals no in-depth social debate about the opportunity AI presents for authoritarian leaders currently in power or seeking it. There seems to be little discussion about whether AI can exacerbate the sense that there is no indisputable truth, and how that climate of distrust is detrimental to democracies.

Conclusion: AI narrative change

Civil society organizations that want to protect democracy and human rights must gain a better understanding of AI narratives globally. We have now seen the first generative AI-influenced election. Algorithm-driven authoritarianism and the use of AI for repression are broad, encompassing deepfake propaganda, surveillance, content moderation, cyberwar, and AI weapons. Regional focus on specific AI threats and AI policies exists, but broader public narratives about AI’s impacts on democracy and human rights are necessary. The research conducted through the Data Narratives Observatory shows how regional conversations often miss critical connections between AI systems, data governance, and power. There is a need for narrative change and for building narrative infrastructure. Viewing AI as part of a data ecosystem, rather than anthropomorphizing it as intelligence, can help counter harmful and oversimplified AI narratives. It can also help educate the public on data and AI rights, since AI policy depends on data governance. This relationship is especially important when considering the growth of AI and the role AI plays in the surveillance pipeline.

Our research reveals significant gaps in the global discourse on AI. In El Salvador, the social dialogue around data governance is centered on cryptocurrency policies and data protection, with minimal attention to AI. AI deepfakes and their use in disinformation are discussed in countries like Brazil, India, and Sudan, but these rarely address AI's implications for data privacy or employment. Optimism about AI is uncommon, except in Turkey, where AI is promoted as a solution, though often superficially. Discussions on AI's impact on military and police capabilities, human rights, and its potential to support authoritarian regimes or undermine democratic values are notably absent, highlighting the need for more comprehensive and critical engagement with AI's societal impacts worldwide.

Promising emergent AI governance approaches could be strengthened with methods that respect the power of narratives around central issues. Community-oriented data and AI governance initiatives like Connected by Data and Indigenous data sovereignty networks like Te Mana Raraunga demonstrate how collective stewardship can challenge corporate control and state surveillance. There have also been recent efforts such as the Fostering a Federated AI Commons Ecosystem policy briefing, inspired by the work of Coding Rights, which calls on the G20 to support an AI ecosystem that emphasizes task-specific, community-focused AI and aims to curb tech monopolies’ power. The Mozilla Foundation tracks many more data initiatives that empower communities.

These ideas can be powerful, but their success depends on building support within diverse communities. Research on narratives shows they can effectively influence public outcomes, but challenges arise when trying to develop narratives from the grassroots level. Without deliberate narrative change, the dominant AI narratives could intensify, leading to the following outcomes:

  • Corporate narratives that overstate AI's capacity will mislead the public and confuse expectations, making the technology harder to regulate;
  • State narratives will normalize greater surveillance and the loss of human decision-making;
  • Simplistic narratives that polarize AI as either a threat or a solution will distract from realistic risks and benefits;
  • The extractive practices and concentration of power in the AI landscape will remain obfuscated;
  • Alternative visions of community-centered AI development and governance, data stewardship, and AI commons work will struggle to gain traction.

These findings highlight the critical need for expanded research on AI narratives, not only to document them but also to help civil society develop effective counter-narratives and strategies. Documentation is critical to enabling societies to build narrative infrastructures that challenge corporate control and state surveillance. To resist AI-enabled repression, communities must develop a nuanced understanding of how different communities conceptualize and discuss AI, so they can craft messages that resonate locally and contribute to a larger, global movement. We start by understanding existing narratives. We use this knowledge to help communities recognize and resist the misuse of AI systems. And they, in turn, can use this information to advance their alternative visions of AI governance centered on human rights and democratic values.

References

A Is. (2021). Aisforanother.net. https://aisforanother.net/

Article 3: Definitions | EU Artificial Intelligence Act. (2024, June 13). Future of Life Institute. https://artificialintelligenceact.eu/article/3/

Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights | Think Tank | European Parliament. (2024). Europa.eu. https://www.europarl.europa.eu/thinktank/en/document/EXPO_IDA(2024)754450

Aawsat. (2023, April 22). Where did Al-Burhan and “Hemedti” disappear? https://aawsat.com/home/article/4287326/%D8%A3%D9%8A%D9%86-%D9%8A%D8%AE%D8%AA%D9%81%D9%8A-%D8%A7%D9%84%D8%A8%D8%B1%D9%87%D8%A7%D9%86-%D9%88%C2%AB%D8%AD%D9%85%D9%8A%D8%AF%D8%AA%D9%8A%C2%BB%D8%9F

Bryson, J. J. (2022, March 2). Europe Is in Danger of Using the Wrong Definition of AI. Wired. https://www.wired.com/story/artificial-intelligence-regulation-european-union/

Catalysing Community Campaigns. (2024, September 26). Connected By Data. https://connectedbydata.org/projects/2023-catalysing-communities

Center for Preventive Action. (2024, October 3). Civil War in Sudan. Global Conflict Tracker; Council on Foreign Relations. https://www.cfr.org/global-conflict-tracker/conflict/power-struggle-sudan

Celso Pereira, P., & Jungblut, C. (2014, March 25). Câmara aprova Marco Civil da Internet e projeto segue para o Senado [Chamber approves the Brazilian Civil Rights Framework for the Internet and the bill goes to the Senate]. O Globo. https://oglobo.globo.com/politica/camara-aprova-marco-civil-da-internet-projeto-segue-para-senado-11984559

Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: is there an AI story crisis? AI & SOCIETY. https://doi.org/10.1007/s00146-022-01548-2

D’Souza, A. A. (2024, April 24). India's foray into regulating AI. Iapp.org. https://iapp.org/news/a/indias-foray-into-regulating-ai

Edsall, T. B. (2024, June 5). Opinion | Will A.I. Be a Creator or a Destroyer of Worlds? The New York Times. https://www.nytimes.com/2024/06/05/opinion/will-ai-be-a-creator-or-a-destroyer-of-worlds.html

Foodman, J. (2024, August 12). Council Post: Artificial Intelligence Is Changing The World And Your Business. Forbes. https://www.forbes.com/councils/forbesbusinesscouncil/2023/07/24/artificial-intelligence-is-changing-the-world-and-your-business/

Framing AI. (2023, October 12). Rootcause. https://rootcause.global/framing-ai/

Funding Narrative Change, An Assessment and Framework. (2022, September 30). Convergence Partnership. https://convergencepartnership.org/publication/funding-narrative-change-an-assessment-and-framework/

Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: an Open Letter. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Grossman, G. (2023, October 15). Smarter than humans in 5 years? The breakneck pace of AI. VentureBeat. https://venturebeat.com/ai/smarter-than-humans-in-5-years-the-breakneck-pace-of-ai/

Hacking the patriarchy — Coding Rights. (2024, September 4). Coding Rights. https://codingrights.org/en/

Halina, M., & Shevlin, H. (2019). Apply rich psychological terms in AI with care. Cam.ac.uk. https://doi.org/10.17863/CAM.37897

Hennessey, Z. (2023). Israel and the UAE join forces to accelerate AI innovation in Abu Dhabi. The Jerusalem Post. https://www.jpost.com/business-and-innovation/article-742509

İspir, E. (2023, March 29). Yapay Zekayla Müstehcen İçeriklerini Yapıp, Taciz Ettiler! “Sonuna Kadar Peşindeyim.” [They made explicit content of her with AI and harassed her! “I will pursue this to the end.”]. Onedio. https://onedio.com/haber/yapay-zekayla-mustehcen-iceriklerini-yapip-taciz-ettiler-sonuna-kadar-pesindeyim-1137454

Jain, R. (2024, May 9). ChatGPT Creator Sam Altman Feels It's A ‘Massive, Massive Issue’ That We Don't Take AI's Threat To Jobs And Economy ‘Seriously Enough’. Benzinga. https://www.benzinga.com/news/24/05/38725610/chatgpt-creator-sam-altman-feels-its-a-massive-massive-issue-that-we-dont-take-ais-threat-to-jobs-an

Javaid, U. (2024, May 31). Democratizing AI: Why Opening Wider Access To AI Is Vital. Forbes. https://www.forbes.com/councils/forbestechcouncil/2024/05/31/democratizing-ai-why-opening-wider-access-to-ai-is-vital/

Kak, A., & Myers West, S. (Eds.). (2024, March). AI Nationalism(s): Global Industrial Policy Approaches to AI. AI Now Institute. https://ainowinstitute.org/ai-nationalisms

Kak, A., Myers West, S., & Whittaker, M. (2023, December 5). Make no mistake—AI is owned by Big Tech. MIT Technology Review. https://www.technologyreview.com/2023/12/05/1084393/make-no-mistake-ai-is-owned-by-big-tech/

Kalluri, Pratyusha Ria, Agnew, W., Cheng, M., Owens, K., Soldaini, L., & Birhane, A. (2023, October 17). The Surveillance AI Pipeline. ArXiv.org. https://arxiv.org/abs/2309.15084

Khanal, S., Zhang, H., & Taeihagh, A. (2024, March 27). Why and How Is the Power of Big Tech Increasing in the Policy Process? The Case of Generative AI. Policy & Society (Print). https://doi.org/10.1093/polsoc/puae012

Krafft, P. M., Young, M., Katell, M., Huang, K., & Bugingo, G. (2020, February 7). Defining AI in Policy versus Practice. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3375627.3375835

Lakhani, K. (2023, August 4). AI won’t replace humans — but humans with AI will replace humans without AI. Harvard Business Review. https://hbr.org/2023/08/ai-wont-replace-humans-but-humans-with-ai-will-replace-humans-without-ai

Magally, N., & Younis, A. (2021, June 26). Mossad Meetings with Hemedti Stir Anger in Sudan. Aawsat.com. https://english.aawsat.com/home/article/3048136/mossad-meetings-hemedti-stir-anger-sudan

Manson, K. (2024, February 28). AI Warfare Is Already Here. Bloomberg.com. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/

Markelius, A., Wright, C., Kuiper, J., Delille, N., & Kuo, Y.-T. (2024, April 2). The mechanisms of AI hype and its planetary and social costs. AI and Ethics (Print). https://doi.org/10.1007/s43681-024-00461-2

Mozilla Foundation. (2021). Research Collection: Data for Empowerment. Mozilla Foundation. https://foundation.mozilla.org/en/data-futures-lab/data-for-empowerment/

Microsoft. (2024). AI For Good Lab. Microsoft Research. https://www.microsoft.com/en-us/research/group/ai-for-good-research-lab/

Naprys, E. (2023, November 27). China vs US: who’s winning the race for AI supremacy | Cybernews. https://cybernews.com/tech/china-usa-artificial-intelligence-race/

Narrative Power. (n.d.). Narrative Power. https://narrative.colorofchange.org/

Novak, M. (2023, November 5). Viral Video Of Actress Rashmika Mandanna Actually AI Deepfake. Forbes. https://www.forbes.com/sites/mattnovak/2023/11/05/viral-video-of-actress-rashmika-mandanna-actually-ai-deepfake/

Placani, A. (2024, February 5). Anthropomorphism in AI: hype and fallacy. AI and Ethics, 4. https://doi.org/10.1007/s43681-024-00419-4

Read, J. (2024, March 1). Nurturing a Workforce Ready for AI Integration in Manufacturing · EMSNow. EMSNow. https://www.emsnow.com/nurturing-a-workforce-ready-for-ai-integration-in-manufacturing/

Sky News Arabia. (2023, July 28). Video: This is what Hemedti said in his first appearance in months. https://www.skynewsarabia.com/middle-east/1640960-%D9%81%D9%8A%D8%AF%D9%8A%D9%88-%D9%87%D8%B0%D8%A7-%D9%82%D8%A7%D9%84%D9%87-%D8%AD%D9%85%D9%8A%D8%AF%D8%AA%D9%8A-%D8%A7%D9%94%D9%88%D9%84-%D8%B8%D9%87%D9%88%D8%B1-%D8%B4%D9%87%D9%88%D8%B1

Te Mana Raraunga. (n.d.). Te Mana Raraunga. https://www.temanararaunga.maori.nz/

PM Modi warns against deepfakes; calls on media to educate people on misinformation. (2023, November 17). The Hindu. https://www.thehindu.com/news/national/pm-modi-warns-against-deepfakes-calls-on-media-to-educate-people-on-misinformation/article67543869.ece

TSE – Tribunal Superior Eleitoral. (2024, March 1). Publicadas resoluções do TSE com regras para as Eleições 2024 [TSE resolutions with rules for the 2024 elections published]. Justiça Eleitoral. https://www.tse.jus.br/comunicacao/noticias/2024/Marco/eleicoes-2024-publicadas-resolucoes-do-tse-com-regras-para-o-pleito

Varon, J. (n.d.). T20 POLICY BRIEFING Fostering a Federated AI Commons ecosystem 1 TF05 -Inclusive digital transformation. Subtopic 5.5 -Challenges, Opportunities, and Governance of Artificial Intelligence. Retrieved December 19, 2024, from https://codingrights.org/docs/Federated_AI_Commons_ecosystem_T20Policybriefing.pdf

Vengattil, M. & Kalra, A. (2023, November 24). India warns Facebook, YouTube to enforce rules to deter deepfakes – sources. Reuters. https://www.reuters.com/world/india/india-warns-facebook-youtube-enforce-rules-deter-deepfakes-sources-2023-11-24/

Yahoo Finance Video. (2023, October 21). Google is “democratising AI” for non-expert business app creation | The Crypto Mile. Yahoo Finance. https://finance.yahoo.com/video/google-democratising-ai-non-expert-113148816.html