‘AI-powered Weapons Depersonalise the Violence, Making It Easier for the Military to Approve More Destruction’

By CIVICUS
Nov 22 2024 (IPS)

CIVICUS discusses the dangers arising from military uses of artificial intelligence (AI) with Sophia Goodfriend, Post-Doctoral Fellow at Harvard Kennedy School’s Middle East Initiative.

The global rise of AI has raised concerns about its impact on human rights, particularly for excluded groups, with controversial uses ranging from domestic policing and surveillance to ‘kill lists’ such as those used by Israel to identify targets for missile strikes. Digital rights groups are calling for the development of an AI governance framework that prioritises human rights and bans the most dangerous uses of AI. While recent United Nations (UN) resolutions recognise the human rights risks of AI, more decisive action is needed.


Why should we be concerned about AI and its current and potential uses?

AI is being rapidly integrated into military operations around the world, particularly in weapons systems, intelligence gathering and decision-making. Its increasing autonomy reduces human oversight, raising serious concerns, once confined to science fiction, about machines making life-and-death decisions without meaningful human intervention.

AI-based technologies such as drones, automated weapons and advanced targeting systems are now part of military arsenals. Militaries' increasing reliance on these systems is deeply concerning because they remain largely unregulated under international law. The level of surveillance they depend on also violates privacy protections under international law and many national civil rights laws.

The rapid development and deployment of these technologies is outpacing regulation, leaving the public largely unaware of their implications. Without proper oversight, AI could be misused in ways that cause widespread harm and evade accountability. We urgently need to regulate the military use of AI and ensure it is consistent with international law and humanitarian principles.

In addition, faulty or biased data can lead to devastating mistakes, raising serious ethical and legal questions. And the decisions made by these systems can undermine the principles of proportionality and distinction in warfare, putting civilian lives at risk.

What’s an example of how AI is currently being used?

The Israeli military is using AI-assisted targeting systems to identify and strike targets in Gaza. These systems analyse huge amounts of data collected through drones, satellites, surveillance cameras, social media and phone hacks to identify potential targets, locate them and decide where and when people should be killed.

AI-generated ‘kill lists’ raise serious concerns. Flawed or biased data has already led to devastating mistakes, with journalists and humanitarian workers killed in strikes. There have also been allegations that the military has expanded its definition of who or what constitutes a valid target, allowing attacks on people or places that may not meet the standards set by international law.

These systems operate at unprecedented speed and scale, generating an enormous number of targets and creating the potential for widespread destruction without thorough oversight. Soldiers operating in Gaza have as little as 20 seconds to approve targets that include not only Hamas militants but also people who wouldn't be considered valid military targets under the international laws of war and human rights standards.

What does this mean for moral responsibility over the damage caused?

AI-assisted targeting technologies such as the Lavender system are not fully autonomous. They still require human oversight. This is a critical point because these technologies are only as destructive as the people in charge. It all depends on the decisions made by military leaders, and these decisions can either comply with or violate international human rights law.

At the same time, the use of machines to target and destroy can depersonalise violence, making it easier for military personnel to authorise more destruction. By outsourcing decision-making to AI, there’s a risk of abdicating moral responsibility. This technological approach makes military action seem more efficient and rational, which can help justify each bombing with a seemingly logical rationale, but it also dehumanises the civilian casualties and widespread devastation that follow.

Are current AI governance frameworks sufficient to protect human rights?

The short answer is no: current AI governance frameworks fall short in protecting human rights, particularly in military applications. While most states agree that AI-driven weapons – from fully autonomous to AI-assisted ones – should comply with international human rights law, there’s no global framework to ensure this happens.

This has led to calls for more comprehensive and enforceable rules, and there have been some positive steps. For example, civil society groups and researchers successfully pushed for a ban on fully autonomous weapons under the UN Convention on Certain Conventional Weapons, supported by over 100 states. As a result, the UN Secretary-General has called for a legally binding treaty to be adopted by 2026 to completely ban fully autonomous weapons, which are powered by AI and operate without human oversight.

The European Union (EU) has also taken action through its AI Act, which bans some harmful applications such as social scoring systems that rate people based on their social behaviour. However, the Act excludes military uses from its scope, so the EU still lacks specific rules for military AI.

Organisations such as the Future of Life Institute, Human Rights Watch and Stop Killer Robots have been instrumental in pushing for change. But they’re facing growing challenges as Silicon Valley tech CEOs and venture capitalists push for faster AI development with fewer regulations. This is worrying, as these powerful figures will now have more influence over AI policy under a new Trump administration.

What role should AI companies play in ensuring compliance with human rights principles?

Companies have a critical role to play. In recent years, many leading companies, such as Amazon, Google, Microsoft and OpenAI, have made public statements about their commitment to human rights. OpenAI, for example, has called for the creation of a watchdog similar to the International Atomic Energy Agency, and its founders have pledged not to allow their technology to be used for military purposes. Amazon, Google and Microsoft also have acceptable use policies, which they claim ensure their technologies are used in accordance with human rights principles.

But in practice, these policies often fall short, particularly when it comes to military applications. Despite their claims, many of these companies have sold their technologies to military forces, and the extent of their involvement in military AI development is often unclear. Just a few weeks ago, The Intercept reported that the US military's Africa Command had purchased OpenAI software through Microsoft. We also know the Israeli military used Google Cloud services to target bombs in Gaza and Amazon Web Services to store classified surveillance data on civilians in the Palestinian territories.

This has sparked protests within the companies involved, with workers staging walkouts and demanding greater transparency and accountability. While these protests are important, AI companies can ultimately only do so much to ensure their technologies are used ethically. We need stronger, more comprehensive international laws on the military use of AI, and governments must take responsibility for ensuring these laws are enforced at the national level.

At the same time, many tech CEOs, such as Elon Musk, have moved away from their previous commitment to human rights and are more aligned with right-wing political leaders like Trump. Some CEOs, such as Peter Thiel of PayPal and Alex Karp of Palantir Technologies, argue that private companies need to work closely with the military to maintain US technological superiority. This has created tensions between human rights advocates and tech giants, highlighting the need for stronger regulatory frameworks to hold these companies accountable and prevent AI from being used in ways that undermine human rights.

SEE ALSO
Human rights take a backseat in AI regulation, CIVICUS Lens, 16.Jan.2024
AI: ‘The biggest challenges are the biases and lack of transparency of algorithms’, interview with Humanitarian OpenStreetMap Team, 24.Aug.2023
AI regulation: ‘There must be a balance between promoting innovation and protecting rights’, interview with Nadia Benaissa, 25.Jul.2023
