Our future AI dystopia is inevitable

By the time it becomes clear that we should have acted, it will of course be too late.

For the average person, the notion of artificial intelligence may feel like a combination of distant ideas tainted by wild claims found online and imagery from movies and TV. Still, we are all familiar with the prediction that artificial intelligence may one day in the future destroy society as we know it. While it is difficult to predict exactly what problems AI will create for us, it is a near certainty that AI will have catastrophic consequences and that we won’t do anything to stop it.

Sometime in the near future, we will develop artificial intelligence capable of self-improvement – referred to as a “seed-AI” by Nick Bostrom. This AI will be able to improve itself or create more intelligent systems. Because such AI would quickly become better at improving AI than humans, iterations will soon reach something often referred to as superintelligence: a system that possesses intelligence far beyond anything humanity is capable of. 

Our cognitive capacities are only modestly greater than those of the smartest non-human animals. That humble edge gave us all we needed to completely dominate the planet. Superintelligent systems will be so much more intelligent than us that we will find their motivations impossible to understand and the systems themselves nearly impossible to control.

Precisely because it would be so much smarter than us, it would be able to think through billions of possibilities and the implications of those possibilities in the blink of an eye. It would be able to strategize and problem solve at a much higher level than the smartest humans. If it somehow decided to dethrone us from world domination, the resistance it would feel from us would be like the resistance we feel when we destroy an ant colony. 

As many experts and philosophers have noted, preventing this outcome while still using AI for its intended purpose is a tall order. Perhaps confining our creation to a single computer and refusing to interact with it would ensure that it can't destroy us. But then what would be the point of creating it? If we create it, we will use it, and by using it we give it a means to enact its will.

This superintelligence need not have any desire to destroy humanity. It may not care about us at all. In that case, the fear is that its goal of keeping a nice, even lawn involves running over the anthill.

This all assumes certain things about the increase of processing power over time, but those assumptions are not especially controversial. What is controversial is whether these AIs will have consciousness or awareness. While that is an interesting question, consciousness is not a prerequisite for a system to act intelligently or cause harm. Consider how a Tesla's self-driving function appears to act intelligently, and is capable of harming people, despite not being conscious.

If artificial intelligence is such an existential threat, why don’t we just stop developing AI? The short answer is that we can’t stop. 

The possible economic and humanitarian benefits are too great a temptation to ignore, not just for private companies but also for world governments and militaries. A highly advanced AI would offer a decisive edge to whichever military controls one first. The economic opportunities are endless: new materials, new cures, new technologies, all of which would pay handsomely for whichever companies take the initiative.

Even if we decided to legally restrict the development of AI, the technology and knowledge required to develop and advance the field are too widespread to reliably control. The cost of such a project is also quite small and shrinking over time. There will always be evil people motivated by power, or even by destruction for its own sake. They will have the ability to create AI, and nothing will force them to integrate failsafes.

We are not good at addressing future harms; climate change is a perfect example. Right now, the harms of climate change are not extreme enough to compel us to incur the high costs of combating it and forgo the vast benefits of continued pollution. Catastrophic problems like climate change and superintelligence start out small and gradually increase in severity.

Right now, it’s just a bit warmer and no one cares enough to create deepfakes of me. By the time it becomes clear that we should have acted, it will of course be too late. Pet your children and kiss your dogs while you still can.

Rafael Perez is a doctoral candidate in philosophy at the University of Rochester. You can reach him at rafaelperezocregister@gmail.com.
