Category: Artificial Intelligence
Oct 28
The Gorilla Problem
Artificial Intelligence will kill us! If you listen to the unusual alliance of famous scientists, philosophers, and company founders, AI is the biggest threat to humanity since nuclear power. Stephen Hawking warned of the dangers shortly before his death. Nick Bostrom, philosophy professor at Oxford and author of the influential book Superintelligence, lists a number of examples meant to demonstrate the dangers of AI. Even Elon Musk, CEO of Tesla and SpaceX and hardly known as an enemy of new technologies, warns of the dangers. Max Tegmark, physicist, AI researcher, and co-founder of the Asilomar conference on AI, describes in his book Life 3.0 multiple scenarios of how AI could impact humanity, and the majority of them are pessimistic. Here are Tegmark's AI scenarios:
- Libertarian Utopia
- Benevolent Dictator
- Egalitarian Utopia
- Gatekeeper
- Protector God
- Enslaved God
- Conqueror
- Descendant
- Zookeeper
- 1984
- Reversion
- Self-destruction
AI skeptics even signal with their book titles that they have no doubt about how this will end. James Barrat doesn't mince words in his book Our Final Invention: Artificial Intelligence and the End of the Human Era. But is any of it true? Or is it an overreaction?
We can't answer that with ultimate certainty. AI is not the first technology that alarmists have predicted will harm us. I can picture the skeptics holding the first fire in their hands and warning their fellow tribe members of the imminent danger. One of the most recent examples was nuclear power, and there, as we had to learn, the warnings were entirely justified.
How progress affects the very life form that set it in motion is illustrated by the gorilla problem. And no, this is not the equally well-known and popular "gorilla in the room" experiment, in which test subjects watch players passing a basketball and count the number of passes while completely missing the gorilla walking through the room at the same time.
As Charles Darwin showed, humans and the other primates share common ancestors. We humans are, in a sense, nothing more than a more advanced version of gorillas. The gorillas' lineage gave rise to a species that became smarter and created tools that made it superior to gorillas. While humans evolved rapidly, gorillas remained at the same technological and biological level.
And exactly that creates a problem for the gorillas. Humans have not used their intelligence to the advantage of all species; they have used it, deliberately or not, to kill gorillas and drive them toward extinction. In this respect, the dangers the AI skeptics warn us about are real: a technologically advanced civilization tends to destroy a less developed one.
What the gorilla problem describes is loss of control. Gorillas 'created' us, but we humans have taken control, and the gorillas have lost it. We are now creating AI and risk losing control over it. To make AI that works, we have to hand over some control. But how much control is that? Do we cede so much control to AI that we turn from captains into passengers on our own ship?
We are still far from that point, because it would require AI to know how to steer a society, maintain it, and advance it. Not even we know how to do that. The AI researcher Stuart Russell argues that philosophy has so far paid too little attention to the uncertainty in our knowledge of our own goals. We differ in our understanding of what a fair society is and how to get there, and we know little about either the goals or the methods.
In the past, we focused on the dangers of weapons of mass destruction, especially atomic, biological, and chemical weapons, which are capable of destroying humanity. In a world that relies on computers, software, and machines, cyber weapons now join that list. Malware and AI can sabotage power grids, water supplies, hospitals, and other essential systems. They are a new form of threat to us.
A superintelligence that dominates or destroys us always makes a good story for movies and novels. We love good stories, but whether AI will make us humans history is one of those questions we cannot answer with ultimate certainty today.
This article was also published in German.