Rant mode on:
Whenever Hawking blurts something out, the mass media spread it around immediately. He may be fine on black holes, but when it comes to global risks his statements are not merely false but, one could say, harmful.
So, today he said that in the millennia to come we will face the threats of artificial viruses and nuclear war.
This statement pushes all these problems out to roughly the same distance as the nearest black hole.
In fact, both nuclear war and artificial viruses are realistic right now and could be used within our lifetime with a probability in the tens of percent.
Feel the difference between the chance that an artificial flu virus exterminates 90% of the population within 5 years (the rest being finished off by other viruses) and speculation about dangers thousands of years away.
The first is mobilizing; the second invites comfortable relaxation.
He said: ‘The chances of a catastrophe on Earth this year are rather low. However, they grow with time, so this will undoubtedly happen within the next one thousand or ten thousand years.’
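His ‘grow with time’ is just cumulative probability, and the same arithmetic cuts both ways. A minimal sketch, where the annual probability p is purely an illustrative assumption: if a man-made catastrophe has probability p in any given year, the probability of at least one within N years is

P = 1 - (1 - p)^N.

Even a modest p = 0.5% per year gives P ≈ 22% over 50 years and P ≈ 99% over 1000 years. The very reasoning that makes a millennium-scale catastrophe ‘near certain’ already puts the risk within our lifetime in the tens of percent.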
The scientist believes the catastrophe will be the result of human activity: people could be destroyed by a nuclear disaster or by the spread of an artificial virus.
However, according to the physicist, mankind can still save itself; to that end, it must colonize other planets.
Reportedly, Stephen Hawking earlier stated that artificial intelligence could surpass human intelligence within as little as 100 years.
The claim that migration to other planets automatically means salvation is also false.
What catastrophe could a Mars colony actually save us from? Cut off from supplies, it would die off on its own.
If a world war broke out, nuclear missiles would reach it too.
In a slow global pandemic, people would carry the disease there, just as they carry HIV today or used to carry plague on ships. If a hostile AI appeared, it would reach Mars instantly through communication channels. Even gray goo could drift from one planet to another. And even if Earth were hit by a 20-km asteroid, the amount of debris thrown into space would be so great that it would reach Mars and fall there as a meteor shower.
I understand that simple solutions are tempting, and a Mars colony is a romantic idea, but its net usefulness would be negative.
Even if we learned to build starships travelling at near-light speed, they would above all make a perfect kinetic weapon: the collision of such a starship with a planet would mean the death of that planet's biosphere.
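A rough back-of-the-envelope check, with the ship's mass and speed as purely illustrative assumptions: the relativistic kinetic energy of a moving body is

E = (γ - 1) m c^2, where γ = 1 / sqrt(1 - v^2/c^2).

For a 1,000-tonne ship (m = 10^6 kg) at v = 0.9c, γ ≈ 2.29, so E ≈ 1.3 × 9×10^22 J ≈ 1.2×10^23 J, that is, tens of millions of megatons of TNT, roughly the scale of the Chicxulub impact.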
Finally, a few words about AI. Why exactly 100 years? When talking about risks, we have to consider the lower bound of the estimate, not the median. And the lower bound of the estimated time to create dangerous AI is 5 to 15 years, not 100.
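To put this in hedged numbers (the distribution below is purely an illustrative assumption): what matters for preparation is not the median of the arrival time T but the earliest moment at which the risk stops being negligible,

t* = min { t : P(T ≤ t) ≥ ε }.

A forecast can have a median of 100 years and still assign, say, a 10% probability to arrival within 10 to 15 years; with stakes this high, ε = 10% is already unacceptable, so the planning horizon is set by t*, not by the median.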
Rant mode off