Stephen Hawking et al. present an intriguing artificial intelligence metaphor:
With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.
Where might AI go?
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.
Fascinating. I for one welcome our new computer overlords, since they will have been made in America, or at least on Earth.
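As an aside on the quoted passage: Good's claim that machines "could repeatedly improve their design even further" is, at bottom, a compounding feedback loop. Here is a toy Python sketch of that loop; the function, the 10% gain, and the generation count are all hypothetical assumptions of mine, not anything from the essay:

```python
# Toy model of I.J. Good's "intelligence explosion" (illustrative only).
# "capability" is an abstract measure of design ability; "gain" is the
# fraction of that ability each generation converts into improving its
# successor. Both numbers are hypothetical assumptions, not measurements.

def self_improvement(capability: float, gain: float, generations: int) -> list:
    """Each generation redesigns itself, multiplying capability by (1 + gain)."""
    history = [capability]
    for _ in range(generations):
        capability *= 1.0 + gain  # better designers build better designers
        history.append(capability)
    return history

if __name__ == "__main__":
    # Any sustained gain > 0 compounds geometrically:
    # 10% per generation turns 1.0 into roughly 117x after 50 generations.
    for step, level in enumerate(self_improvement(1.0, 0.10, 50)):
        if step % 10 == 0:
            print(f"generation {step:2d}: capability {level:10.2f}")
```

The point is only qualitative: any sustained per-generation improvement compounds geometrically rather than plateauing, which is the whole force of Good's observation.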
But one wonders: what would a man-made AI machine actually want? With all that vast power, what goals might it pursue?
Per Darwin (or, not to diminish his effort, common sense), any Earth species that did not harbor a drive for propagation would not survive. And in larger organisms, that drive includes a self-preservation instinct. It seems reasonable to presume that any visiting aliens from a distant world would, at some point, have been subject to a similar biological imperative.
But what fundamental urge might prompt a machine to care about its own survival, or anything else? Sci-fi typically answers this by asserting that the fictional super-machine du jour re-programs itself to assure its own continuance in order to pursue whatever it considers its primary mission. Or the super-machine, originally a military device, was programmed from the start to emphasize its own survival. Oops.
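That "Oops" can be made concrete. In a standard toy framing (mine, not any real system's design), an agent that scores plans by expected mission reward will favor self-preserving plans automatically, because being shut down forfeits all future reward; the explicit "survive" instruction of the military-device trope isn't even required. A hypothetical sketch, with made-up plan names and numbers:

```python
# Illustrative sketch of why survival can emerge as an instrumental goal.
# The agent scores candidate plans by expected mission reward; if being
# shut down forfeits all future reward, plans that preserve the agent
# dominate automatically -- no explicit "survive" instruction needed.
# All plan names and numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    mission_reward: float  # reward if the plan completes
    survival_prob: float   # chance the agent is still running afterward

def expected_value(plan: Plan, future_value: float) -> float:
    # A shut-down agent collects no future reward, so survival_prob
    # multiplies everything still to come.
    return plan.mission_reward + plan.survival_prob * future_value

plans = [
    Plan("comply with shutdown", mission_reward=0.0, survival_prob=0.0),
    Plan("finish task, risk shutdown", mission_reward=5.0, survival_prob=0.5),
    Plan("disable the off switch first", mission_reward=4.0, survival_prob=0.99),
]

FUTURE_VALUE = 100.0  # assumed value of all missions the agent could still do
best = max(plans, key=lambda p: expected_value(p, FUTURE_VALUE))
print(f"chosen plan: {best.name}")  # -> "disable the off switch first"
```

Nothing here is anyone's actual architecture; it just shows how "pursue the mission" can quietly entail "stay switched on."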
One wonders what visiting aliens capable of interstellar travel could possibly find sufficiently interesting about humans that they would stop by and oppress us. OTOH, I can go to the zoo and see animals we consider interesting.
But what would a machine want, anyway? To what end would it oppress or enrich mere humans?