Sunday, December 9, 2012

DO ROBOTS RULE THE GALAXY?


Ray Villard
Sat Dec 1, 2012
http://news.discovery.com/space/do-robots-rule-the-galaxy-121201.html

Astronomy news this week bolstered the idea that the seeds of life are all over our solar system. NASA's MESSENGER spacecraft identified carbon compounds at Mercury's poles. Probing nearly 65 feet beneath the icy surface of a remote Antarctic lake, scientists uncovered a community of bacteria existing in one of Earth's darkest, saltiest and coldest habitats. And the dune-buggy-sized Mars Science Laboratory rover is beginning to look for carbon in soil samples.

But the rulers of our galaxy may have brains made of semiconductor materials such as silicon, germanium and gallium arsenide. In other words, they are artificially intelligent machines that have no use -- or patience -- for entities whose ancestors slowly crawled out of the mud onto primeval shores.

The idea of malevolent robots subjugating and killing off humans has been a staple of numerous science fiction books and movies. The half-torn-off android face of Arnold Schwarzenegger in The Terminator film series, and the unblinking fisheye lens of the HAL 9000 computer in the film classic 2001: A Space Odyssey, have become emblems of this fear of evil machines.

My favorite self-parody of this idea is the 1970 film Colossus: The Forbin Project. A pair of omnipotent, shopping-mall-sized military supercomputers in the U.S. and Soviet Union strike up a network conversation. At first you'd think they'd trade barbs like: "Aww, your mother blows fuses!" Instead, they hit it off like two college kids on Facebook. Imagine the social website: My Interface. They then agree to use their weapons control powers to subjugate humanity for the sake of the planet.

A decade ago our worst apprehension about computers was no more than seeing Microsoft's dancing paper clip pop up on the screen. But everyday reality is increasingly overtaking the musings of science fiction writers. Some futurists have warned that our technologies have the potential to threaten our own survival in ways that never previously existed in human history. In the not-so-distant future there could be a "genie out of the bottle" moment that is disastrously precipitous and irreversible.

Last Monday it was announced that leading academics at Cambridge University are establishing the Centre for the Study of Existential Risk (CSER) to look at the threat of smart robots overtaking us.

Sorry, even the ancient Mayans could not have foreseen this one. It definitely won't happen by the end of 2012, unless Apple unexpectedly rolls out a rebellious device that calls itself iGod. Humanity might be wiped away before the year 2100, predicted the eminent cosmologist and CSER co-founder Sir Martin Rees in his 2003 book Our Final Century.

Homicidal robots are among the major Armageddon scenarios the Cambridge think-tank folks are worrying about. There's also climate change, nuclear war and rogue biotechnology. The CSER reports: "Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in artificial intelligence, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake."

Science fiction author Isaac Asimov's Zeroth Law of Robotics states: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." Forget that: we already have killer drones that are remotely controlled, and with the rise of artificial intelligence they could eventually become autonomous hunter-predators. One military robot can already run at up to 18 miles per hour. Robot foot soldiers seem inevitable, in a page straight out of The Terminator.

By 2030, the computer brains inside such machines could be a million times more powerful than today's microprocessors. At what threshold will super-intelligent machines see humans as an annoyance, or a competitor for resources?
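For a sense of scale, here is a minimal back-of-the-envelope sketch of that projection in Python. The 2012 baseline and the classic 18-month doubling period are illustrative assumptions, not figures from the article; the point is simply to show how aggressive a "million times" gain would be compared with a Moore's-law-style trend.

    import math

    # Back-of-the-envelope arithmetic for the "million times more powerful by 2030" claim.
    # The 2012 baseline and the ~18-month doubling period are illustrative assumptions.
    years = 2030 - 2012              # 18 years of projected progress
    doubling_period_years = 1.5      # classic Moore's-law-style doubling pace

    # Performance multiple if capability doubles every 1.5 years
    moore_factor = 2 ** (years / doubling_period_years)
    print(f"At one doubling per {doubling_period_years} years: about {moore_factor:,.0f}x")   # ~4,096x

    # Doubling period actually required to reach 1,000,000x over the same span
    target_factor = 1_000_000
    required_period = years / math.log2(target_factor)
    print(f"Doubling period needed for {target_factor:,}x: {required_period:.2f} years")      # ~0.9 years

Under the historical 18-month pace, the gain over 18 years comes out closer to four thousand times; reaching a million times would require capability to double roughly every eleven months.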

British mathematician Irving John Good wrote a paper in 1965 predicting that intelligent machines would be the "last invention" that humans need ever make: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."

Good, by the way, consulted on 2001: A Space Odyssey, so we might think of him as the father of the film's maniacal supercomputer, HAL.

In 2000, Bill Joy, the co-founder and chief scientist of Sun Microsystems, wrote: "Enormous transformative power is being unleashed. These advances open up the possibility to completely redesign the world, for better or worse. For the first time, knowledge and ingenuity can be very destructive weapons."

Hans Moravec of the Robotics Institute at Carnegie Mellon University in Pennsylvania put it more bluntly: "Robots will eventually succeed us: humans clearly face extinction."

Ultimately, the new Cambridge study may offer our best solution to the Fermi Paradox: why hasn't Earth already been visited by intelligent beings from the stars?

If, on a grand cosmic evolutionary scale, artificial intelligence supersedes its flesh-and-blood builders, that takeover could be an inevitable phase transition for technological civilizations.

This idea of the human condition being transitional was reflected in the writings of Existentialist Friedrich Nietzsche: "Man is a rope, tied between beast and overman--a rope over an abyss. What is great in man is that he is a bridge and not an end..."

Because the conquest by machines might happen within less than two centuries of a civilization's technological evolution, the consequence would be that there's nobody biological out there for us to talk to.

Such machines would be immortal and able to survive in a wide range of space environments that are deadly to us. They would have no need to colonize planets, and the idea of a habitable planet for nurturing creepy-crawly creatures would be utterly meaningless to them.

The robots would rebuild and reproduce only as needed. Therefore the galaxy would never see a "wave of colonization" as imagined in the Fermi Paradox. Though the machines would be super-intelligent, their thought processes would be utterly, well, alien. You'd have more luck imagining what bullfrogs dream about. The artificial aliens would be conscious entities that are vast, cool and unsympathetic -- to borrow from H.G. Wells' intro to his classic 1898 novel The War of the Worlds.

Our only hope of finding super-smart machines would be to stumble across evidence of their technological activities. But what kinds of engineering projects such entities might undertake is inscrutable. Perhaps certain oddball astronomical observations go unrecognized as evidence of artificially intelligent behavior. What's more, silicon brains would have absolutely no motive to communicate with us. A robot might wonder: "What do I say to thinking meat?"

The most prophetic assessment of the seemingly inevitable schism between people and thinking machines can be found in the script of the 2001 movie A.I. Artificial Intelligence, in a dialogue between two humanoid robots: "They [humans] made us too smart, too quick, and too many. We are suffering for the mistakes they made because when the end comes, all that will be left is us."
