Artificial Intelligence

Could Machines Someday Rule The World?

By James Donahue

Most film viewers have a vivid memory of the unblinking red eye of HAL 9000, the shipboard computer that seized control of the spacecraft in Arthur C. Clarke's classic "2001: A Space Odyssey." And then there was the fierce humanoid android played by Arnold Schwarzenegger in the Terminator films.

The Battlestar Galactica television series involved a future war between humans and machines that raged throughout the stars.

Indeed, science fiction writers have been playing for years with the idea of artificially intelligent machines that rise up against humanity. Now, with the development of supercomputers that can calculate and "think" faster than the human brain, and of robotic machines that replace humans in the workplace and even in the air (those flying drones), many scientists foresee a day when a machine takeover may be a real threat.

Writer Ray Villard recently noted that the advent of those killer drones already violates science fiction author Isaac Asimov’s "first law of robotics," which is: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

All that is now needed is for technology to advance to that final stage: artificial intelligence. Then the machines might, indeed, turn on humanity. Some scientists perceive this threat to be serious enough that next year they will create the Center for the Study of Existential Risk (CSER) at Cambridge to study the possible dangers of biotechnology, artificial life, nanotechnology and, of course, climate change.

One of the issues to be examined by CSER is whether this technology has the potential to destroy human civilization. The founders say they see a danger in dismissing concerns about even the possibility of an uprising among artificially intelligent robots.

This is because, in addition to the drones being developed to rule the skies, military laboratories are also working on robotic soldiers with superhuman capabilities that could replace combat fighters on the battlefield.

If advancements in artificial intelligence reach a level where the machines can self-replicate, all of the elements of a super race of androids will be in place. These would be machines that can out-think and outwit us. If they can repair themselves and create copies of themselves, the machines will, indeed, be in a position to dominate all humanity.

The question is: can organizations like CSER work effectively to slow the progress of technological advancement inside the secret laboratories of the world's militaries?

The CSER project is the brainchild of Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn.

In a recent interview Price said: "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology. What we’re trying to do is push it forward in the respectable scientific community."

As robots and computers become smarter than the humans who build and operate them, the day may come when we find ourselves at the mercy of "machines that are not malicious, but machines whose interests don’t include us," Price said.

Let’s hope one of them doesn’t get named HAL.