The Next Horror – Thinking Military Robots
By James Donahue
This news ought to scare the hell out of humanity. Researchers for the Pentagon say
they are on the brink of creating autonomous robots capable of flying aircraft, operating military machinery and fighting
ground wars without being controlled by human operators at some distant location.
The research team, which has been working under the sponsorship of the Defense Advanced
Research Projects Agency (DARPA), a special wing of the U.S. Defense Department created by President Eisenhower in 1958, has
revealed the development of a computer that "looks and thinks like a human brain" and can be programmed to perform specific
tasks while thinking for itself in the field.
The "thinking computer" uses nano-scale interconnected wires that form billions
of connections, much like a human brain, and can remember information. Each connection acts as a synthetic synapse, mimicking
the way biological neurons pass electrical or chemical signals to other cells. The device is said to be so complex that it
surpasses all other efforts to build machines that work with artificial intelligence.
Researchers from the University of California's Freeman Laboratory for Nonlinear
Neurodynamics at Berkeley and HRL, the former Hughes Research Laboratories in Malibu, were involved in the project. The Berkeley
facility bears the name of Walter Freeman, who devoted 50 years to developing a mathematical model of the human brain based
on electroencephalography.
Professor James K. Gimzewski, one of the researchers involved in the announcement, said
this computer may behave like a human brain, but it processes information in a completely different way. Thus, he said, it
may represent "a revolutionary breakthrough in robotic systems."
Gimzewski noted that artificial intelligence research has long struggled to find ways
to generate human-like reasoning or cognitive functions. The new DARPA device thus offers the possibility of using what he
called "an off-the-wall approach" to accomplish something like human reasoning in a machine.
While it is frightening to realize that the military is the first to develop a working
form of artificial intelligence, researchers have been pursuing this goal for years. That should not be surprising,
since the United States pours more federal dollars into defense research than into civilian research laboratories.
The goal among scientists has not been solely military. The well-known Professor Stephen
Hawking of Cambridge University has warned that genetic engineering of new bodies, or robotic machines housing human brains,
may be needed if the human race is to survive the horrors of our dying planet, or to venture successfully into deep space
with thoughts of colonizing elsewhere.
In an interview with the German magazine Focus, Hawking said that because of the speed
of advances in computer technology, he foresees a time when intelligent machines will be smarter than humans and will have
the capability of taking over the world.
"The danger is very real," Hawking said.
He said he believes that through careful manipulation of human genes, humans can raise
the complexity of their personal DNA and awaken the sleeping parts of their brains.
Hawking, a victim of ALS, a motor neuron disease that leaves him dependent on machines,
finds it easy to consider the concept of cyborg technology, with direct links between human brains and computers.
"We must develop as quickly as possible technologies that make possible a direct connection
between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it," he said.
While the scientific community is rushing toward developing artificial intelligence
in working machines, there is opposition. It should not be surprising that much of it comes from the religious community,
which argues against "playing God" by doing things only God should do.
One real argument against creating robots to serve as soldiers for military and police
service is that while such machines may be able to reason their way to an objective, they will lack empathy and will be
incapable of expressing love and compassion for humans or animals. There is great danger in this.
Would such machines rise up as monsters that turn against us? Scientists are very aware
of this problem. A story in New Scientist once noted that most researchers involved in developing thinking robots believe
certain safety issues must be addressed before such machines are unleashed into the world. In other words, robots must possess
a built-in code of ethics, with protection of humans high on this list.
Thus the paradox we face is clear: how can we build machines for war that are instructed
never to kill or harm humans?