Ethical And Moral Issues Concerning Artificial Intelligence
By James Donahue
Science fiction writer Isaac Asimov once proposed three laws of robotics while contemplating the future possibility of machines possessing artificial intelligence.
He wrote that all intelligent robots should be programmed so that they:
-- Do not hurt a human being, or, through inaction, allow a human being to be hurt.
-- Obey orders given to them by human beings, except when such orders conflict with the First Law.
-- Protect their own existence, as long as that protection does not conflict with the First or Second Laws.
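Read as a specification rather than as fiction, the three laws form a strict priority ordering: each rule applies only so far as it does not conflict with the rules above it. The following is a minimal sketch in Python, using an entirely hypothetical Action type and made-up example actions, of how that precedence might be expressed. It is an illustration of the ordering, not a real control system.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool
        obeys_order: bool
        preserves_self: bool

    def choose_action(candidates):
        # First Law: discard anything that harms a human; this is never overridden.
        safe = [a for a in candidates if not a.harms_human]
        # Second Law: prefer obedience to a human order, but only among safe actions.
        obedient = [a for a in safe if a.obeys_order] or safe
        # Third Law: prefer self-preservation, but only where the first two laws allow.
        preserving = [a for a in obedient if a.preserves_self] or obedient
        return preserving[0] if preserving else None

    options = [
        Action("obey order to strike a person", True, True, True),
        Action("obey order to fetch a tool", False, True, False),
        Action("hide and do nothing", False, False, True),
    ]
    print(choose_action(options).name)   # -> "obey order to fetch a tool"

Notice that obedience and self-preservation are only ever chosen from among the actions the higher laws already permit; that nesting is the whole substance of Asimov's scheme.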
When Asimov wrote this over 60 years ago, the concept of robotics was little more than science fiction.
Now that we have developed working robotic machines that replace humans in our factories, mow our grass, sweep our
floors, and play with our children, his proposed "Three Laws" have become much more meaningful.
That is because the march of science is even now rushing toward the creation of androids that not
only look and act human, but can respond to commands and perform certain household duties. The military is working on robots
to replace human soldiers on the battlefield. And great minds such as the physicist Stephen Hawking are contemplating
machines to replace the human body, so that we can not only survive the death of our planet but explore the stars
in search of another home without being chained to our environment.
With these new developments in mind, it is imperative that we take another look at Asimov's laws, and
not only consider their implications in this new world of artificial and perhaps even genuine intelligence driving robotic
machines, but also ask where they fall short.
In South Korea, where advanced work in robotics has led to the recent development of an android with
a humanoid upper body that can move and make lifelike facial expressions, leaders in the field are drafting a code of ethics
designed to prevent humans from abusing robots, and robots from hurting humans. And in Europe, a Robotics Network is lobbying
governments for this kind of legislation.
Their work will not be as easy as it sounds at first glance.
For example, if robots are being developed for conflict on the battlefield, it would be impossible
for them to be programmed not to hurt a human being. Yet if they are designed to attack humans, and perhaps other robots owned
by enemy forces, consider the problem of programming a machine to determine which human is the enemy and which is friendly.
Since we are all alike, and all one, it would be technically impossible. Such a machine would be too dangerous for everyone
around it.
Taking that thought one step further, how can we program a machine to tell the difference between
a human, an animal, an android that looks human, and even a statue of a human? To a machine we may all look alike, and if the
machine is sensitive to body heat or heart rhythm, it might mistake a stray dog, or perhaps a cow in a nearby pasture, for
a human.
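To make the point concrete, here is a deliberately naive sketch in Python of the kind of sensor rule such a machine might rely on. The temperature and heart-rate thresholds are invented for illustration, and the example shows how easily a dog or a cow can satisfy them.

    # A deliberately naive "is this a human?" rule, using invented sensor
    # readings and thresholds purely for illustration.
    def looks_human(body_temp_c, heart_rate_bpm):
        return 35.0 <= body_temp_c <= 40.0 and 40 <= heart_rate_bpm <= 120

    print(looks_human(37.0, 70))   # an adult human        -> True
    print(looks_human(38.8, 90))   # a stray dog           -> also True
    print(looks_human(38.6, 60))   # a cow in a pasture    -> also True
    print(looks_human(20.0, 0))    # a statue of a human   -> False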
Under the Second Law, consider the problems a programmer will have creating a robot that can distinguish
between a human order and a casual request, or that can sort out the ambiguities of the English language. Words
often have several meanings, and humans understand the differences through inflection of voice, body movement, or the way
in which a phrase is used. For example, a person might use the phrase "go fly a kite" to express disgust over another person's
actions. A machine might take the statement as a serious order.
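As a toy illustration of how literal such parsing can be, here is a short Python sketch with a made-up table of commands; a robot that matches verbs this way has no sense of idiom, tone, or intent.

    # A toy command parser with a made-up action table. It matches verbs
    # literally, so the idiom "go fly a kite" is read as a real instruction.
    ACTIONS = {"fly": "launching the kite", "sweep": "sweeping the floor"}

    def interpret(utterance):
        for verb, action in ACTIONS.items():
            if verb in utterance.lower():
                return f"Order received: {action}."
        return "No order detected."

    print(interpret("Please sweep the kitchen."))   # Order received: sweeping the floor.
    print(interpret("Oh, go fly a kite!"))          # Order received: launching the kite.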
There is another problem that may have to be ironed out by lawmakers and the courts as scientists
continue making advancements in learning how the human brain works and using what they learn to develop artificial intelligence.
It is conceivable that we will soon have robots that can not only think and hold conversations with humans and with one another,
but may also be aware of themselves and express emotional responses to events occurring around them.
Once they reach this level of intelligence, have the machines become conscious life? Should they be
considered sentient beings? Should they be awarded the same rights as humans? And if their bodies are improvements on our own,
and even if humans cannot find a way to move into them, will the machines not be superior to humans?
There are legal responsibility issues that may slam us in the face even before the above questions become
reality.
As robots become more intelligent, it could become difficult to assign responsibility when they injure someone.
The injury might not be physical, but financial. For example, an intelligent software program designed to give investment
advice might lead a person into making a bad investment. Who might be blamed: the designer, the user, or the robot? Imagine
a court jury attempting to work out the knots in an issue like that.
If robots eventually can feel mental anguish, develop emotions, and become like humans, should they
be granted certain rights? Should they be allowed to marry humans or own property?
As some science fiction writers have already proposed, imagine a world in which the machines gang
up on the humans and take over everything. Thus begins the war between humans and machines.
For our own good, as the technology advances, we should be thinking seriously about moving into these
futuristic machines. And yes, when this day arrives, the machines should be granted all of the rights we now enjoy.