One of the more interesting philosophical ideas of the past few years is the concept that "machines" might be able to think. Most of the Sci Fi books I read today include at least one character who is an Artificial Intelligence, or AI. When the author embraces the concept and imbues the AI with a quirky personality, or some sort of noble purpose, the AI becomes a human sort of character we can all identify with and enjoy.
At the root of all this is the idea that a machine, a computer in most cases, can think in the same way we believe we think, including having emotions, feelings, motivations, and so forth. Collectively we might call the running cognitive process in our minds "consciousness" and identify it as something special that only humans and perhaps our close biological relatives are capable of having. Can your computer be said to have consciousness? Can a collection of silicon chips and metal wires be conscious? What is the underlying essence that makes human consciousness so special?
If you haven't run across John Searle's Chinese Room thought experiment, please look it up. In this thought experiment Searle shows how, at the level of actually manipulating machine code, a computer of any kind does not know what it is doing - it only manipulates symbols according to the programming that runs it. For this reason the machine cannot be said to actually "understand" the information it is dealing with; it only shuffles 1's and 0's around. Searle goes on to talk about how a computer can be made to run in almost any sort of medium, not just the silicon chips we commonly talk about today, so we need to keep a very broad concept of what a computer might be in our minds.
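To make that point concrete, here is a minimal sketch of pure rule-following (my own toy illustration, not Searle's, with invented phrases and pairings): the program matches an incoming string of symbols against a rulebook and emits the paired outgoing string, without ever representing what either string means.

    # A toy "Chinese Room" in Python: replies are produced by lookup alone.
    # The Chinese phrases and their pairings are invented for illustration.
    RULEBOOK = {
        "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I am fine, thanks."
        "你叫什么名字?": "我叫小房间.",     # "What is your name?" -> "I am called Little Room."
    }

    def reply(symbols: str) -> str:
        # Pure symbol matching: the table could hold chess moves or random
        # tokens and this function would behave exactly the same way.
        return RULEBOOK.get(symbols, "对不起, 我不明白.")  # "Sorry, I do not understand."

    print(reply("你好吗?"))  # prints a fluent answer with zero understanding

A bigger rulebook, however fluent its output, is still just lookup and substitution, and that is Searle's point.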
Alan Turing's test for machine intelligence, which I like to call the If It Walks Like A Duck, It's A Duck Test, says that if you cannot tell the difference between a machine and a person in a blind test (you send in questions and read the answers without knowing which is which, for example), then there is no difference. If a computer can be programmed to replicate any nuance of human responsiveness you can think of, and can then respond in a way indistinguishable from a human response, then the machine can be said to have all the cognitive properties a human has. In the movie Blade Runner you might recall that the police of the future developed very sophisticated tests to identify the androids/replicants among us, since their responses to human life and situations were nearly indistinguishable from human responses.
Which brings me back to
consciousness, and what it is. Like free
will, where we are free if we believe we are free, is consciousness ours to
claim if we simply believe we are conscious?
Is a dolphin not conscious only because it has never asked itself that question? Or your dog? Are these other biological creatures, who are clearly intelligent in some ways but who act predominantly according to genetic or basic survival programming rather than willful intent, actually conscious in the same way we feel we are?
And for the Trekkies out there,
are the “individuals” in the Borg conscious?