Is that free will as in free code or free beer? On Minsky’s artificial intelligence



By Joab Jackson

Cyberpunk

Oy, that Marvin Minsky! If this godfather of artificial intelligence (AI)
research can’t get his computers to act like humans the way he
promised, well, he’ll just take all of humanity down with him.

Myself, I never understood the push to endow computers with human-like
consciousness. Why do AI researchers believe the tools that they work
with (transistors) can be fabricated into things that could think for
themselves? I mean, toaster designers don’t go around proclaiming that,
given enough heating coils, they could build a sentient machine that
could converse with us in the universal language of crispy bread. But these AI people,
they’re such Dr. Frankensteins, lusting to create life itself! — even
if the monsters they do build are pretty pathetic, judging from such
sorry examples of human simulation as
Eliza.
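
For anyone who never poked at it, an Eliza-style “conversation” boils down
to keyword matching and canned rephrasing, with no understanding anywhere in
the loop. Here is a minimal Python sketch of the trick (my own toy, not
Weizenbaum’s actual script):

    import re

    # A couple of Eliza-style rules: spot a keyword pattern, echo back a
    # canned rephrasing. The real program had a bigger script, but the
    # mechanism is no deeper than this.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return "Tell me more."  # default deflection when nothing matches

    print(respond("I feel like a toaster"))  # Why do you feel like a toaster?
    print(respond("The bread is crispy"))    # Tell me more.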

There are some pretty obvious reasons why AI doesn’t work, not the
least being scale. As the trade journal Electrical Engineering
Times
soberly points out
[“Chip stack aims for brain-like connectivity”]: “Brains consist of a
trillion or so neurons that act as both processor and memory. …
Today’s microprocessors, on the other hand, have just a few million
logic gates to process information.” Today’s computers are
Tinkertoys compared to human gray matter.

But even with the exponential gains future quantum, optical, or
superconductor-based computers are supposed to offer us — emphasis on
the “supposed” — there is still no proof that consciousness can be
replicated in a machine, or that humans are merely fleshy
input/output devices. Just as it took a non-Euclidean
geometry to help Einstein conjure relativity, something other than
strings of zeros and ones may be necessary to fire up a human noggin.
In other words, the difference between real minds and silicon ones may
be not just one of degree, but of kind.

Heady stuff, so to speak. Accordingly, most AI researchers have
lowered their expectations over time. And divorcing AI research from
the goal of achieving consciousness and instead focusing on mimicking
simpler human thought patterns has produced scads of useful, or at
least workable, results, from beating world chess champions at their
own game to gigantic analytical databases that can figure out insurance
rates.

But Minsky, a professor of electrical engineering and computer
science at the Massachusetts Institute of Technology, is not one to
cower before 40 years of failure. He’s gone the opposite route.
Faced with the failure of AI to achieve consciousness, he attacks the
very idea of consciousness itself.

During a May 23 talk at the Game Developers Conference 2001 in
San Jose, Calif., Minsky discussed why AI hasn’t worked yet.
[For a transcript, see
Dr. Dobb’s TechNetCast.] “The reason consciousness has baffled
so many people, especially physicists, is very simple,” Minsky said.
“There isn’t any such thing. Consciousness is a word that we use as a
suitcase word. It’s a word we use as a name for a dozen very hard
problems about how the brain or the mind works, which are quite different from
one another.”

“Oh, good lord,” I thought when I first read that. Because
computers can’t emulate consciousness, it doesn’t exist? Because computers can’t
become human, he wants us to deny what is human about ourselves in the
first place? That, in effect, is what Minsky’s saying — at least, that’s
how it sounds to me. It certainly saves him from having to defend his
failures.

This negation of consciousness strikes me as awfully dangerous. I don’t
believe in the supernatural, but as Descartes argued some 350 years ago
[all that “I think, therefore I am” stuff], the only thing we can be
certain of in this world is that we exist, by virtue of the fact we
have consciousness. We negate that at our own peril. Given that so much of
the information technology the Minskys of the world build is simply geared
toward getting us to buy more stuff, ridding ourselves of will smacks
of brainwashing — as if Minsky is now operating by technology’s
imperatives, not humanity’s.

I’m not sure if Minsky has ever come out and said publicly before that
he doesn’t believe in consciousness. But he’s been heading in that
direction since the mid-’80s, when he wrote the book The Society of
Mind, in which he argued that the mind works not as a unified whole
but as a collection of many different agents, each handling one simple
task. Say you want to get a beer out of the refrigerator. Your mind
instructs your body to execute a whole batch of discrete actions:
Hefting your butt out of the recliner, grabbing the beer, finding a
bottle opener, opening the bottle, etc.
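
To make that picture concrete, here is a toy sketch of the idea, assuming
nothing from Minsky’s book beyond the one-agent-one-task premise: each agent
is a tiny function that handles a single step, and “getting a beer” is
nothing more than the chain of them.

    # Toy "society of agents" sketch: each agent handles one simple task.
    # The agent names and the state dictionary are invented for illustration.

    def stand_up(state):
        return {**state, "standing": True}

    def open_fridge(state):
        return {**state, "fridge_open": True}

    def grab_beer(state):
        return {**state, "holding_beer": True}

    def open_bottle(state):
        return {**state, "bottle_open": True}

    # No single agent "wants" the beer; the goal lives only in how they are chained.
    GET_A_BEER = [stand_up, open_fridge, grab_beer, open_bottle]

    def run(agents):
        state = {}
        for agent in agents:
            state = agent(state)
        return state

    print(run(GET_A_BEER))
    # {'standing': True, 'fridge_open': True, 'holding_beer': True, 'bottle_open': True}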

What Minsky doesn’t explain is what motivates all these agents of
the mind in the first place. He may have explained how we do
what we do, but not why we are impelled to do anything, such as
get a beer. Is it just self-survival that concocts this illusion of the
“I”? Heck if I know. But consciousness, however slippery, is still the
only game in town. If Minsky doesn’t work towards that end, his
intelligence will seem mighty artificial indeed.
