Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, April 09, 2009

Robot Cultists Decry My Pseudo-Science

"Thomas" sums up my skepticism on the Robot God priesthood and would-be Mind-Upload Immortalists in a word: "Vitalism."

Vitalism? Really?

It's like you want to pretend that noticing the actually salient facts connected to the materialization of consciousness in brains is tantamount to believing in phlogiston in this day and age.

I am a materialist in matters of mind and that means, among other things, that I don't discount the logical possibility that something enough akin to intelligence to deserve the description might be materialized on a different substrate. But logical possibility gives us no reasons to find ponies where there aren't any, and there is nothing in computer science or in actual computers on offer to justify analogies between them and intelligence, consciousness, and so on. That's all just bad poetry and media sensationalism and futurological salesmanship.

When Robot Cultists start fulminating about smarter-than-human AI the first thing I notice is that they tend to have reduced human intelligence to something like a glandular calculator before going on to glorify calculators into imminent Robot Gods. I disapprove of both the reductionist impoverishment of human intelligence on which this vision depends and the faith-based, unqualified, deranging, handwaving idealizations and hyperbolizations that follow thereafter.

The implications of the embodied materialization of human intelligence are even more devastating to superlative futurological wish-fulfillment fantasies of techno-immortalization via "uploading" into the cyberspatial sprawl, inasmuch as the metaphor of "migration" (yes, that's all it is, a metaphor) from brain-embodied mind to digital-materialized mind is by no means a sure thing if we accept, as materialists would seem to do, that the actual materialization is non-negligible after all.

UPDATE: Of course, "Roko" jumped right into the fray at this point.

In response to this comment of mine from the above -- “I don’t discount the logical possibility that something enough akin to intelligence to deserve the description might be materialized on a different substrate” --

Roko pants: "So let me get this straight: you think it is possible to build a computer that would deserve the name “intelligent”. From this I presume that you think it is possible to build a computer that is intelligent and smarter than any human -- as in, it can do any mental task as well as any human can, and it can do certain mental tasks that humans cannot do. Am I correct here?"

Of course, I said nothing about building a computer. You all can see that, right? You're reading the very same words Roko did; they're right here in front of us. I get that this is all Roko cares about, since he thinks he gets to find his pony if only computers get treated as people. But computers are actually-existing things in the world, and they aren't smart and they aren't showing any signs of getting smart. Roko is hearing what he wants to hear here, not only in my response but apparently also from his desktop (you don't understand, Jerry, he loves me, he loves me).

It should go without saying, but being a materialist about mind doesn't give me or you or Roko permission to pretend to find a pony where there isn't one. I can't have the conversation Roko seems to want to have about whether he is "correct" or "incorrect" to draw his conclusion from what I have said, since all that has happened, as far as I can see, is that he has leaped off the deep end into spastic handwaving about computers being intelligent and smarter and acting this or that way, just because I pointed out that I don't attribute mind to some mysterious supernatural force.

Despite all that, I also think, sensibly enough, that the words "smart," "intelligent," and "act" can't be used literally to describe mechanical behavior, that these are only metaphors when so applied, and indeed metaphors that seem utterly to have bewitched and derailed Roko (and his "community," as he puts it) to the cost of sense.

I mean, who knows what beasts or aliens or what have you we might come upon who might be intelligent or not, however differently incarnated or materialized? But when we are talking about code and computers and intelligence we are talking about furniture that actually exists, and to attribute the traits we associate with intelligence to the things we call computers or to the behaviors of software is to play fast and loose with language in ways that suggest either confusion or deception or both, as far as I can see.

1 comment:

jimf said...

> "Thomas" sums up my skepticism on the Robot God priesthood and would-be
> Mind-Upload Immortalists in a word: "Vitalism."
>
> Vitalism? Really?

I've had this conversation, with a far-from-unintelligent friend of decades-long
acquaintance -- a computer programmer, naturally.

What Jaron Lanier calls "cybernetic totalism" is deeply ingrained among many
otherwise-savvy computer-literate (but not philosophy- or even biology-literate,
unfortunately) folks as an unexamined assumption.

What I'm talking about is the assumption that a human mind, if it has
a purely material basis, **must** therefore be "implementable" on a digital
computer such as we know and love today.

It may even be true, but it's far from obvious that it **must** be so.

> When Robot Cultists start fulminating about smarter-than-human AI the
> first thing I notice is that they tend to have reduced human intelligence
> to something like a glandular calculator before going on to glorify
> calculators into imminent Robot Gods. I disapprove of both the
> reductionist impoverishment of human intelligence on which this vision
> depends and the faith-based, unqualified, deranging, handwaving
> idealizations and hyperbolizations that follow thereafter.

From _What Computers Still Can't Do: A Critique of
Artificial Reason_, Hubert L. Dreyfus, MIT Press,
1992

Introduction, pp. 67-70:

Since the Greeks invented logic and geometry, the idea that
all reasoning might be reduced to some kind of calculation --
so that all arguments could be settled once and for all --
has fascinated most of the Western tradition's rigorous
thinkers. Socrates was the first to give voice to this
vision. The story of artificial intelligence might well
begin around 450 B.C. when (according to Plato) Socrates
demands of Euthyphro, a fellow Athenian who, in the name
of piety, is about to turn in his own father for murder:
"I want to know what is characteristic of piety which
makes all actions pious. . . that I may have it to turn
to, and to use as a standard whereby to judge your actions
and those of other men." Socrates is asking Euthyphro
for what modern computer theorists would call an "effective
procedure," "a set of rules which tells us, from moment
to moment, precisely how to behave."

Plato generalized this demand for moral certainty into
an epistemological demand. According to Plato, all
knowledge must be stateable in explicit definitions
which anyone could apply. If one could not state his
know-how in terms of such explicit instructions -- if his
knowing **how** could not be converted into knowing
**that** -- it was not knowledge but mere belief.
According to Plato, cooks, for example, who proceed by
taste and intuition, and poets who work from inspiration,
have no knowledge; what they do does not involve
understanding and cannot be understood. More generally,
what cannot be stated explicitly in precise instructions --
all areas of human thought which require skill, intuition
or a sense of tradition -- are relegated to some kind of
arbitrary fumbling.

But Plato was not fully a cyberneticist (although according
to Norbert Wiener he was the first to use the term), for
Plato was looking for **semantic** rather than **syntactic**
criteria. His rules presupposed that the person understood
the meanings of the constitutive terms. . . Thus Plato
admits his instructions cannot be completely formalized.
Similarly, a modern computer expert, Marvin Minsky, notes,
after tentatively presenting a Platonic notion of effective
procedure: "This attempt at definition is subject to
the criticism that the **interpretation** of the rules
is left to depend on some person or agent."

Aristotle, who differed with Plato in this as in most questions
concerning the application of theory to practice, noted
with satisfaction that intuition was necessary to apply
the Platonic rules: "Yet it is not easy to find a formula
by which we may determine how far and up to what point a man
may go wrong before he incurs blame. But this difficulty
of definition is inherent in every object of perception;
such questions of degree are bound up with circumstances
of the individual case, where our only criterion **is**
the perception."

For the Platonic project to reach fulfillment one breakthrough
is required: all appeal to intuition and judgment must be
eliminated. As Galileo discovered that one could find
a pure formalism for describing physical motion by ignoring
secondary qualities and teleological considerations, so,
one might suppose, a Galileo of human behavior might succeed
in reducing all semantic considerations (appeal to meanings)
to the techniques of syntactic (formal) manipulation.

The belief that such a total formalization of knowledge must
be possible soon came to dominate Western thought. It
already expressed a basic moral and intellectual demand, and
the success of physical science seemed to imply to sixteenth-
century philosophers, as it still seems to suggest to
thinkers such as Minsky, that the demand could be satisfied.
Hobbes was the first to make explicit the syntactic conception
of thought as calculation: "When a man **reasons**, he
does nothing else but conceive a sum total from addition of
parcels," he wrote, "for REASON . . . is nothing but
reckoning. . ."

It only remained to work out the univocal parcels of "bits"
with which this purely syntactic calculator could operate;
Leibniz, the inventor of the binary system, dedicated
himself to working out the necessary unambiguous formal
language.

Leibniz thought he had found a universal and exact system of
notation, an algebra, a symbolic language, a "universal
characteristic" by means of which "we can assign to every
object its determined characteristic number." In this way
all concepts could be analyzed into a small number of
original and undefined ideas; all knowledge could be
expressed and brought together in one deductive system.
On the basis of these numbers and the rules for their
combination all problems could be solved and all controversies
ended: "if someone would doubt my results," Leibniz
said, "I would say to him: 'Let us calculate, Sir,' and
thus by taking pen and ink, we should settle the
question.'" . . .

In one of his "grant proposals" -- his explanations of how
he could reduce all thought to the manipulation of
numbers if he had money enough and time -- Leibniz remarks:
"[T]he most important observations and turns of skill
in all sorts of trades and professions are as yet unwritten.
This fact is proved by experience when passing from
theory to practice when we desire to accomplish something.
Of course, we can also write up this practice, since it
is at bottom just another theory more complex and
particular. . ."


Chapter 6, "The Ontological Assumption", pp. 209-213

Granting for the moment that all human knowledge can be
analyzed as a list of objects and of facts about each,
Minsky's analysis raises the problem of how such a large
mass of facts is to be stored and accessed. . .

And, indeed, little progress has been made toward
solving the large data base problem. But, in spite of
his own excellent objections, Minsky characteristically
concludes: "But we had better be cautious about
this caution itself, for it exposes us to a far more
deadly temptation: to seek a fountain of pure intelligence.
I see no reason to believe that intelligence can
exist apart from a highly organized body of knowledge,
models, and processes. The habit of our culture has
always been to suppose that intelligence resides in
some separated crystalline element, call it _consciousness_,
_apprehension_, _insight_, _gestalt_, or what you
will, but this is merely to confound naming the problem
with solving it. The problem-solving abilities of
a highly intelligent person lies partly in his superior
heuristics for managing his knowledge-structure and
partly in the structure itself; these are probably
somewhat inseparable. In any case, there is no reason to
suppose that you can be intelligent except through the
use of an adequate, particular, knowledge or model
structure."

. . . It is by no means obvious that in order to be
intelligent human beings have somehow solved or needed to
solve the large data base problem. The problem may itself
be an artifact created by the fact that AI workers must
operate with discrete elements. Human knowledge does
not seem to be analyzable as an explicit description
as Minsky would like to believe. . . To recognize an
object as a chair, for example, means to understand its
relation to other objects and to human beings. This
involves a whole context of human activity of which
the shape of our body, the institution of furniture, the
inevitability of fatigue, constitute only a small part.
And these factors in turn are no more isolable than is
the chair. They all may get **their** meaning in
the context of human activity of which they form a
part. . .

There is no reason, only an ontological commitment,
which makes us suppose that all the facts we can make
explicit about our situation are already unconsciously
explicit in a "model structure," or that we
could ever make our situation completely explicit
even if we tried.

Why does this assumption seem self-evident to Minsky?
Why is he so unaware of the alternative that he takes
the view that intelligence involves a "particular,
knowledge or model structure," a great systematic array
of facts, as an axiom rather than as an hypothesis?
Ironically, Minsky supposes that in announcing this
axiom he is combating the tradition. "The habit of
our culture has always been to suppose that intelligence
resides in some separated crystalline element, call
it consciousness, apprehension, insight, gestalt. . ."
In fact, by supposing that the alternatives are either
a well-structured body of facts, or some disembodied
way of dealing with the facts, Minsky is so traditional
that he can't even see the fundamental assumption
that he shares with the whole of the philosophical
tradition. In assuming that what is given are facts
at all, Minsky is simply echoing a view which has been
developing since Plato and has now become so ingrained
as to **seem** self-evident.

As we have seen, the goal of the philosophical
tradition embedded in our culture is to eliminate
uncertainty: moral, intellectual, and practical.
Indeed, the demand that knowledge be expressed in
terms of rules or definitions which can be applied
without the risk of interpretation is already
present in Plato, as is the belief in simple elements
to which the rules apply. With Leibniz, the connection
between the traditional idea of knowledge and the
Minsky-like view that the world **must** be analyzable
into discrete elements becomes explicit. According
to Leibniz, in understanding we analyze concepts into
more simple elements. In order to avoid a regress
of simpler and simpler elements, then, there must
be ultimate simples in terms of which all complex
concepts can be understood. Moreover, if concepts
are to apply to the world, there must be simples
to which these elements correspond. Leibniz
envisaged "a kind of alphabet of human thoughts"
whose "characters must show, when they are used in
demonstrations, some kind of connection, grouping
and order which are also found in the objects."
The empiricist tradition, too, is dominated by
the idea of discrete elements of knowledge. For
Hume, all experience is made up of impressions:
isolable, determinate, atoms of experience.
Intellectualist and empiricist schools converge
in Russell's logical atomism, and the idea reaches
its fullest expression in Wittgenstein's _Tractatus_,
where the world is defined in terms of a set of
atomic facts which can be expressed in logically
independent propositions. This is the purest
formulation of the ontological assumption, and
the necessary precondition of all work in AI as long
as researchers continue to suppose that the world
must be represented as a structured set of descriptions
which are themselves built up from primitives.
Thus both philosophy and technology, in their appeal
to primitives, continue to posit what Plato sought:
a world in which the possibility of clarity, certainty
and control is guaranteed; a world of data structures,
decision theory, and automation.

No sooner had this certainty finally been made fully
explicit, however, than philosophers began to call it into
question. Continental phenomenologists [uh-oh, here
come those French. :-0] recognized it as the outcome
of the philosophical tradition and tried to show its
limitations. [Maurice] Merleau-Ponty calls the
assumption that all that exists can be treated as
determinate objects, the _prejuge du monde_,
"presumption of commonsense." Heidegger calls it
_rechnende Denken_ "calculating thought," and views
it as the goal of philosophy, inevitably culminating
in technology. . . In England, Wittgenstein less
prophetically and more analytically recognized the
impossibility of carrying through the ontological
analysis proposed in his _Tractatus_ and became his
own severest critic. . .

But if the ontological assumption does not square with
our experience, why does it have such power? Even if
what gave impetus to the philosophical tradition was
the demand that things be clear and simple so that
we can understand and control them, if things are not
so simple why persist in this optimism? What lends
plausibility to this dream? As we have already seen. . .
the myth is fostered by the success of modern
physics. . .


Chapter 8, "The Situation: Orderly Behavior Without
Recourse to Rules" pp. 256-257

In discussing problem solving and language translation
we have come up against the threat of a regress of rules
for determining relevance and significance. . . We
must now turn directly to a description of the situation
or context in order to give a fuller account of the
unique way human beings are "in-the-world," and the
special function this world serves in making orderly
but nonrulelike behavior possible.

To focus on this question it helps to bear in mind
the opposing position. In discussing the epistemological
assumption we saw that our philosophical tradition
has come to assume that whatever is orderly can be
formalized in terms of rules. This view has reached
its most striking and dogmatic culmination in the
conviction of AI workers that every form of intelligent
behavior can be formalized. Minsky has even
developed this dogma into a ridiculous but revealing
theory of human free will. He is convinced that all
regularities are rule governed. He therefore theorizes
that our behavior is either completely arbitrary
or it is regular and completely determined by the
rules. As he puts it: "[W]henever a regularity is
observed [in our behavior], its representation is
transferred to the deterministic rule region." Otherwise
our behavior is completely arbitrary and free.
The possibility that our behavior might be regular
but not rule governed never even enters his mind.


Dreyfus points out that when a publication anticipating
the first edition of his book came out in the late
1960s, he was taken aback by the hysterical tone of
the reactions to it:

Introduction, pp. 86-87

[T]he year following the publication of my first
investigation of work in artificial intelligence,
the RAND Corporation held a meeting of experts in
computer science to discuss, among other topics,
my report. Only an "expurgated" transcript of this
meeting has been released to the public, but
even there the tone of paranoia which pervaded the
discussion is present on almost every page. My
report is called "sinister," "dishonest,"
"hilariously funny," and an "incredible misrepresentation
of history." When, at one point, Dr. J. C. R. Licklider,
then of IBM, tried to come to the defense of my
conclusion that work should be done on man-machine
cooperation, Seymour Papert of M.I.T. responded:
"I protest vehemently against crediting Dreyfus with
any good. To state that you can associate yourself
with one of his conclusions is unprincipled. Dreyfus'
concept of coupling men with machines is based on
thorough misunderstanding of the problems and has nothing
in common with any good statement that might go by
the same words."

The causes of this panic-reaction should themselves be
investigated, but that is a job for psychology [;->],
or the sociology of knowledge. However, in anticipation
of the impending outrage I want to make absolutely clear
from the outset that what I am criticizing is the
implicit and explicit philosophical assumptions of
Simon and Minsky and their co-workers, not their
technical work. True, their philosophical prejudices
and naivete distort their own evaluation of their
results, but this in no way detracts from the
importance and value of their research on specific
techniques such as list structures, and on more
general problems. . .

An artifact could replace men in some tasks -- for
example, those involved in exploring planets --
without performing the way human beings would and
without exhibiting human flexibility. Research in
this area is not wasted or foolish, although a balanced
view of what can and cannot be expected of such an
artifact would certainly be aided by a little
philosophical perspective.


In the "Introduction to the MIT Press Edition" (pp. ix-xiii)
Dreyfus gives a summary of his work and reveals
the source of the acronym "GOFAI":

Almost half a century ago [as of 1992] computer pioneer
Alan Turing suggested that a high-speed digital
computer, programmed with rules and facts, might exhibit
intelligent behavior. Thus was born the field later
called artificial intelligence (AI). After fifty
years of effort, however, it is now clear to all but
a few diehards that this attempt to produce artificial
intelligence has failed. This failure does not mean
this sort of AI is impossible; no one has been able
to come up with a negative proof. Rather, it has
turned out that, for the time being at least, the
research program based on the assumption that human
beings produce intelligence using facts and rules
has reached a dead end, and there is no reason to
think it could ever succeed. Indeed, what John
Haugeland has called Good Old-Fashioned AI (GOFAI)
is a paradigm case of what philosophers of science
call a degenerating research program.

A degenerating research program, as defined by Imre
Lakatos, is a scientific enterprise that starts out
with great promise, offering a new approach that
leads to impressive results in a limited domain.
Almost inevitably researchers will want to try to apply
the approach more broadly, starting with problems
that are in some way similar to the original one.
As long as it succeeds, the research program expands
and attracts followers. If, however, researchers
start encountering unexpected but important phenomena
that consistently resist the new techniques, the
program will stagnate, and researchers will abandon
it as soon as a progressive alternative approach
becomes available.

We can see this very pattern in the history of GOFAI.
The work began auspiciously with Allen Newell and
Herbert Simon's work at RAND. In the late 1950's,
Newell and Simon proved that computers could do more
than calculate. They demonstrated that a computer's
strings of bits could be made to stand for anything,
including features of the real world, and that its
programs could be used as rules for relating these
features. The structure of an expression in the
computer, then, could represent a state of affairs
in the world whose features had the same structure,
and the computer could serve as a physical symbol
system storing and manipulating representations.
In this way, Newell and Simon claimed, computers
could be used to simulate important aspects of intelligence.
Thus the information-processing model of the mind
was born. . .

My work from 1965 on can be seen in retrospect as a
repeatedly revised attempt to justify my intuition,
based on my study of Martin Heidegger, Maurice
Merleau-Ponty, and the later Wittgenstein, that the
GOFAI research program would eventually fail.
My first take on the inherent difficulties of
the symbolic information-processing model of the
mind was that our sense of relevance was holistic and
required involvement in ongoing activity,
whereas symbol representations were atomistic and
totally detached from such activity. By the
time of the second edition of _What Computers Can't
Do_ in 1979, the problem of representing what I
had vaguely been referring to as the holistic
context was beginning to be perceived by AI researchers
as a serious obstacle. In my new introduction I
therefore tried to show that what they called the
commonsense-knowledge problem was not really a problem
about how to represent **knowledge**; rather, the
everyday commonsense background understanding that
allows us to experience what is currently relevant
as we deal with things and people is a kind of
**know-how**. The problem precisely was that this
know-how, along with all the interests, feelings,
motivations, and bodily capacities that go to make a
human being, would have had to be conveyed to the
computer as knowledge -- as a huge and complex belief
system -- and making our inarticulate, preconceptual
background understanding of what it is like to
be a human being explicit in a symbolic representation
seemed to me a hopeless task.

For this reason I doubted the commonsense-knowledge
problem could be solved by GOFAI techniques, but I could
not justify my suspicion that the know-how that made up
the background of common sense could not itself be
represented by data structures made up of facts and
rules. . .

When _Mind Over Machine_ came out, however, Stuart
[Dreyfus] and I faced the same objection that had been
raised against my appeal to holism in _What Computers
Can't Do_. You may have described how expertise
**feels**, our critics said, but our only way of
**explaining** the production of intelligent behavior
is by using symbolic representations, and so
that must be the underlying causal mechanism. Newell
and Simon resort to this type of defense of
symbolic AI: "The principal body of evidence for
the symbol-system hypothesis. . . is negative evidence:
the absence of specific competing hypotheses as to
how intelligent activity might be accomplished whether
by man or by machine [sounds like a defense of
Creationism!]"

In order to respond to this "what else could it be?" defense
of the physical symbol system research program, we
appealed in _Mind Over Machine_ to a somewhat vague and
implausible idea that the brain might store holograms
of situations paired with appropriate responses,
allowing it to respond to situations in the way it had
successfully responded to similar situations in the
past. The crucial idea was that in hologram matching
one had a model of similarity recognition that did not
require analysis of the similarity of two patterns
in terms of a set of common features. But the model
was not convincing. No one had found anything
resembling holograms in the brain.
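
To make the "facts and rules" picture concrete: what Newell and
Simon's physical-symbol-system hypothesis licenses is roughly this
sort of thing -- symbol structures standing for features of the
world, and program rules relating them. A toy, purely illustrative
sketch in Python (the facts, the rule, and the matching scheme are
my own invention, not taken from any actual GOFAI system):

    # Facts as symbol structures; one rule relating them:
    # "if X is_a human, then X is_a mortal."
    facts = {("is_a", "socrates", "human")}
    rules = [(("is_a", "?x", "human"), ("is_a", "?x", "mortal"))]

    def forward_chain(facts, rules):
        # Keep applying every rule to every fact until nothing
        # new can be derived. "?x" is a variable standing for
        # whatever appears in the subject position of a fact.
        changed = True
        while changed:
            changed = False
            for (pred, _var, obj), (cpred, _cvar, cobj) in rules:
                for (fpred, fsubj, fobj) in list(facts):
                    if fpred == pred and fobj == obj:
                        new = (cpred, fsubj, cobj)
                        if new not in facts:
                            facts.add(new)
                            changed = True
        return facts

    print(forward_chain(facts, rules))
    # includes ('is_a', 'socrates', 'mortal') alongside the
    # original fact

Dreyfus's point, as I read him, is not that such systems can't be
built -- obviously they can -- but that the everyday background
know-how which tells us **which** facts and rules are relevant in
the first place resists being captured as yet more facts and rules.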


Minsky gets the brunt of Dreyfus' exasperation and sarcasm.

Introduction to the Revised Edition, pp. 34-36:

In 1972, drawing on Husserl's phenomenological analysis,
I pointed out that it was a major weakness of AI that no
programs made use of expectations. Instead of
modeling intelligence as a passive receiving of
context-free facts into a structure of already stored
data, Husserl thinks of intelligence as a context-
determined, goal-directed activity -- as a **search**
for anticipated facts. For him the _noema_, or
mental representation of any type of object, provides
a context or "inner horizon" of expectations or
"predelineations" for structuring the incoming data. . .

The noema is thus a symbolic description of all the
features which can be expected with certainty in exploring
a certain type of object -- features which remain
"inviolably the same. . ." . . .

During twenty years of trying to spell out the components
of the noema of everyday objects, Husserl found that
he had to include more and more of what he called the
"outer horizon," a subject's total knowledge of the
world. . .

He sadly concluded at the age of seventy-five that he was
a "perpetual beginner" and that phenomenology was an
"infinite task" -- and even that may be too optimistic. . .

There are hints in an unpublished early draft of the
frame paper that Minsky has embarked on the same misguided
"infinite task" that eventually overwhelmed Husserl. . .

Minsky's naivete and faith are astonishing. Philosophers
from Plato to Husserl, who uncovered all these problems
and more, have carried on serious epistemological
research in this area for two thousand years without
notable success. Moreover, the list Minsky includes in
this passage deals only with natural objects, and
their positions and interactions. As Husserl saw, and
as I argue. . ., intelligent behavior also presupposes
a background of cultural practices and institutions. . .

Minsky seems oblivious to the hand-waving optimism of
his proposal that programmers rush in where philosophers
such as Heidegger fear to tread, and simply make explicit
the totality of human practices which pervade our lives
as water encompasses the life of a fish.