Sunday, March 09, 2008

Giulio Demands Clarifications, And I Provide Them. Nothing Changes.

Upgraded and adapted from the Moot, just for shits and giggles. (Sorry for all the transhumanist discussion this weekend, to those of you who read Amor Mundi for its discussion of other topics -- frankly I'd rather talk about Robot Cultists than the current Obama versus Clinton follies. Talk about flinging feces! Ugh.)

Giulio: [I]t appears to me that, behind the poop jokes, you really mean to celebrate the weakness of the body against any hopes and prospects for improvement.

If that's how it "appears" to you, then this suggests that you certainly read carelessly and probably think more carelessly still.

[L]et's suppose radical anti-aging and life extension medical technologies are developed within the next, say, 50 years.

Such suppositions are literally worse than useless. Let's suppose Jeebus raises the dead in a century, that unemployment numbers go down for four months thirteen years from now, and that white calf-length anti-gravity boots are distributed to everybody whose last name begins in letters O through Z in fifty years' time.

Let's also suppose that mind backup technologies are developed to offer the additional possibility to reload a mind lost to an accident to a new biological or robotic body.

Not only will I not suppose this, but I think this is a sentence filled with outright incoherent statements. To speak of a "mind backup" in this way is very likely to not understand what a mind is, and to speak of a "robotic body" in this way is very likely to not understand what a body is.

Would you call this a good thing or a bad thing?

Apart from the uselessness of some of this frame and the logical impossibility of the rest, well, I would have to say of this, as I would say of any technoscientific change or technodevelopmental outcome, that it is better the fairer and more democratic the actual distribution of risks, costs, and benefits attending its development and accomplishment.

Please answer just good or bad.

I won't be stupid for you. Sorry.

47 comments:

  1. Dale said:

    Sorry for all the transhumanist discussion this weekend, to those of you who read Amor Mundi for its discussion of other topics -- frankly I'd rather talk about Robot Cultists than the current Obama versus Clinton follies.

    Although one could argue that there are other topics you could discuss besides Robot Cultism or the Clinton smear campaign against Obama -- like, I don't know, the recent study that details the Bush administration's war on science, or the recent study that shows how reporters often can't provide scientific evidence for claims they repeatedly report as factual -- I do think that some H+ thinkers (Bostrom, Hughes, More, Vita-More, Sandberg, Vaj) are worth engaging and critiquing. But giving cranks on the lunatic fringe of H+ undeserved attention is a waste of time which only serves to give them the hand they need to climb out of obscurity...

  2. Once upon a time a small klatch of silly white guys (mostly) who thought they were the smartest people in the room while saying the most idiotic imaginable things were a bunch of obscure marginal sociopathic cranks, but because what they had to say resonated with the agenda of certain incumbent interests and complemented opportunistically certain larger sociocultural forces accidentally afoot in the world at that particular juncture they acquired a power nobody could have imagined and used it to kill hundreds and hundreds of thousands of innocent people and bring the world to the utter brink of destruction in more ways than one. We call them "Neocons." One of them, Fukuyama, likes to call transhumanists the world's most dangerous ideologues. There is much that is wrong in what he is talking about when he says this, but one also does very well to remember the adage: It takes one to know one. Better to nip this thing in the bud than cry later, is what I'm thinking more and more these days.

  3. Dale, I agree with you which is why I think we should focus on the relatively influential H+ thinkers that are worth engaging and critiquing and whose replies might be intellectually challenging rather than the rants of H+ clowns that seem to only provide us with a good belly laugh.

  4. Anonymous said (3:58 PM):

    Before you can upload a mind you've got to define it, and this hasn't been done. This is where the >Hists make their mistake. There's the facile assumption that a mind is just a really, really sooper computer implemented in bio-gunk: abstract away the information and you've got the mind, which can then be transferred to some other substrate. The problem is that no one really knows right now if any of this is correct, and the whole problem quickly leaves the technical arena and drifts into metaphysics (is the guy who wakes up in the simulation really "you"?). Mind could even turn out to be quantized in some fundamental way that is beyond computation, or to require a quantum computer, and thus to have a pattern that can never be non-destructively copied. I would say the computational model needs more proof of concept before we embrace it wholesale and talk about how Moore's law will give us human-level AIs in some few years.
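
    Just to show how empty the Moore's law talk is, here is the whole "argument" as a back-of-the-envelope calculation (a sketch in Python; every constant in it is an assumption I've made up for illustration, the sort of number the enthusiasts wave around, not anything established):

        import math

        # All three numbers below are illustrative assumptions, not facts.
        ops_available_2008 = 1e14  # ops/sec supposedly available to a big project in 2008
        brain_ops_guess    = 1e16  # one popular guess at the brain's "ops/sec"
        doubling_years     = 2.0   # the classic Moore's-law doubling period

        # Doublings needed to close the gap, converted to calendar years.
        doublings = math.log2(brain_ops_guess / ops_available_2008)
        print(f"'Human-level' hardware arrives around {2008 + doublings * doubling_years:.0f}")

    Run it and you get a suspiciously precise year in the early 2020s -- which tells you everything about the made-up inputs and nothing whatsoever about minds.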

  5. I see what you are saying, Vladimir, but I actually disagree with it. First of all, I have heard lately from some of the non-transhumanists who read Amor Mundi and they report something that squares very much with my own feelings on this topic: Making fun of transhumanism is often funny and usually fun.

    I laugh a lot when I write posts for Amor Mundi and I like to think that people laugh a lot in reading them. I do try to offer insight and provocation and inspiration and outrage as well, where my poor powers can manage the thing, but I honestly write a lot about things that crack me up. I like to read blogs that are written in that spirit as well, like Eschaton and Sadly No! and James Wolcott's blog (none of which I claim to be equal to).

    It's true I take transhumanism rather more seriously than others do as a symptom of a certain default techno-transcendentalism in late-industrial-capitalist North Atlantic culture generally and of certain neoliberal developmental assumptions more specifically -- and it is also true that I know some transhumanist-identified folks and debate the finer points of their arguments in a somewhat more collegial way as well -- and all of this complicates the joy of ridiculing the most egregious weirdnesses of the viewpoint. But the fact is I spend a lot of my time puncturing pretensions (and I hardly excuse myself from the dartboard) and then laughing my ass off.

    So, I say there is a lot to be said for the value of a big belly laugh and I am happy to contribute to the joyful measure of them available in the world, especially when they are directed at Bushite Killer Clowns, Libertopian Free Marketeers, Randroids, Would-Be Theocrats, Biocons, and Robot Cultists.

    But as for your point that there are more intellectually challenging works that are transhumanist-identified out there, it seems to me that the clownishness and dangerousness I ridicule is closer to the ticking Tin-Man heart of transhumanism than they are, and that the clowns like to lean on more nuanced figures to lend undeserved legitimacy to their dumb dangerous Superlative project, a move I have absolutely no interest in abetting.

    Rather, the more interesting figures should be persuaded to jettison the clowns and reactionaries clinging to their pantslegs or own them definitively and pay the price. I'm happy to do my part to force that choice. Everybody benefits from this, ultimately including, as it happens, the more interesting intellectuals you would prefer I engage on their own terms (which, I must say, I also do here and there, anyway).

  6. Also, what Greg said. Yes-indeedy.

  7. Dale said:

    Rather, the more interesting figures should be persuaded to jettison the clowns and reactionaries clinging to their pantslegs or own them definitively and pay the price. I'm happy to do my part to force that choice. Everybody benefits from this, ultimately including, as it happens, the more interesting intellectuals you would prefer I engage on their own terms.

    I get it now. I won't bring this up again. ;)

  8. "Greg in Portland" said:

    > Before you can upload a mind you've got to define it. . .
    > I would say the computational model needs more proof of concept
    > before we embrace it wholesale and talk about how Moore's law
    > will give us human level AIs in some few years.

    Some people would say the "computational model" of mind has
    already had all the "proof of concept" it deserves over the
    last 50 or 60 years, and that the only thing it needs now
    is the dumpster. Some people started saying this almost 40
    years ago. More are saying it now. In a way, the more
    naive transhumanists are to some of these ideas what modern
    American Christian fundamentalists are to Christianity,
    whose intellectual heyday is long over.

    de Thezier wrote:

    > We should focus on the relatively influential H+ thinkers. . .
    > whose replies might be intellectually challenging. . .

    I think these are disjoint groups of people (if the latter exist
    at all).

    > [G]iving cranks on the lunatic fringe of H+ undeserved attention
    > is a waste of time which only serves to give them the hand they
    > need to climb out of obscurity...

    I'd say keeping your mouth shut gives them the tacit legitimacy they
    need to climb out of obscurity. You think the Scientologists
    **wanted** "undeserved" (critical) attention to help them take
    over the world?

    Dale wrote:

    > I have heard lately from some of the non-transhumanists
    > who read Amor Mundi and they report [that]. . . [m]aking
    > fun of transhumanism is often funny and usually fun. . .
    >
    > It's true I take transhumanism rather more seriously than
    > others do as a symptom of a certain default techno-transcendentalism
    > in late-industrial-capitalist North Atlantic culture. . .

    It's a symptom of something, all right.

    Well, here's a book I was browsing in today that's sometimes
    (unintentionally) funny. . .

    From _Computer Power and Human Reason: From Judgment to Calculation_
    by Joseph Weizenbaum, W. H. Freeman & Co., 1976, Chapter 4,
    "Science and the Compulsive Programmer"

    p. 115

    "The computer programmer. . . is a creator of universes for which
    he alone is a lawgiver. So, of course, is the designer of any
    game. But universes of virtually unlimited complexity can be
    created in the form of computer programs. . . No playwright,
    no stage director, no emperor, however powerful, has ever
    exercised such absolute authority to arrange a stage or field
    of battle and to command such unswervingly dutiful actors or
    troops.

    One would have to be astonished if Lord Acton's observation that
    power corrupts were not to apply in an environment in which
    omnipotence is so easily achievable. . . [T]he corruption
    evoked by the computer programmer's omnipotence manifests itself
    in a form that is instructive in a domain far larger than the
    immediate environment of the computer. To understand it, we
    will have to take a look at a mental disorder that, while actually
    very old, appears to have been transformed by the computer into
    a new genus: the compulsion to program.

    Wherever computer centers have become established, that is to say,
    in countless places in the United States, as well as in virtually
    all other industrial regions of the world, bright young men of
    disheveled appearance, often with sunken glowing eyes, can be seen
    sitting at computer consoles, their arms tensed and waiting to
    fire their fingers, already poised to strike, at the buttons and
    keys on which their attention seems to be riveted as a gambler's on
    the rolling dice. When not so transfixed, they often sit at
    tables strewn with computer printouts over which they pore like
    possessed students of a cabalistic text. They work until they nearly
    drop, twenty, thirty hours at a time. Their food, if they
    arrange it, is brought to them: coffee, Cokes, sandwiches. If
    possible, they sleep on cots near the computer. But only for
    a few hours -- then back to the console or the printouts. Their
    rumpled clothes, their unwashed and unshaven faces, and their
    uncombed hair all testify that they are oblivious to their
    bodies and to the world in which they move. They exist, at least
    when so engaged, only through and for the computers. These
    are computer bums, compulsive programmers. They are an international
    phenomenon. . .

    The compulsive programmer spends all the time he can working on one
    of his big projects. . . His grandiose projects. . . have the quality
    of illusions, indeed, of illusions of grandeur. He will construct
    the one grand system in which all other experts will soon write
    their systems.

    (It has to be said that not all hackers are pathologically compulsive
    programmers. Indeed, were it not for the often, in its own terms,
    highly creative labor of people who proudly claim the title
    "hacker," few of today's sophisticated computer time-sharing systems,
    computer language translators, computer graphics systems, etc.,
    would exist.) [ :-0 ]

    . . .

    The psychological situation the compulsive programmer finds himself
    in while so engaged is strongly determined by two apparently
    opposing facts: first, he knows that he can make the computer do
    anything he wants it to do; and second, the computer constantly
    displays undeniable evidence of his failure to him. It reproaches
    him. There is no escaping this bind. . . The computer challenges
    his power, his knowledge. . .

    It must be emphasized that the portrait I have drawn is instantly
    recognizable at computing installations all over the world. It
    represents a psychopathology that is far less ambiguous than, say,
    the milder forms of schizophrenia or paranoia. At the same time,
    it represents an extremely developed form of a disorder that afflicts
    much of our society. . .

    The magical world inhabited by the compulsive gambler is no different
    in principle than that in which others, equally driven by grandiose
    fantasies, attempt to realize their dreams of power. Astrology,
    for example. . .

    The gambler constantly defies the laws of probability; he refuses
    to recognize their operational significance. He therefore cannot
    permit them to become a kernel of a realistic insight. A particular
    program may be foundering on deep structural, mathematical, or
    linguistic difficulties about which relevant theories exist.
    But the compulsive programmer meets most manifestations of trouble
    with still more programming tricks, and thus, like the gambler,
    refuses to allow them to nucleate relevant theories in his mind.
    Compulsive programmers are notorious for not reading the literature
    of the substantive fields in which they are nominally working. [ :-0 ].

    These mechanisms, called by Polanyi circularity, self-expansion,
    and suppressed nucleation, constitute the main defensive armamentarium
    of the true adherent of magical systems of thought, and particularly
    the compulsive programmer. Psychiatric literature informs us that
    this pathology deeply involves fantasies of omnipotence. The
    conviction that one is all-powerful, however, cannot rest; it must
    constantly be verified by tests. The test of power is control.
    The test of absolute power is certain and absolute control.
    When dealing with the compulsive programmer, we are therefore also
    dealing with his need to control and his need for certainty.

    The passion for certainty is, of course, also one of the great
    cornerstones of science, philosophy, and religion. And the quest
    for control is inherent in all technology. Indeed, the reason
    we are so interested in the compulsive programmer is that we
    see no discontinuity between his pathological motives and behavior
    and those of the modern scientist and technologist generally.
    The compulsive programmer is merely the proverbial mad scientist who
    has been given a theater, the computer, in which he can, and does,
    play out his fantasies.

    Let us reconsider Bergler's three observations about gamblers. First,
    the gambler is subjectively certain that he will win. So is the
    compulsive programmer -- only he, having created his own world on
    a universal machine, has some foundation in reality for his certainty.
    Scientists, with some exceptions, share the same faith: what science
    has not done, it has not **yet** done; the questions that science
    has not answered, it has not **yet** answered. Second, the gambler
    has an unbounded faith in his own cleverness. Well?! Third, the
    gambler knows that life itself is nothing but a gamble. Similarly,
    the compulsive programmer is convinced that life is nothing but
    a program running on an enormous computer, and that therefore every
    aspect of life can ultimately be explained in programming terms.
    Many scientists (again, there are notable exceptions) also believe
    that every aspect of life and nature can finally be explained in
    exclusively scientific terms. Indeed, as Polanyi correctly points
    out, the stability of scientific beliefs is defended by the same devices
    that protect magical belief systems:

    'Any contradiction between a particular scientific notion and the
    facts of experience will be explained by other scientific notions;
    there is a ready reserve of possible scientific hypotheses available
    to explain any conceivable event. . . . **within science itself**,
    the stability of theories against experience is maintained by
    epicyclical reserves which suppress alternative conceptions in the
    germ.'

    Hence we can make out a continuum. At one of its extremes stand
    scientists and technologists who much resemble the compulsive
    programmer. At the other extreme are those scientists, humanists,
    philosophers, artists, and religionists who seek understanding, as
    whole persons and from all possible perspectives. The affairs of
    the world appear to be in the hands of technicians whose psychic
    constitutions approximate those of the former to a dangerous
    degree. Meanwhile the voices that speak the wisdom of the latter
    seem to be growing ever fainter. . .

    Science can proceed only by simplifying reality. The first step in
    its process of simplification is abstraction. And abstraction means
    leaving out of account all those empirical data which do not fit
    the particular conceptual framework within which science at the moment
    happens to be working, which, in other words, are not illuminated
    by the light of the particular lamp under which science happens
    to be looking for keys. Aldous Huxley remarked on this matter with
    considerable clarity:

    'Pragmatically, [scientists] are justified in acting in this odd and
    extremely arbitrary way; for by concentrating exclusively on the
    measurable aspects of such elements of experience as can be explained
    in terms of a causal system they have been able to achieve a great
    and ever increasing control over the energies of nature. But
    power is not the same thing as insight and, as a representation of
    reality, the scientific picture of the world is inadequate for the
    simple reason that science does not even profess to deal with experience
    as a whole, but only with certain aspects of it in certain contexts.
    All this is quite clearly understood by the more philosophically minded
    men of science. But unfortunately some scientists, many technicians,
    and most consumers of gadgets have lacked the time and the inclination
    to examine the philosophical foundations and background of the
    sciences. Consequently they tend to accept the world picture implicit
    in the theories of science as a complete and exhaustive account of
    reality; they tend to regard those aspects of experience which scientists
    leave out of account, because they are incompetent to deal with them,
    as being somehow less real than the aspects which science has arbitrarily
    chosen to abstract from out of the infinitely rich totality of given
    facts.'

    One of the most explicit statements of the way in which science
    deliberately and consciously plans to distort reality, and then goes
    on to accept that distortion as a 'complete and exhaustive' account,
    is that of the computer scientist Herbert A. Simon, concerning his
    own fundamental theoretical orientation:

    '**An ant, viewed as a behaving system, is quite simple. The apparent
    complexity of its behavior over time is largely a reflection of the
    complexity of the environment in which it finds itself**. . . . the truth
    or falsity of [this] hypothesis should be independent of whether
    ants, viewed more microscopically, are simple or complex systems.
    At the level of cells or molecules, ants are demonstrably complex;
    but these microscopic details of the inner environment may be largely
    irrelevant to the ant's behavior in relation to the outer environment.
    That is why an automaton, though completely different at the macroscopic
    level, might nevertheless simulate the ant's gross behavior. . . .

    I should like to explore this hypothesis, but with the word "man" substituted
    for "ant."

    **A man, viewed as a behaving system, is quite simple. The apparent
    complexity of his behavior over time is largely a reflection of the
    complexity of the environment in which he finds himself** . . . .
    I myself believe that the hypothesis holds even for the whole man.'

    With a single stroke of the pen, by simply substituting "man" for "ant,"
    the presumed irrelevancy of the microscopic details of the ant's
    inner environment to its behavior has been elevated to the irrelevancy
    of the whole man's inner environment to his behavior! Writing 23
    years before Simon, but as if Simon's words were ringing in his
    ears, Huxley states:

    'Because of the prestige of science as a source of power, and because
    of the general neglect of philosophy, the popular Weltanschauung of our
    times contains a large element of what may be called 'nothing-but'
    thinking. Human beings, it is more or less tacitly assumed, are
    nothing but bodies, animals, even machines. . . values are nothing
    but illusions that have somehow got themselves mixed up with our
    experience of the world; mental happenings are nothing but epiphenomena. . .
    spirituality is nothing but . . . and so on.'

    Except, of course, that here we are not dealing with the 'popular'
    Weltanschauung, but with that of one of the most prestigious of American
    scientists. Nor is Simon's assumption of what is irrelevant to the
    whole man's behavior 'more or less tacit'; to the contrary, he has,
    to his credit, made it quite explicit.

    Simon also provides us with an exceptionally clear and explicit description
    of how, and how thoroughly, the scientist prevents himself from
    crossing the boundary between the circle of light cast by his own
    presuppositions and the darkness beyond. In discussing how he went
    about testing the theses that underlie his hypothesis, i.e., that man
    is quite simple, etc., he writes:

    'I have surveyed some of the evidence from a range of human performances,
    particularly those that have been studied in the psychological
    laboratory.

    The behavior of human subjects in solving cryptarithmetic problems,
    in attaining concepts, in memorizing, in holding information in
    short-term memory, in processing visual stimuli, and in performing
    tasks that use natural languages, provides strong support for these
    theses. . . Generalizations about human thinking. . . are emerging
    from the experimental evidence. They are simple things, just as our
    hypotheses led us to expect. Moreover, though the picture will
    continue to be enlarged and clarified, we should not expect it to
    become essentially more complex. Only human pride argues that the
    apparent intricacies of our path stem from a quite different source
    than the intricacy of the ant's path.'

    . . .

    There is thus no chance whatever that Simon's hypothesis will be falsified
    in his or his colleagues' minds. The circle of light that determines
    and delimits his range of vision simply does not illuminate any areas
    in which questions of, say, values or subjectivity can possibly arise.
    Questions of that kind being, as they must be, entirely outside his
    universe of discourse, can therefore not lead him out of his conceptual
    framework, which, like all other magical explanatory systems, has
    a ready reserve of possible hypotheses available to explain any
    conceivable event.

    Almost the entire enterprise of modern science and technology is afflicted
    with the drunkard's search syndrome and with the myopic vision which
    is the direct result. But, as Huxley also pointed out, this myopia
    cannot sustain itself without being nourished by experiences of success.
    Science and technology are sustained by their translations into
    power and control. To the extent that computers and computation
    may be counted as part of science and technology, they feed at the
    same table. The extreme phenomenon of the compulsive programmer teaches
    us that computers have the power to sustain megalomaniac fantasies.
    But the power of the computer is merely an extreme version of the
    power that is inherent in all self-validating systems of thought.
    Perhaps we are beginning to understand that the abstract systems --
    the games computer people can generate in their infinite freedom
    from the constraints that delimit the dreams of workers in the real
    world -- may fail catastrophically when their rules are applied in
    earnest. We must also learn that the same danger is inherent in other
    magical systems that are equally detached from authentic human
    experience, and particularly in those sciences that insist they can
    capture the **whole man** in their abstract skeletal frameworks."


    ;->

    Well, that was 30-odd years ago, already.

  9. Re: "Such suppositions are literally worse than useless. Let's suppose Jeebus raises the dead in a century, that unemployment numbers go down for four months thirteen years from now, and white calf length anti-gravity boots are distributed to everybody whose last name begins in letters O through Z in fifty years' time."

    Nice nonsense; I hope your intention was to be funny.

    Contrary to Jeebus and anti-gravity boots, the possibility that "radical anti-aging and life extension medical technologies are developed within the next, say, 50 years" is regarded as very realistic by many experts.

    You might say that there are also many experts that do not regard it as very realistic, and you would certainly be correct. This is how science and engineering work: experts will hold and defend different opinions on the mechanics or feasibility of some proposed engineering objective, until a position is definitely proven wrong.

    I am not telling you that radical anti-aging and life extension medical technologies WILL BE developed within the next 50 years, but that radical anti-aging and life extension medical technologies MAY BE developed within the next 50 years. If you think this is BS, well, I concede that you may be right!

    But my non-negotiable point is that the possibility that "radical anti-aging and life extension medical technologies may be developed within the next 50 years" IS an engineering problem rather than a philosophical or metaphysical problem.

    Unless we are talking of sacred cows. If this is the case, there is no point in discussing because in my worldview there is no place for such a thing as a sacred cow. A sacred cow is just as good for burgers as any other cow.

  10. Giulio Prisco wrote:

    > Contrary to Jeebus and anti-gravity boots, the possibility
    > that "radical anti-aging and life extension medical
    > technologies are developed within the next, say, 50 years"
    > is regarded as very realistic by many experts.

    You know, not long ago I tuned the FM radio in my car
    to a random station, and came across a creationist giving
    a talk on some fundamentalist program. He made a
    statement such as "Of course, most scientists today
    realize that Darwinism is intellectually bankrupt,
    that it just doesn't hold water anymore." He said this
    in a completely confident, matter-of-fact tone of
    voice, and I could imagine his listeners nodding their
    heads sagely as he spoke.

    I get the same feeling reading Prisco's "regarded as very
    realistic by many experts" assertion. Are we speaking
    the same language? Can the English in that sentence
    mean the same thing to me as it did to the writer?
    Do I have a firm enough grasp of the words "realistic",
    "experts", or even "many"?

  11. Re: Jim's comment "I get the same feeling reading Prisco's "regarded as very realistic by many experts" assertion..."

    You know the script from this point on:

    1- I provide a long list of experts who think that "radical anti-aging and life extension medical technologies may be developed within the next 50 years".

    2- Someone says that the opinions of the experts in 1- do not count because they are cranks without scientific credentials.

    3- I, or someone else, provide for each of the experts in 1- a list of scientific credentials, doctoral degrees, awards received, peer-reviewed publications etc.

    4- Someone says that, despite the credentials in 3-, the experts in 1- are still cranks whose opinion does not count.

    5- And we start calling each other names as usual.

    Even if I am almost sure of the final outcome (5-), I look forward to doing this exercise and will start with the list 1- as soon as I receive a green light.

    But I am not sure that you really want to do this, as it seems to me that having a discussion on these terms means accepting my premise that aging and life extension are engineering problems rather than philosophical or metaphysical issues.

    I may be mistaken, and I will admit to a certain frustration at the apparent impossibility of understanding each other's point of view (for which of course I have to accept part of the blame), but I have formed the impression that no real discussion is wanted here.

  12. > I will admit to a certain frustration at the apparent
    > impossibility of understanding each other's point of view. . .
    > but I have formed the impression that no real discussion
    > is wanted here.

    Yes, many of us have formed that impression. We can at
    least agree on that.

  13. Giulio Prisco wrote:

    > 1- I provide a long list of experts. . .
    >
    > 2- Someone says that the. . . ["]experts["]. . . [do] not count
    > because they are cranks. . .
    >
    > 3- . . . 4- . . .
    >
    > 5- And we start calling each other names as usual.

    Yes, there is a deeper context here surrounding the questions
    of evidence and plausibility. Whole philosophical careers
    have been based on exploring these questions (see, e.g.,
    _Defending Science-Within Reason: Between Scientism and Cynicism_
    by Susan Haack
    http://www.amazon.com/exec/obidos/tg/detail/-/1591021170/
    for an "educated layman's" introduction to the field).

    One of the principles elucidated in Haack's book is that
    "Justification [of belief] is not exclusively one-directional, but
    involves pervasive relations of mutual support." Haack's
    explication of "pervasive relations of mutual support"
    relies largely on an analogy with how crossword puzzles are
    solved by fitting together clues and possible interlocking
    solutions.

    So yes, a list is going to be insufficient by itself.
    For you, there will be a whole network of (inexplicit)
    assumptions and beliefs buttressing your list, and for
    me those supporting assumptions and beliefs will be
    absent.

    It is **possible** to expose and criticize the whole
    submerged network, but you're not interested in doing
    that, either. You're just interested in PR.

  14. Re: "So yes, a list is going to be insufficient by itself. For you, there will be a whole network of (inexplicit) assumptions and beliefs buttressing your list, and for me those supporting assumptions and beliefs will be absent."

    I never said that a list of experts who support a given hypothesis is sufficient to prove it, or even to make it plausible. But I think such a list of experts (with credentials and all that) would constitute at least an indication that the hypothesis concerned should be considered before being rejected.

    However, I will agree with your statement as quoted. Just as, I am sure, you will agree that I can state exactly the same thing from my point of view: "For you, there will be a whole network of (inexplicit) assumptions and beliefs, and for me those supporting assumptions and beliefs will be absent".

    Re: "It is **possible** to expose and criticize the whole submerged network, but you're not interested in doing that, either. You're just interested in PR."

    Now suppose, just suppose, that this is not the case. How would you proceed to expose and criticize the whole submerged network?

  15. > How would you proceed to expose and criticize the
    > whole submerged network?

    Well, **I** would suggest things for you to read. (As I've
    done, by implication, by posting things here. Do you think
    yesterday's excerpt from Weizenbaum is **irrelevant** to this
    discussion? Do you think earlier excerpts from George
    Lakoff are irrelevant? Did you think the posts I made on
    WTA-talk were irrelevant? -- apparently you did, as you were
    part of the process that got me banned there.)

    Dale, being a professional academic, might write things himself
    that you could read. As he has done, in case you hadn't
    noticed. But, like the compulsive programmer described by
    Weizenbaum, "[you]. . . cannot permit them to become a kernel
    of a realistic insight. [The transhumanist] program may
    be foundering on deep structural, mathematical, or
    linguistic difficulties about which relevant theories exist[,]
    [b]ut . . . [you] refuse [to allow] them to nucleate relevant
    theories in [your] mind [a mechanism called by Polanyi. . .
    suppressed nucleation]. [You do] not read the literature
    of the substantive fields in which [you] are nominally working."

    Here's another long quote. Determination of its relevance to
    this discussion is left as an exercise for the reader. ;->

    http://michaelprescott.typepad.com/michael_prescotts_blog/2005/07/index.html
    ----------------
    The importance of being earnest

    One of the most useful intellectual skills to
    cultivate is the ability to enter into sympathetic
    engagement with any idea or argument you are considering.
    The only way to really understand what another person
    is saying is to listen closely, and the only way to
    listen closely is first to find, or at least pretend
    to find, some common ground between the other person
    and yourself. You need not maintain this sympathetic
    engagement, this provisional or illusionary agreement,
    for very long -- just long enough to absorb and
    grasp the points at issue.

    On the other hand, an inability or an unwillingness
    to drop your guard and make room, even temporarily,
    for an idea that you may find distasteful is the main
    impediment to really understanding what other people
    are saying and, therefore, to being able to effectively
    refute what they say.

    I thought of this today when flipping through a book
    that I admit to having bought in the expectation of
    a cheap laugh, and not for any intellectual merit
    that it may possess: Ayn Rand's Marginalia. That's
    right, her marginalia. In their continuing effort
    to publish every word that Ayn Rand ever committed
    to paper during the course of her 77 years, those
    in charge of her estate have published her private
    letters, her private journals, and yes, even the
    scribbled notes in the margins of books she was
    reading.

    Supposedly, these notes give us an insight into Rand's
    brilliant mind at work. No doubt this was editor
    Robert Mayhew's intention, and no doubt this is how
    the collection of jottings will be received by her
    more uncritical admirers. Not being an admirer of
    Ayn Rand myself, I had a rather different reaction.
    I was simply amazed -- and amused -- at how consistently
    she failed to understand the most basic points
    of the books in question.

    In his introduction, Mayhew says he did not include many
    of Rand's positive comments because they were generally
    insubstantial. This collection, then, is not a representative
    sample of her reactions to her reading material. Even
    bearing this in mind, I found the fury and frustrated
    rage exhibited by Rand in these remarks to be extraordinary.
    Hardly a page goes by without encountering angry
    exclamation points, and even double and triple exclamation
    points, sometimes augmented by question marks in comic-book
    fashion. ("!!?!") The terms "God-damn" and "bastard" are
    unimaginatively and gratingly repeated. Repeatedly I came
    across another burst of venom to the effect that whatever
    sentence or paragraph Rand had just read is the worst,
    most horrible, most abysmal, most corrupt, most despicable
    thing she has ever, ever, ever encountered!!! The woman
    lived in a simmering stew of her own bile.

    She came at the books she read, it would seem, not from the
    perspective of honestly and conscientiously trying to
    understand the author's position, but instead by assuming
    an adversarial and combative stance from the very start
    and then finding the most negative and malicious spin to
    put on the author's formulations. This approach enabled
    her to vent a considerable amount of rage. It does not
    seem to have aided her comprehension of the material in
    front of her.

    To me this is most obvious in her treatment of [C. S. Lewis's]
    The Abolition of Man, which, other than John Herman Randall's
    Aristotle and Ludwig von Mises's Bureaucracy, is the only book
    in this collection that I've read. (I suppose someday I should
    get around to reading Friedrich Hayek's The Road to Serfdom,
    which is considered a classic of free-market polemic -- though
    Rand of course finds it poisonously wrongheaded. The rest
    of the books, except for von Mises's Human Action and two books
    by Henry Hazlitt and John Hospers, are largely forgotten today.)

    Lewis's book is hardly a difficult read. It was aimed at an
    educated but not highbrow segment of the public, and his
    cautions on the potential misuse of science seem chillingly
    prescient in these days of genetic engineering, animal cloning,
    and embryonic stem cell research. He develops his case
    methodically, building on the premise that man's power over
    nature translates into the power of some men over others.
    Rand furiously contests this idea, though she makes precious
    little argument against it, relying mainly on personal
    invective against Lewis himself, who is variously characterized
    as an "abysmal bastard ... monster ... mediocrity ... bastard ...
    old fool ... incredible, medieval monstrosity ... lousy bastard ...
    drivelling non-entity ... God-damn, beaten mystic ...
    abysmal caricature ... bastard ... abysmal scum." (These
    quotes give you the tenor of the master philosopher's coolly
    analytical mind.)

    In one marginal note Rand scrawls, "This monster literally
    thinks that to give men new knowledge is to gain power (!)
    over them." Of course what Lewis says is that it is the holders
    and utilizers of new knowledge, who do not "give" it to
    others but use it for themselves, who gain de facto power
    over their fellow human beings. He is fearful of the emerging
    possibilities of "eugenics ... prenatal conditioning [and]
    education and propaganda based on a perfect applied psychology,"
    which may someday be wielded by an elite he calls the
    Conditioners. "Man's conquest of Nature, if the dreams of
    some scientific planners are realized, means the rule of a
    few hundreds of men over billions upon billions of men."
    And "the power of Man to make himself what he pleases
    means ... the power of some men to make other men what
    they please." Should this come to pass, "the man-moulders
    of the new age will be armed with the power of an omnicompetent
    state and an irresistible scientific technique ...
    They [will] know how to produce conscience and [will] decide
    what kind of conscience they will produce."*

    Lewis was clearly arguing against one possible vision
    of the future, the dystopia best fictionalized in Aldous Huxley's
    Brave New World. I find his points compelling, but of course
    they are debatable. In order to be properly debated, however,
    they must first be understood. Rand shows no interest in
    even trying to understand what Lewis is saying -- which is
    unfortunate, since recent headlines have made his concerns
    more relevant than ever.

    Earlier, Lewis develops the argument that basic moral values
    cannot be rationally defended but must be accepted as given,
    as part of the fabric of human nature, common to all
    communities and societies, though not always equally
    well-developed or implemented. This view, known as
    moral intuitionism, is a serious ethical position and
    one that has been defended by many prominent philosophers,
    especially in the late 19th and early 20th centuries.
    (It is enjoying something of a resurgence today.)
    Rand was vehemently opposed to this view, believing that
    it smacked of faith, which was, as she understood it,
    the archenemy of reason.

    Lewis argues that in the realm of values, as in other
    realms of thought, you must begin with certain fundamental
    assumptions; "you cannot go on 'explaining away' forever:
    you will find that you have explained explanation itself.
    You cannot go on 'seeing through' things forever."
    Rand furiously rejects this idea, and you can practically
    hear her pen stabbing at the page as she writes,
    "By 'seeing through,' he means: 'rational understanding!'
    Oh, BS! -- and total BS!" But Lewis's entire point is that
    "rational understanding" must start somewhere, just as
    geometry or set theory must begin with certain axioms
    that cannot themselves be proven by the system in question.
    It takes more than declarations of "BS!" to vanquish
    this argument -- or, for that matter, any argument.

    Rand is always telling the authors she reads what they
    "actually" are saying. Most of the time what she thinks
    they are "actually" saying bears no relationship whatsoever
    to anything they have written or even implied. With
    regard to Lewis, she says that his view boils down to the
    claim that the more we know, the more we are bound by
    reality: "Science shrinks the realm of his whim. (!!)"
    This is a thorough misunderstanding of Lewis's essay --
    an essay, let me repeat, aimed at the intelligent
    general reader and not requiring any special expertise
    to decipher.

    Thus, although Ayn Rand's Marginalia hardly demonstrates
    the genius that Rand's admirers believe she possessed,
    it does unintentionally serve an instructional purpose.
    It shows how important it is to enter into a temporary
    but sincere sympathy with an author whose view you are
    trying to understand -- that is, if you are trying to
    understand it at all. To put it another way, in reading,
    it's important to be earnest -- to embrace a spirit of
    respect, honest consideration, and goodwill. You'll find
    those qualities in most serious thinkers. You will not
    find them, I'm afraid, in Ayn Rand's marginal notes.
    ----------------

  16. Anonymous said (1:14 PM):

    If only it were as simple as agreeing to disagree, and being done with it. Sure, Transhumanists, of whatever stripe, *believe* in an unprecedented technological potential, believe in the theoretical right to change aspects of existence even as such practice remains far off in faerie lala land. And sure, I have quite a few f2f friends that, like Dale, say I am a fucking idiot for believing that mind uploads and AIs will emerge soon.

    If it were only that simple! But it isn't. Nanotech is all but certain, WHATEVER that means in practical terms. AI will emerge before 2050 (and most likely in some form before 2025), and a list of other remarkable technological shifts will produce a world at once dystopian, utopian, and most of all really weird.

    What I am worried about is whether or not the time stream I'll be in will be Apocalyptical in nature.

    Like Dale, I am absolutely certain the world is ruled by a completely unaccountable elite of rich and powerful bastards, of assorted ilk, who screw us all every single day, and exploit people to a miserable pulp the lower we go -- lower being poorer, more defenseless, more alienated and more undereducated.

    What I am seriously at odds with -- with Dale -- is the rosy-colored delusion that anything will change as soon as Dale makes sure everybody knows that selfish, self-interested idealists, of whatever stripe, are a bunch of superstitious nuts -- if they do not instantly move beyond the dreamy-eyed rhetoric and act according to Dale's dictates.

    I know dictates and I know dictators. I know good intentions, from whatever side of the debate. And I Do Not Care.

    I want humanity, ALL of humanity to get better, real soon.

    I am real poor, the poorest of the poor in one of the richest countries in the world. I am pretty sick too, close to puking 2 days of the week. So don't ever -- EVER -- classify me into some convenient category. I'd label myself as a Chomskyan and a socialist this week, but I'll stay a dreamy-eyed neo-Raelian who'll fucking hope and pray to Old Cthulhu if need be that the technological change that WILL come, the technological and environmental collapse that IS certain, and the conflict between rich fuckers and poor fuckers that IS unavoidable will not kill us all.

    Because we are destined for a world where technology and society are so strange, so weird, so dangerous and so marvelous we can't even put it into words yet. And even though any cinematic portrayal of that technology in poetic-sounding slogans (with *official extropian!* stickers on it) will most likely fail, I will tattoo on anyone's ass in bright red dayglo that We Will Have Transhuman Technologies in our lifetime.

    My concern is that we have a livable world BESIDES. Because right now, I am living in one less so every fucking day and I am getting really really fed up with that.

  17. I strongly sympathize with your circumstances and the outrage of your precarity and suffering in a world organized for the benefit of incumbent interests. Joining a Robot Cult will solve literally none of your problems, you can be sure, and I am not yet convinced that even well-meaning transhumanists aren't contributing to the consolidation of the power of the very incumbent elites who are making your life more miserable than it has to be.

  18. Giulio:

    I am not telling you that radical anti-aging and life extension medical technologies WILL BE developed within the next 50 years, but that radical anti-aging and life extension medical technologies MAY BE developed within the next 50 years. If you think this is BS, well, I concede that you may be right!

    As you well know, you are on record as proclaiming that radical anti-aging and life extension medical technologies WILL BE developed within the next 50 years, or making statements involving mind uploading and immortality that were even more hysterical. It wasn't a problem when you were preaching to the H+ choir, but it became one when I and other people started calling you on it. That's when you started using "MAY BE" to seem more reasonable.

    But my non-negotiable point is that the possibility that "radical anti-aging and life extension medical technologies may be developed within the next 50 years" IS an engineering problem rather than a philosophical or metaphysical problem.

    I actually agree (depending on what you mean by "radical") but, although you have, you can't and shouldn't say the same thing about "friendly artificial intelligence", "mind uploading" or "immortality" even if you added 50 more years to your timeline. And that's the point we have been trying to get through your thick head.

  19. Anonymous said (3:23 PM):

    jf: The woman lived in a simmering stew of her own bile.

    On Rand: But none of us really needed anyone to pore through her books to know that, did we? Many thanks to Peikoff, et al. for sharing, though.

    Khannea Suntzu: My concern is that we have a livable world BESIDES. Because right now, I am living in one less so every fucking day and I am getting really really fed up with that.

    Well, that's the basic problem, isn't it? I read various "science blogs," and occasionally the source papers they reference, every day. The world is getting cooler by the day, isn't it? There's nano-this and gene-therapy-that. Yet the shit piles higher every day for all but a privileged few. Those of us who still believe in the "Enlightenment project" and still think that progress should actually benefit those of us who earn less than the GNP of a small country wait in earnest for the benefits to arrive. Tick, tick...

  20. de Thezier wrote:

    > [Giulio Prisco wrote:]
    >
    > > But my non-negotiable point is that the possibility that "radical
    > > anti-aging and life extension medical technologies may be developed
    > > within the next 50 years" IS an engineering problem rather than a
    > > philosophical or metaphysical problem.
    >
    > I actually agree (depending what you mean by "radical"). . .

    The usage here is provocative (or would be, for a native English speaker
    who was doing it intentionally). Calling something an "engineering
    problem" is not neutral -- it **suggests** that the basic
    science is well understood, and that all that's left is designing the
    factory plumbing. I think many medical researchers would bridle
    at being called "engineers", and would protest that the basic science
    here is **far** from being finished, or even properly begun.

    But neither would I call medicine and biology "metaphysical" fields,
    except to the extent that all human discourse is "metaphysical".

    I do think that exaggerating the near-term prospects of medicine
    and biology is a symptom with a psychological angle, and that
    mobilizing cheerleaders, soliciting money, and attempting to
    influence public policy gives it a political angle as well.

    Again, to reiterate the question that Dale has asked many times before,
    what's to be gained by framing anything in terms of "radical anti-aging and life
    extension" except to whip up both exaggerated hopes and exaggerated
    fears? Are we to believe that changing the rules for grant
    applications to favor those researchers who claim to be pursuing "radical anti-aging
    and life extension" will improve the quality of the research being
    done, or hasten the arrival of these goals? I doubt it!

    And I do not, I repeat, I do not say this because I'm trying to
    undermine anybody's chances of achieving immortality, or because
    I'm a crypto-Christian, or because I'm agin' blasphemy, or anything
    as sophisticated as that. I just don't trust hype. I don't trust
    TV preachers, I don't trust Tony Robbins, I don't trust the Scientologists.
    Call me a cynic -- I'll cop to that.

    > but, although you have, you can't and shouldn't say the same thing about
    > "friendly artificial intelligence", "mind uploading" or "immortality"
    > even if you added 50 more years to your timeline. And that's the point
    > we have been trying to get through your thick head.

    Yeah, I'm afraid that's true. And I say that as somebody who's been reading SF,
    and reading about AI (and getting excited about Arthur C. Clarke's books,
    and Moravec's books, and Kurzweil's books) my whole life, and who came sniffing
    around the on-line transhumanist community more than a decade ago thinking
    (hoping) that they might actually know what they were talking about. I discovered
    that, alas, they don't. They may be young, they may have high IQs, they may
    be enthusiastic, they may be well-intentioned and idealistic (at least on the surface)
    but they just haven't done their homework. And there's a dark underbelly
    to the movement that smacks of Scientology, Jim Jones, and other cultishness
    and flim-flam that's as old as recorded history.


    "It ain't necessarily so
    It ain't necessarily so
    De things dat yo' liable to read in de Bible
    It ain't necessarily so

    . . .

    To get into Hebben don' snap for a sebben
    Live clean, don' have no fault
    Oh I takes dat gospel whenever it's pos'ble
    But wid a grain of salt

    Methus'lah lived nine hundred years
    Methus'lah lived nine hundred years
    But who calls dat livin' when no gal'll give in
    To no man what's nine hundred years?

    I'm preachin' dis sermon to show
    It ain't nessa, ain't nessa
    Ain't nessa, ain't nessa
    It ain't necessarily so."

  21. Re: "that's point we have trying to get through your thick head"

    You may get an answer, if of course you want one, if you reword leaving "thick head" out. I will ignore comments containing personal insults.

  22. Re: "Calling something an "engineering
    problem" is not neutral -- it **suggests** that the basic
    science is well understood, and that all that's left is designing the
    factory plumbing."

    Not my intention -- I am using "engineering problem" to indicate something that takes place within the physical world, is framed in a way compatible with the current scientific understanding of the physical world, and can be tackled by designing the factory plumbing once the basic science is understood well enough.

    Saying that the basic science will never be understood well enough smells of preconceived religious notions to me.

    I am ready to concede that radical life extension may not be achieved within the timeframe I usually refer to (the next 50 years), or that it may not be achieved within the next 1000 years, but I will not concede that it is in principle not achievable.

  23. > Saying that the basic science will never be understood well enough
    > smells of preconceived religious notions to me.

    Saying that "the basic science will never be understood well
    enough" smells of preconceived religious notions to me, too.

    But see, I didn't say that!

    I said the basic science is clearly not well enough understood
    **now** to say much of anything about when, or if, "radical
    life extension" etc. will be achievable. I said the basic
    science is not **yet** well enough understood even to be able
    to predict **when** the basic science will be well enough
    understood to predict when (**or** whether) those
    "transhumanist" goals will be achievable. I suggested that
    it's the business of sober scientists to leave the gee-whiz
    goals (ones that also "smell" of religious notions, or
    religious aspirations, dontcha know) to the SF authors,
    and concentrate on this or that genome, or this or that
    bit of biochemistry -- i.e., the business at hand.

    Getting all worked up about radical transformations that **must be**,
    **must be**, I tell you! near at hand -- Ray Kurzweil has
    seen them in the entrails of a PC -- smacks of religious
    enthusiasm to me (or worse).

    "Comfort ye, comfort ye my people, saith your God.
    Speak ye comfortably to Jerusalem, and cry unto her,
    that her warfare is accomplished, that her iniquity is pardoned.
    The voice of him that crieth in the wilderness: Prepare ye the
    way of the Lord, make straight in the desert a highway for our God."

    ReplyDelete
  24. BTW, speaking of usage, Giulio wrote:

    > I am impermeable to Carrico’s insults. . .

    I suspect he was thinking of the more idiomatically
    mainstream "I am **impervious** to Carrico's
    insults."

    "Impermeable" works too, sort of, but it conjures up
    visions of Prisco in a raincoat with -- poop? --
    running off it.

    ;->

    ReplyDelete
  25. Re: "I said the basic science is clearly not well enough understood
    **now** to say much of anything about when, or if, "radical
    life extension" etc. will be achievable. I said the basic
    science is not **yet** well enough understood even to be able
    to predict **when** the basic science will be well enough
    understood to predict when (**or** whether) those
    "transhumanist" goals will be achievable."

    I am more optimistic with respect to whether those "transhumanist" goals will be achievable, and also with respect to the timeline, but I can certainly agree that your formulation is correct.

    ReplyDelete
  26. Re: ""Impermeable" works too, sort of, but it conjures up visions of Prisco in a raincoat with -- poop? --
    running off it."

    I would indeed say "impermeable" in my mother tongue, the term being used to conjure a raincoat with liquid running off it. The liquid should be rainwater, but I can certainly see some merit in your alternative suggestion.

    ReplyDelete
  27. Giulio Prisco wrote:

    > I am more optimistic with respect to whether those "transhumanist"
    > goals will be achievable, and also with respect to the timeline,
    > but I can certainly agree that your formulation is correct.

    Now this is a non-trivial piece of common ground.

    I also agree that one is entitled to be more-or-less optimistic,
    pessimistic, or uncertain, about 'whether [and how soon] those
    "transhumanist" goals will be achievable'. Reasonable
    people can disagree about such things, or suspend judgment
    about such things.

    ReplyDelete
  28. First of all, I can't wait to go start lauding the oncoming anti-gravity boot revolution, since I'm one of the lucky group of people who get a pair! And now I can point people to Dale's blog as an expert in the field willing to entertain this idea.

    More to the point:

    I had several on-campus job interviews in the field of philosophy of mind - and I cannot even begin to tell you how often I ran into people asking questions that were either in principle incoherent or else simply re-edified the traditional cartesian split that I find incoherent (or at least as incoherent as ghosts and magic and such). The experience taught me a few things: that a fully embodied view of mind is still extremely hard for people to understand, after centuries (millennia?) of language and culture reinforcing the split, and it also showed me that even I can still be drawn into the discussion and stymied by the framing of certain questions with very implicit cartesian assumptions.

    All this is by way of saying - it's hard to blame people with incoherent views of mind and body for being incompetent when even philosophers, who are supposed to be specialists at uncovering assumptions and dragging out implicit concepts, still can't get a handle on the implications of embodied mind. I found that many other philosophers *of mind* today DO, in fact, understand why the mind-body split is incoherent, but only a handful have useful approaches to the problem after a lifetime of education that assumes such a split. I nearly hugged someone I was in direct competition with for a job when he began speaking of Andy Clark, Katherine Hayles, and Lakoff and Johnson.

    It's frustrating to understand the world in an entirely different way than most people you encounter. I wonder if this is what religion feels like, except with pure faith in the place of evidence and observation...

    ReplyDelete
  29. Robin [Zebrowski] wrote:

    > I had several on-campus job interviews in the field of
    > philosophy of mind - and I cannot even begin to tell you
    > how often I ran into people asking questions that were
    > either in principle incoherent or else simply re-edified
    > the traditional cartesian split that I find incoherent
    > (or at least as incoherent as ghosts and magic and such).
    > The experience taught me a few things: that a fully
    > embodied view of mind is still extremely hard for people
    > to understand, after centuries (millennia?) of language
    > and culture reinforcing the split, and it also showed me
    > that even I can still be drawn into the discussion and
    > stymied by the framing of certain questions with very
    > implicit cartesian assumptions.

    In a book I mentioned 7 years ago (my God!) on the Extropians'
    list, _Going Inside: A Tour Round a Single Moment of Consciousness_
    by John McCrone, 1999; Chapter 12 "Getting It Backwards",
    the author remarks:

    "[P]ersonally speaking, the biggest change for me
    was not how much new needed to be learnt, but how much that was
    old and deeply buried needed to be unlearnt. I thought my
    roundabout route into the subject would leave me well prepared.
    I spent most of the 1980s dividing my time between computer
    science and anthropology. Following at first-hand the attempts
    of technologists to build intelligent machines would be a good
    way of seeing where cognitive psychology fell short of the mark,
    while taking in the bigger picture -- looking at what is known
    about the human evolutionary story -- ought to highlight the
    purposes for which brains are really designed. It would be a
    pincer movement that should result in the known facts about the
    brain making more sense.

    Yet it took many years, many conversations, and many false starts
    to discover that the real problem was not mastering a mass of
    detail but making the right shift in viewpoint. Despite
    everything, a standard reductionist and computational outlook on
    life had taken deep root in my thinking, shaping what I expected
    to see and making it hard to appreciate anything or anyone who
    was not coming from the same direction. Getting the fundamentals
    of what dynamic systems were all about was easy enough, but then
    moving on from there to find some sort of balance between
    computational and dynamic thinking was extraordinarily difficult.
    Getting used to the idea of plastic structure or guided
    competitions needed plenty of mental gymnastics...

    [A]s I began to feel more at home with this more organic way of
    thinking, it also became plain how many others were groping their
    way to the same sort of accommodation -- psychologists and brain
    researchers who, because of the lack of an established vocabulary
    or stock of metaphors, had often sounded as if they were all
    talking about completely different things when, in fact, the same
    basic insights were driving their work."

    > I found that many other philosophers *of mind* today DO, in fact,
    > understand why the mind-body split is incoherent, but only a
    > handful have useful approaches to the problem after a lifetime
    > of education that assumes such a split.

    Gerald M. Edelman is a neuroscientist (getting old now, alas)
    who seems to understand these things pretty well. He quotes
    Unamuno's critique of Descartes as the first chapter epigraph in
    _Bright Air, Brilliant Fire: On the Matter of the Mind_.

    "cogito, ergo sum." -- Rene Descartes

    "The defect of Descartes' _Discourse on Method_ lies in his resolution
    to empty himself of himself, of Descartes, of the real man, the
    man of flesh and bone, the man who does not want to die, in order
    that he might be a mere thinker -- that is, an abstraction. But
    the real man returned and thrust himself into his philosophy. . .

    The truth is _sum, ergo cogito_ -- I am, therefore I think, although
    not everything that is thinks." -- Miguel de Unamuno

    ReplyDelete
  30. That McCrone excerpt is fantastic. Thanks for sharing it. I share his experience thoroughly!

    I'm a huge fan of Edelman's work. It's funny - Merleau-Ponty said "I am, therefore I think" back in the 1950s (I'd have to look up the chapter reference) but because he HAD a fully coherent viewpoint about the body and the world he was more or less entirely incomprehensible to most people (myself included) for a very long time.

    One thing I constantly try to remember when dealing with these issues is just how hard it is for people who don't spend their entire lives working on these problems to understand how completely wrong our commonsense assumptions about them really are.

    ReplyDelete
  31. Re: "I also agree that one is entitled to be more-or-less optimistic, pessimistic, or uncertain, about 'whether [and how soon] those "transhumanist" goals will be achievable'. Reasonable people can disagree about such things, or suspend judgment about such things."

    And this is exactly what I propose we do. Short of providing an actual example, there is not much I can say to persuade you to share my more optimistic outlook. Similarly, short of an actual proof of impossibility, there is not much you can say to persuade me to share your less optimistic outlook.

    Of course I admit that there is a certain bias on my side: I would _like_ to live in a universe where "transhumanist" goals are achievable and near, and tend to attribute importance to scientific findings that seem to suggest that this may be the case. And I am sure there is a similar bias on your and Dale's side. This is what tends to happen with deep core values: it is very difficult to talk people out of them.

    But the question I wish to ask is: does it really matter? If we were called to vote on one or another of the many issues discussed on this website, we would probably vote the same. Isn't this enough? Why can't we just agree to disagree on this point?

    I do not particularly wish to discuss transhumanism on this blog, because I know that the result is that I will insult and be insulted most of the time.

    But whenever I intervene on this blog to say something about other things (and, I repeat, _completely unrelated_ things), Dale quickly brings the discussion back to transhumanism. Which is _exactly the same thing_ as saying that your political opinions are stupid because you are black, or that her sporting preferences are stupid because she is gay.

    And this is an attitude that I find disgusting when it comes from someone who claims to represent the dem-left. So much so that, even though I believe that Dale is a very smart person and an excellent writer, I question his intellectual honesty.

    I hope I have made my point clearly enough, and I am wondering how many "you stupid transhumanist jerk" insults I will find when I come back later.

    ReplyDelete
  32. Giulio wrote:

    > [t]here is not much I can say to persuade you to share
    > my more optimistic outlook. Similarly, . . . there is not much
    > you can say to persuade me to share your less optimistic outlook. . .
    > This is what tend to happen with deep core values: it is
    > very difficult to talk people out of them.
    >
    > But the question I wish to ask is: does it really matter?
    > If we were called to vote on one or another of the many issues
    > discussed on this website, we would probably vote the same.
    > Isn't this enough? Why can't we just agree to disagree on this point?

    If it were entirely a matter of an individual's private opinions,
    then yes, of course we would (if we happened to be acquainted)
    just agree to disagree.

    But it isn't just a private outlook. Further, the transhumanist
    identity movement is no longer just a matter of a bunch of SF-geeks
    hanging out together on line. Now, it's a project. It has Principles.
    It has Institutes. It has Conferences. It's looking for Money.

    And what was "quirkiness" when it was just a bunch of geeks hanging
    out -- the sort of SF Con atmosphere wryly portrayed by Sharyn McCrumb
    in _Bimbos of the Death Sun_ -- now has far more unsavory potential:

    -- It misrepresents the current and likely future state of science.
    The enthusiasts are so noisy that they drown out the weaker voices
    of any experts who can be bothered to comment at all. This is a disservice
    to any members of the general public who might be listening in,
    and it's a distraction to people of genuine talent who might otherwise
    be doing genuine intellectual work.

    -- It's an attractor for certain personality types. Autistic spectrum,
    yes, but also Narcissistic Personality Disorder. These sorts of
    people live inside "warp bubbles" of self-generated reality, and
    tend to suck other people into their (self-aggrandizing) bubble
    universes.

    -- The default politics (usually disclaimed as "politics" at all)
    are both naive, and ugly. Some of this follows from the psychology,
    I guess.

    -- It's a fertile field for outright con artists and flim-flam men,
    out to make a buck.

    > But whenever I intervene on this blog to say something about
    > other things (and, I repeat, _completely unrelated_ things),
    > Dale quickly brings the discussion back to transhumanism. Which is
    > _exactly the same thing_ as saying that your political opinions
    > are stupid because you are black, or that her sporting preferences
    > are stupid because she is gay.

    You are on record (in other forums) as being among the more, er,
    enthusiastic (not to say "rabid") proselytizers for transhumanism.
    I think I even saw you say, once, something like "Well, what's wrong,
    then, with deliberately starting a religion, to spread transhumanist
    memes? Whatever works!"

    So why are you here, then? My answer -- you're here, whatever you
    may ostensibly be "intervening" about, as part of the damage control,
    the spinmeistering, in which folks like Michael Anissimov feel compelled
    to engage in response to Dale's **daring** to criticize transhumanism
    in public.

    You think Dale should stop criticizing transhumanism to spare your feelings?
    Give me a break!!

    ------------------

    "Exclusive: Kirstie Alley's Lawyers Demand That 'US Weekly' Fire Writer
    Who Cracked A Scientology Joke"
    http://defamer.com/351242

    ReplyDelete
  33. Re: "You think Dale should stop criticizing transhumanism to spare your feelings? Give me a break!!"

    My feelings are unimportant here, and actually I don't have hard feelings against Dale. If I had any, I could simply stop reading him and read other things. And this is his blog after all, of course he can write whatever he wants. Nobody forces me to read it.

    Re: "So why are you here, then? My answer -- you're here, whatever you
    may ostensibly be "intervening" about, as part of the damage control, the spinmeistering,"

    Now, this is certainly an astute observation and a possible explanation of my presence here. I don't think it is correct.

    But let's suppose it is.

    Then I would say that Dale is falling into the trap and helping a lot with the damage control and spinmeistering.

    Because, you see, what I wrote above is provably true. He has attacked and insulted people here, including but not limited to me, for comments on issues completely unrelated to transhumanism, and used their transhumanist affiliation as an "argument". See my analogy above, which is correct.

    This is available in the Amor Mundi archives for everyone to see. Next time I write "sunflowers are yellow" in a comment to a post about sunflowers, and Dale answers "shut up you transhumanist idiot, and screw you and your Robot God", he will give us more precious help in damage control by showing the world that _he_ is the one who does not know the difference between identity-based insults and rational arguments.

    Something for you guys to think about.

    ReplyDelete
  34. Robin wrote:

    > One thing I constantly try to remember when dealing with
    > these issues is just how hard it is for people who don't spend
    > their entire lives working on these problems to understand
    > how completely wrong our commonsense assumptions about them
    > really are.

    And even, sometimes (apparently), for people who **do**
    spend their entire lives working on them!

    Here's some more stuff you might enjoy, from my archives.
    You may well already be familiar with the Hubert Dreyfus
    book.

    ---------------------

    Subject: Let us calculate, Sir Marvin

    This book makes for very entertaining reading after
    my brush with the Extropians:
    _What Computers Still Can't Do: A Critique of
    Artificial Reason_, Hubert L. Dreyfus, MIT Press,
    1992

    Introduction, pp. 67-70:

    Since the Greeks invented logic and geometry, the idea that
    all reasoning might be reduced to some kind of calculation --
    so that all arguments could be settled once and for all --
    has fascinated most of the Western tradition's rigorous
    thinkers. Socrates was the first to give voice to this
    vision. The story of artificial intelligence might well
    begin around 450 B.C. when (according to Plato) Socrates
    demands of Euthyphro, a fellow Athenian who, in the name
    of piety, is about to turn in his own father for murder:
    "I want to know what is characteristic of piety which
    makes all actions pious. . . that I may have it to turn
    to, and to use as a standard whereby to judge your actions
    and those of other men." Socrates is asking Euthyphro
    for what modern computer theorists would call an "effective
    procedure," "a set of rules which tells us, from moment
    to moment, precisely how to behave."

    Plato generalized this demand for moral certainty into
    an epistemological demand. According to Plato, all
    knowledge must be stateable in explicit definitions
    which anyone could apply. If one could not state his
    know-how in terms of such explicit instructions -- if his
    knowing **how** could not be converted into knowing
    **that** -- it was not knowledge but mere belief.
    According to Plato, cooks, for example, who proceed by
    taste and intuition, and poets who work from inspiration,
    have no knowledge; what they do does not involve
    understanding and cannot be understood. More generally,
    what cannot be stated explicitly in precise instructions --
    all areas of human thought which require skill, intuition
    or a sense of tradition -- are relegated to some kind of
    arbitrary fumbling.

    But Plato was not fully a cyberneticist (although according
    to Norbert Wiener he was the first to use the term), for
    Plato was looking for **semantic** rather than **syntactic**
    criteria. His rules presupposed that the person understood
    the meanings of the constitutive terms. . . Thus Plato
    admits his instructions cannot be completely formalized.
    Similarly, a modern computer expert, Marvin Minsky, notes,
    after tentatively presenting a Platonic notion of effective
    procedure: "This attempt at definition is subject to
    the criticism that the **interpretation** of the rules
    is left to depend on some person or agent."

    Aristotle, who differed with Plato in this as in most questions
    concerning the application of theory to practice, noted
    with satisfaction that intuition was necessary to apply
    the Platonic rules: "Yet it is not easy to find a formula
    by which we may determine how far and up to what point a man
    may go wrong before he incurs blame. But this difficulty
    of definition is inherent in every object of perception;
    such questions of degree are bound up with circumstances
    of the individual case, where our only criterion **is**
    the perception."

    For the Platonic project to reach fulfillment one breakthrough
    is required: all appeal to intuition and judgment must be
    eliminated. As Galileo discovered that one could find
    a pure formalism for describing physical motion by ignoring
    secondary qualities and teleological considerations, so,
    one might suppose, a Galileo of human behavior might succeed
    in reducing all semantic considerations (appeal to meanings)
    to the techniques of syntactic (formal) manipulation.

    The belief that such a total formalization of knowledge must
    be possible soon came to dominate Western thought. It
    already expressed a basic moral and intellectual demand, and
    the success of physical science seemed to imply to sixteenth-
    century philosophers, as it still seems to suggest to
    thinkers such as Minsky, that the demand could be satisfied.
    Hobbes was the first to make explicit the syntactic conception
    of thought as calculation: "When a man **reasons**, he
    does nothing else but conceive a sum total from addition of
    parcels," he wrote, "for REASON . . . is nothing but
    reckoning. . ."

    It only remained to work out the univocal parcels of "bits"
    with which this purely syntactic calculator could operate;
    Leibniz, the inventor of the binary system, dedicated
    himself to working out the necessary unambiguous formal
    language.

    Leibniz thought he had found a universal and exact system of
    notation, an algebra, a symbolic language, a "universal
    characteristic" by means of which "we can assign to every
    object its determined characteristic number." In this way
    all concepts could be analyzed into a small number of
    original and undefined ideas; all knowledge could be
    expressed and brought together in one deductive system.
    On the basis of these numbers and the rules for their
    combination all problems could be solved and all controversies
    ended: "if someone would doubt my results," Leibniz
    said, "I would say to him: 'Let us calculate, Sir,' and
    thus by taking pen and ink, we should settle the
    question.'" . . .

    In one of his "grant proposals" -- his explanations of how
    he could reduce all thought to the manipulation of
    numbers if he had money enough and time -- Leibniz remarks:
    "[T]he most important observations and turns of skill
    in all sorts of trades and professions are as yet unwritten.
    This fact is proved by experience when passing from
    theory to practice when we desire to accomplish something.
    Of course, we can also write up this practice, since it
    is at bottom just another theory more complex and
    particular. . ."


    Chapter 6, "The Ontological Assumption", pp. 209-213

    Granting for the moment that all human knowledge can be
    analyzed as a list of objects and of facts about each,
    Minsky's analysis raises the problem of how such a large
    mass of facts is to be stored and accessed. . .

    And, indeed, little progress has been made toward
    solving the large data base problem. But, in spite of
    his own excellent objections, Minsky characteristically
    concludes: "But we had better be cautious about
    this caution itself, for it exposes us to a far more
    deadly temptation: to seek a fountain of pure intelligence.
    I see no reason to believe that intelligence can
    exist apart from a highly organized body of knowledge,
    models, and processes. The habit of our culture has
    always been to suppose that intelligence resides in
    some separated crystalline element, call it _consciousness_,
    _apprehension_, _insight_, _gestalt_, or what you
    will but this is merely to confound naming the problem
    with solving it. The problem-solving abilities of
    a highly intelligent person lies partly in his superior
    heuristics for managing his knowledge-structure and
    partly in the structure itself; these are probably
    somewhat inseparable. In any case, there is no reason to
    suppose that you can be intelligent except through the
    use of an adequate, particular, knowledge or model
    structure."

    . . . It is by no means obvious that in order to be
    intelligent human beings have somehow solved or needed to
    solve the large data base problem. The problem may itself
    be an artifact created by the fact that AI workers must
    operate with discrete elements. Human knowledge does
    not seem to be analyzable as an explicit description
    as Minsky would like to believe. . . To recognize an
    object as a chair, for example, means to understand its
    relation to other objects and to human beings. This
    involves a whole context of human activity of which
    the shape of our body, the institution of furniture, the
    inevitability of fatigue, constitute only a small part.
    And these factors in turn are no more isolable than is
    the chair. They all may get **their** meaning in
    the context of human activity of which they form a
    part. . .

    There is no reason, only an ontological commitment,
    which makes us suppose that all the facts we can make
    explicit about our situation are already unconsciously
    explicit in a "model structure," or that we
    could ever make our situation completely explicit
    even if we tried.

    Why does this assumption seem self-evident to Minsky?
    Why is he so unaware of the alternative that he takes
    the view that intelligence involves a "particular,
    knowledge or model structure," great systematic array
    of facts, as an axiom rather than as an hypothesis?
    Ironically, Minsky supposes that in announcing this
    axiom he is combating the tradition. "The habit of
    our culture has always been to suppose that intelligence
    resides in some separated crystalline element, call
    it consciousness, apprehension, insight, gestalt. . ."
    In fact, by supposing that the alternatives are either
    a well-structured body of facts, or some disembodied
    way of dealing with the facts, Minsky is so traditional
    that he can't even see the fundamental assumption
    that he shares with the whole of the philosophical
    tradition. In assuming that what is given are facts
    at all, Minsky is simply echoing a view which has been
    developing since Plato and has now become so ingrained
    as to **seem** self-evident.

    As we have seen, the goal of the philosophical
    tradition embedded in our culture is to eliminate
    uncertainty: moral, intellectual, and practical.
    Indeed, the demand that knowledge be expressed in
    terms of rules or definitions which can be applied
    without the risk of interpretation is already
    present in Plato, as is the belief in simple elements
    to which the rules apply. With Leibniz, the connection
    between the traditional idea of knowledge and the
    Minsky-like view that the world **must** be analyzable
    into discrete elements becomes explicit. According
    to Leibniz, in understanding we analyze concepts into
    more simple elements. In order to avoid a regress
    of simpler and simpler elements, then, there must
    be ultimate simples in terms of which all complex
    concepts can be understood. Moreover, if concepts
    are to apply to the world, there must be simples
    to which these elements correspond. Leibniz
    envisaged "a kind of alphabet of human thoughts"
    whose "characters must show, when they are used in
    demonstrations, some kind of connection, grouping
    and order which are also found in the objects."
    The empiricist tradition, too, is dominated by
    the idea of discrete elements of knowledge. For
    Hume, all experience is made up of impressions:
    isolable, determinate, atoms of experience.
    Intellectualist and empiricist schools converge
    in Russell's logical atomism, and the idea reaches
    its fullest expression in Wittgenstein's _Tractatus_,
    where the world is defined in terms of a set of
    atomic facts which can be expressed in logically
    independent propositions. This is the purest
    formulation of the ontological assumption, and
    the necessary precondition of all work in AI as long
    as researchers continue to suppose that the world
    must be represented as a structured set of descriptions
    which are themselves built up from primitives.
    Thus both philosophy and technology, in their appeal
    to primitives, continue to posit what Plato sought:
    a world in which the possibility of clarity, certainty
    and control is guaranteed; a world of data structures,
    decision theory, and automation.

    No sooner had this certainty finally been made fully
    explicit, however, than philosophers began to call it into
    question. Continental phenomenologists [uh-oh, here
    come those French. :-0] recognized it as the outcome
    of the philosophical tradition and tried to show its
    limitations. [Maurice] Merleau-Ponty calls the
    assumption that all that exists can be treated as
    determinate objects, the _prejuge du monde_,
    "presumption of commonsense." Heidegger calls it
    _rechnende Denken_ "calculating thought," and views
    it as the goal of philosophy, inevitably culminating
    in technology. . . In England, Wittgenstein less
    prophetically and more analytically recognized the
    impossibility of carrying through the ontological
    analysis proposed in his _Tractatus_ and became his
    own severest critic. . .

    But if the ontological assumption does not square with
    our experience, why does it have such power? Even if
    what gave impetus to the philosophical tradition was
    the demand that things be clear and simple so that
    we can understand and control them, if things are not
    so simple why persist in this optimism? What lends
    plausibility to this dream? As we have already seen. . .
    the myth is fostered by the success of modern
    physics. . .


    Chapter 8, "The Situation: Orderly Behavior Without
    Recourse to Rules" pp. 256-257

    In discussing problem solving and language translation
    we have come up against the threat of a regress of rules
    for determining relevance and significance. . . We
    must now turn directly to a description of the situation
    or context in order to give a fuller account of the
    unique way human beings are "in-the-world," and the
    special function this world serves in making orderly
    but nonrulelike behavior possible.

    To focus on this question it helps to bear in mind
    the opposing position. In discussing the epistemological
    assumption we saw that our philosophical tradition
    has come to assume that whatever is orderly can be
    formalized in terms of rules. This view has reached
    its most striking and dogmatic culmination in the
    conviction of AI workers that every form of intelligent
    behavior can be formalized. Minsky has even
    developed this dogma into a ridiculous but revealing
    theory of human free will. He is convinced that all
    regularities are rule governed. He therefore theorizes
    that our behavior is either completely arbitrary
    or it is regular and completely determined by the
    rules. As he puts it: "[W]henever a regularity is
    observed [in our behavior], its representation is
    transferred to the deterministic rule region." Otherwise
    our behavior is completely arbitrary and free.
    The possibility that our behavior might be regular
    but not rule governed never even enters his mind.


    Dreyfus points out that when a publication anticipating
    the first edition of his book came out in the late
    1960s, he was taken aback by the hysterical tone of
    the reactions to it:

    Introduction, pp. 86-87

    [T]he year following the publication of my first
    investigation of work in artificial intelligence,
    the RAND Corporation held a meeting of experts in
    computer science to discuss, among other topics,
    my report. Only an "expurgated" transcript of this
    meeting has been released to the public, but
    even there the tone of paranoia which pervaded the
    discussion is present on almost every page. My
    report is called "sinister," "dishonest,"
    "hilariously funny," and an "incredible misrepresentation
    of history." When, at one point, Dr. J. C. R. Licklider,
    then of IBM, tried to come to the defense of my
    conclusion that work should be done on man-machine
    cooperation, Seymour Papert of M.I.T. responded:
    "I protest vehemently against crediting Dreyfus with
    any good. To state that you can associate yourself
    with one of his conclusions is unprincipled. Dreyfus'
    concept of coupling men with machines is based on
    thorough misunderstanding of the problems and has nothing
    in common with any good statement that might go by
    the same words."

    The causes of this panic-reaction should themselves be
    investigated, but that is a job for psychology [;->],
    or the sociology of knowledge. However, in anticipation
    of the impending outrage I want to make absolutely clear
    from the outset that what I am criticizing is the
    implicit and explicit philosophical assumptions of
    Simon and Minsky and their co-workers, not their
    technical work. True, their philosophical prejudices
    and naivete distort their own evaluation of their
    results, but this in no way detracts from the
    importance and value of their research on specific
    techniques such as list structures, and on more
    general problems. . .

    An artifact could replace men in some tasks -- for
    example, those involved in exploring planets --
    without performing the way human beings would and
    without exhibiting human flexibility. Research in
    this area is not wasted or foolish, although a balanced
    view of what can and cannot be expected of such an
    artifact would certainly be aided by a little
    philosophical perspective.


    In the "Introduction to the MIT Press Edition" (pp. ix-xiii)
    Dreyfus gives a summary of his work and reveals
    the source of the acronym "GOFAI":

    Almost half a century ago [as of 1992] computer pioneer
    Alan Turing suggested that a high-speed digital
    computer, programmed with rules and facts, might exhibit
    intelligent behavior. Thus was born the field later
    called artificial intelligence (AI). After fifty
    years of effort, however, it is now clear to all but
    a few diehards that this attempt to produce artificial
    intelligence has failed. This failure does not mean
    this sort of AI is impossible; no one has been able
    to come up with a negative proof. Rather, it has
    turned out that, for the time being at least, the
    research program based on the assumption that human
    beings produce intelligence using facts and rules
    has reached a dead end, and there is no reason to
    think it could ever succeed. Indeed, what John
    Haugeland has called Good Old-Fashioned AI (GOFAI)
    is a paradigm case of what philosophers of science
    call a degenerating research program.

    A degenerating research program, as defined by Imre
    Lakatos, is a scientific enterprise that starts out
    with great promise, offering a new approach that
    leads to impressive results in a limited domain.
    Almost inevitably researchers will want to try to apply
    the approach more broadly, starting with problems
    that are in some way similar to the original one.
    As long as it succeeds, the research program expands
    and attracts followers. If, however, researchers
    start encountering unexpected but important phenomena
    that consistently resist the new techniques, the
    program will stagnate, and researchers will abandon
    it as soon as a progressive alternative approach
    becomes available.

    We can see this very pattern in the history of GOFAI.
    The work began auspiciously with Allen Newell and
    Herbert Simon's work at RAND. In the late 1950's,
    Newell and Simon proved that computers could do more
    than calculate. They demonstrated that a computer's
    strings of bits could be made to stand for anything,
    including features of the real world, and that its
    programs could be used as rules for relating these
    features. The structure of an expression in the
    computer, then, could represent a state of affairs
    in the world whose features had the same structure,
    and the computer could serve as a physical symbol
    system storing and manipulating representations.
    In this way, Newell and Simon claimed, computers
    could be used to simulate important aspects of intelligence.
    Thus the information-processing model of the mind
    was born. . .
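
    [An aside from me, not Dreyfus: the "physical symbol system" picture
    he describes is easy to caricature in a few lines -- facts as symbol
    structures, programs as rules relating them, "thinking" as purely
    syntactic manipulation. A minimal GOFAI-flavored sketch in Python
    (the facts and the rule are my own toy examples, nobody's actual
    system):

        # Facts are symbol structures; a rule maps a premise pattern
        # to a conclusion pattern about the same subject.
        facts = {("isa", "socrates", "man")}
        rules = [
            # if X is a man, then X is mortal
            (("isa", "man"), ("isa", "mortal")),
        ]

        def forward_chain(facts, rules):
            """Apply every rule to every fact until nothing new is derived."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for (p_rel, p_val), (c_rel, c_val) in rules:
                    for (f_rel, subj, f_val) in list(derived):
                        new_fact = (c_rel, subj, c_val)
                        if f_rel == p_rel and f_val == p_val and new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
            return derived

        print(forward_chain(facts, rules))
        # prints both the original fact and ('isa', 'socrates', 'mortal')

    Everything interesting -- what counts as a fact, which rules are
    relevant, when to stop -- has been decided in advance by the
    programmer, which is of course exactly Dreyfus's complaint.]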

    My work from 1965 on can be seen in retrospect as a
    repeatedly revised attempt to justify my intuition,
    based on my study of Martin Heidegger, Maurice
    Merleau-Ponty, and the later Wittgenstein, that the
    GOFAI research program would eventually fail.
    My first take on the inherent difficulties of
    the symbolic information-processing model of the
    mind was that our sense of relevance was holistic and
    required involvement in ongoing activity,
    whereas symbol representations were atomistic and
    totally detached from such activity. By the
    time of the second edition of _What Computers Can't
    Do_ in 1979, the problem of representing what I
    had vaguely been referring to as the holistic
    context was beginning to be perceived by AI researchers
    as a serious obstacle. In my new introduction I
    therefore tried to show that what they called the
    commonsense-knowledge problem was not really a problem
    about how to represent **knowledge**; rather, the
    everyday commonsense background understanding that
    allows us to experience what is currently relevant
    as we deal with things and people is a kind of
    **know-how**. The problem precisely was that this
    know-how, along with all the interests, feelings,
    motivations, and bodily capacities that go to make a
    human being, would have had to be conveyed to the
    computer as knowledge -- as a huge and complex belief
    system -- and making our inarticulate, preconceptual
    background understanding of what it is like to
    be a human being explicit in a symbolic representation
    seemed to me a hopeless task.

    For this reason I doubted the commonsense-knowledge
    problem could be solved by GOFAI techniques, but I could
    not justify my suspicion that the know-how that made up
    the background of common sense could not itself be
    represented by data structures made up of facts and
    rules. . .

    When _Mind Over Machine_ came out, however, Stuart
    [Dreyfus] and I faced the same objection that had been
    raised against my appeal to holism in _What Computers
    Can't Do_. You may have described how expertise
    **feels**, our critics said, but our only way of
    **explaining** the production of intelligent behavior
    is by using symbolic representations, and so
    that must be the underlying causal mechanism. Newell
    and Simon resort to this type of defense of
    symbolic AI: "The principal body of evidence for
    the symbol-system hypothesis. . . is negative evidence:
    the absence of specific competing hypotheses as to
    how intelligent activity might be accomplished whether
    by man or by machine [sounds like a defense of
    Creationism!]"

    In order to respond to this "what else could it be?" defense
    of the physical symbol system research program, we
    appealed in _Mind Over Machine_ to a somewhat vague and
    implausible idea that the brain might store holograms
    of situations paired with appropriate responses,
    allowing it to respond to situations in the way it had
    successfully responded to similar situations in the
    past. The crucial idea was that in hologram matching
    one had a model of similarity recognition that did not
    require analysis of the similarity of two patterns
    in terms of a set of common features. But the model
    was not convincing. No one had found anything
    resembling holograms in the brain.


    Minsky gets the brunt of Dreyfus' exasperation and sarcasm.

    Introduction to the Revised Edition, pp. 34-36:

    In 1972, drawing on Husserl's phenomenological analysis,
    I pointed out that it was a major weakness of AI that no
    programs made use of expectations. Instead of
    modeling intelligence as a passive receiving of
    context-free facts into a structure of already stored
    data, Husserl thinks of intelligence as a context-
    determined, goal-directed activity -- as a **search**
    for anticipated facts. For him the _noema_, or
    mental representation of any type of object, provides
    a context or "inner horizon" of expectations or
    "predelineations" for structuring the incoming data. . .

    The noema is thus a symbolic description of all the
    features which can be expected with certainty in exploring
    a certain type of object -- features which remain
    "inviolably the same. . ." . . .

    During twenty years of trying to spell out the components
    of the noema of everyday objects, Husserl found that
    he had to include more and more of what he called the
    "outer horizon," a subject's total knowledge of the
    world. . .

    He sadly concluded at the age of seventy-five that he was
    a "perpetual beginner" and that phenomenology was an
    "infinite task" -- and even that may be too optimistic. . .

    There are hints in an unpublished early draft of the
    frame paper that Minsky has embarked on the same misguided
    "infinite task" that eventually overwhelmed Husserl. . .

    Minsky's naivete and faith are astonishing. Philosophers
    from Plato to Husserl, who uncovered all these problems
    and more, have carried on serious epistemological
    research in this area for two thousand years without
    notable success. Moreover, the list Minsky includes in
    this passage deals only with natural objects, and
    their positions and interactions. As Husserl saw, and
    as I argue. . ., intelligent behavior also presupposes
    a background of cultural practices and institutions. . .

    Minsky seems oblivious to the hand-waving optimism of
    his proposal that programmers rush in where philosophers
    such as Heidegger fear to tread, and simply make explicit
    the totality of human practices which pervade our lives
    as water encompasses the life of a fish.

    ReplyDelete
  35. Jaron Lanier is a contemporary computer scientist who seems
    to have a clue, unlike many of his colleagues.

    Unfortunately, all these people, Lanier included, are stonily
    ignored by the transhumanists. They're just party-poopers,
    dontcha know.

    --------------------------------

    "The first fifty years of general computation, which
    roughly spanned the second half of the twentieth
    century, were characterized by extravagant swings
    between giddy overstatement and embarrassing near-
    paralysis. The practice of overstatement was
    initiated by the founders of computer science:
    Alan Turing wondered whether machines, particularly
    his abstract 'universal machines,' might
    eventually become the moral equivalents of people;
    in a similar vein, Claude Shannon defined the
    term 'information' as having ultimate breadth,
    spanning all thermodynamic processes.

    One could just as well claim that since all
    life is made of chemical interactions, any
    chemical apparatus can be understood as a
    nascent version of a person. The reason this
    claim isn't made is that the difference in
    complexity between the chemistry of living things
    and what can be studied in contemporary
    chemistry laboratories is apparent. We have
    an intuition about the distinction. In contrast,
    we do not have a clear intuition about the
    differences in complexity between the various
    kinds of information systems. A serious and
    intelligent community of researchers who
    describe themselves as studying 'artificial
    intelligence' believed, in some cases as
    early as the late 1950s, that computers would
    soon become fluent natural-language speakers.
    This hasn't happened yet, of course, and we
    still don't have an intuition of how large
    a problem it is to understand natural languages,
    or how long it might take to solve.

    The practice of overstatement continues, and it
    is even common to find members of elite computer
    science departments who believe in an inevitable
    'singularity,' which is expected sometime in
    the next half century. This singularity would
    occur when computers become so wise and
    powerful that they not only displace humans
    as the dominant form of life but also attain
    mastery over matter and energy so as to live
    in what might be described as a mythic or
    godlike way, completely beyond human conception.
    While it feels odd even to type the previous
    sentence, it is an accurate description of
    the beliefs of many of my colleagues. . .

    Because we've had no intuition of the relative
    scales of information structures, we've had a
    hard time comparing our computational
    accomplishments to what nature has accomplished.
    Both the technical and popular press are
    awash with claims that human computational
    prowess is about to catch up with natural
    complexity. Examples include the repeated
    claims that computers are about to finally
    understand human emotions or language, or
    that computers are about to allow us to bridge
    the gap between complex organisms and
    the simple sequences of DNA we have learned
    merely to catalog.

    One way to frame the nature of our ignorance
    in this matter is to ask whether natural evolution
    was a bumbling, slow, inefficient process or
    the result of a naturally self-assembling
    parallel supercomputer (perhaps even operating
    on a quantum level in some cases) that
    self-optimized to bring about an irreducibly
    complex result in roughly the shortest possible
    time. These two alternatives are the outer
    bounds of what could be true. The truth,
    which we don't know, is somewhere in between.
    My bias is toward the latter bound: Evolution
    was probably pretty efficient at performing
    an irreducibly complex task. It seems, however,
    that the other extreme -- that all it will
    take is another thirty to fifty years of Moore's
    Law magic and our computers will outrun
    nature -- is accepted in most contemporary
    dialog about the future of science and
    technology. . .

    A new computer and information science would
    incorporate a theory of legacy. . .
    We must learn to give up the illusion that we
    can overcome legacies. This is the illusion
    in play when otherwise well-informed technologists
    propose radical additions to the human
    metabolism or brain structure (and yes, there
    are many such proposals).

    One idea worthy of investigation is whether
    'legacy' is the same thing as 'semantics.'
    'Semantics' is a word that has been used to
    describe whatever mysterious thing lies beyond
    the syntax barrier characteristic of protocol-
    based systems: For instance, natural-language
    systems are always said to be progressing
    but lacking in their understanding of semantics.
    A legacy creates an immutable context in an
    information system. Legacies are complex.
    Legacies, in reducing the configuration space
    of a system, act like lenses that enhance
    the causal potential of bits. . .

    [A] marriage ceremony is a legacy, a pattern
    with a history that it is expensive to undo.
    Similarly, DNA takes on meaning only in
    the context of an embryo; an isolated strand
    of DNA would almost certainly not be informative
    enough for. . . clever aliens. . . to
    re-create a creature. . .

    In fifty years, if we're lucky, we might be
    able not just to describe how DNA works and what
    DNA is present (as we are beginning to now)
    but to have a way of describing the intermediate
    levels of complexity within which changes to
    DNA are constrained. In effect, we might
    learn to see the world to some degree from
    evolution's point of view, instead of from
    a molecule's or an organism's point of view."

    -- Jaron Lanier, "The Complexity Ceiling",
    in _The Next Fifty Years: Science in the First Half
    of the 21st Century_

    ReplyDelete
  36. It took me a while to come around to the arguments of Dreyfus (and Dreyfus!) after working in traditional AI theory for many years, but I began to "get" their objections around the same time I "got" Merleau-Ponty. I'm a fan of theirs now, and find it sort of funny how much time they spent on Minsky considering he's now pretty much a (great) historical figure with nothing *new* to contribute (in spite of his recent publications). I can't remember the last time I had to actually argue against Minsky's work (although I do still remember fondly an email exchange I had with him circa 1998 or 1999 - he was still quite a hero of mine back then!) And to bring this full circle to transhumanism - Minsky and I discussed the wonderful fiction (maybe I should capitalize that) FICTION of Greg Egan.

    ReplyDelete
  37. Robin wrote:

    > And to bring this full circle to transhumanism - Minsky
    > and I discussed the wonderful fiction (maybe I should
    > capitalize that) FICTION of Greg Egan.

    Greg Egan is FABULOUS. Fabulous, I say.

    Most of the first-rank SF authors are pretty clued-in and
    sophisticated folks. Egan. William Gibson. Bruce Sterling.
    David Brin. Even Vernor Vinge.

    It's interesting that most of them (certainly all the
    above, even arguably Vinge) steer plenty clear of what passes
    for transhumanism or Singularitarianism these days. Except,
    of course, to take the occasional pot-shot.

    Sterling's talk "The Singularity: Your Future as a Black Hole"
    from the summer of 2004 was pretty funny.
    http://bruce-sterling-the-singularity-your-futu-mp3-download-kohit.net/_/71151

    Vinge's latest, _Rainbows End_ (notice the absent apostrophe)
    was interpreted, rightly I think, as a bit of a slap at the
    Singularitarians (although I suppose we don't know how that
    "story arc" is ultimately going to end).

    One of Egan's later novels (which I never actually got around
    to reading) contained what was interpreted as a rebuke of the
    Singularitarians. It was quoted with some dismay on the
    Extropians' list back in 2002.

    `What do you think you're going to find in there [a new region of altered
    spacetime]? Some great shining light of transcendence?'

    `Hardly.' _Transcendence_ was a content-free word left over from religion,
    but in some moribund planetary cultures it had come to refer to a mythical
    process of mental restructuring that would result in vastly greater
    intelligence and a boundless cornucopia of hazy superpowers--if only the
    details could be perfected, preferably by someone else. It was probably an
    appealing notion if you were so lazy that you'd never actually learnt
    anything about the universe you inhabited, and couldn't quite conceive of
    putting in the effort to do so; this magical cargo of transmogrification
    was sure to come along eventually and render the need superfluous.

    Tchicaya said, `I already possess general intelligence, thanks. I don't
    need anything more.' It was a rigorous result in information theory that
    once you learn in a sufficiently flexible manner--something humanity had
    achieved in the Bronze Age--the only limits you faced were speed and
    storage; all other structural changes were just a matter of style.

    -- Egan, _Schild's Ladder_, p.55 (UK edition)

    ReplyDelete
  38. Funny topical turn -- I happen to love Egan's and Vinge's fiction also (and have done for nearly twenty years by now). Some amazing stuff. Fiction.

    ReplyDelete
  39. Jim, could you *please* quit associating autism with narcissism? You may have read a lot of psychological literature, but you don't seem to have spent much time engaging with actual autistic people (and no, I don't mean "people with geeky personality types" that to you match up with some stereotype of the autistic spectrum you have in mind). Having atypical emotional and reciprocal responses and an atypical style of thinking/perceiving does not mean that a person is stuck in a "static warp bubble".

    And lest you think I'm just talking about myself here, I have spent time around other folks on the spectrum, and mostly I've just been shocked at the degree to which others (not autistics!) assume the autistics to be "locked in [our] own worlds".

    That said, some people are just jerks, and there's no reason that someone on the autistic spectrum can't be a jerk. Jerkiness is pretty equal-opportunity. But don't mistake the jerkiness of a few people for some kind of scientific fact about an entire demographic.

    And I probably don't need to say this, but hopefully you don't see this as an "attack" on you or on your posting style or anything like that -- I've mentioned on several occasions that the quoting doesn't bother me, and I can generally see its relevance. I just get irritated when I see "autistic" so frequently alongside "narcissistic" in your comments.

    ReplyDelete
  40. Anne's right -- there should be much more care around this. I mean, if we have to throw anecdotal evidence around, not a single person I know who is autistic-identified is a jerk at all, while many who are not are jerks! But I honestly don't think the force of Jim's point about narcissism and True Belief and the authoritarian dynamic and certain tendencies to reductionism among techno-utopians really requires the problematic association anyway, I think it's as much an outdated habit as anything else. You know, I have this odd sense that the term "Autism" has come to be deployed much more carelessly in recent years in public discourse while at the very same time it has acquired especially moralistic freighting... never an especially good discursive combination. It's good people are grappling with this and getting smarter and more sensitive in their thinking on this.

    ReplyDelete
  41. Dale: regarding "autism" being deployed far too much in recent years: I wholeheartedly agree. I mean, we have people claiming that everything from "food allergies and yeast overgrowth" to "self-centered economic systems" are somehow autistic, which to me just seems like linguistic laziness (coupled with a bit of sociopolitical opportunism and a dash of plain old ignorance).

    It's really ridiculous, and while I am all for making sure that people who are actually autistic are recognized as such, nurtured with regard to their particular strengths, and accommodated as needed, I do think some lines ought to be drawn. There's way too much mouse research going on right now attempting to claim that mice who either engage in "excessive grooming" or who don't socialize much with other mice are somehow autistic, which I think is a very misguided approach.

    IMO, autism is best understood as a how rather than a what -- that is, not as a behavior or set of behaviors (behaviorism be damned!), but as a way of perceiving and processing information. There's some neat research going on in that direction now, which makes me very happy. In particular I think folks like Morton Ann Gernsbacher are on the right track.

    But I'll stop derailing the topic here for now. Just wanted to put in my figurative 2 cents on this one since it's been bugging me for a while.

    ReplyDelete
  42. Of the derailments presently in play in this Moot, I must say I like yours best, Anne. I feel I have plenty to learn from it.

    ReplyDelete
  43. Anne Corwin wrote:

    > Jim, could you *please* quit associating autism with narcissism?

    I don't, particularly, in my own thoughts.

    The reason the association exists in my observations about
    transhumanism is that the "movement" seems to be an attractor,
    more-or-less independently, for two kinds of folks:

    1. The kinds of folks attracted to math, computer programming,
    and science fiction. Also to elaborate rule-bound systems like
    role-playing games, contract law, and a certain idea of what
    artificial intelligence might be like. More of these people
    than in a random sample are likely to be on the autistic spectrum
    (according to Simon Baron-Cohen et al.).

    2. Folks who consider themselves "geniuses", who feel held back
    by the ignorance of the masses, who don't like having to pay
    taxes to support their fellow human beings, and who feel
    chafed by "limits" in general. Many of **these** folks
    exhibit the symptoms of what the DSM calls "Narcissistic
    Personality Disorder".

    That being said, I gather from my Web-surfing that the differential
    diagnosis among Attention-Deficit Hyperactivity Disorder
    (ADHD), Bipolar (manic-depressive) disorder (in certain
    phases), Asperger's and other autistic-spectrum disorders,
    and Narcissistic Personality Disorder, can be tricky even
    for trained psychiatrists. Apparently they all have that
    characteristic obliviousness to social context and social
    feedback. See, e.g., "Misdiagnosing Narcissism - Asperger's Disorder"
    by Sam Vaknin ( http://samvak.tripod.com/journal72.html ).

    ReplyDelete
  44. Anne Corwin also wrote:

    > [Y]ou don't seem to have spent much time engaging with
    > actual autistic people. . .

    On the contrary. "Some of my best friends. . .", etc.

    I'm probably at least borderline Asperger's myself, at
    any rate according to Simon Baron-Cohen's "Autism-Spectrum
    Quotient" test
    ( http://www.wired.com/wired/archive/9.12/aqtest.html ).
    The diagnosis didn't exist when I was a kid, but some of
    the problems I had in school would no doubt be laid, today,
    at the feet of Asperger's.

    It is true that I don't know anybody who is **profoundly**
    autistic.

    ReplyDelete
  45. A personal anecdote:

    I have dinner, practically every week, with a married couple
    with two boys (a year apart, they're in 5th and 6th grade).
    (I'm not going to mention anybody by name here, of course,
    but I still hope they don't read any of this! I **think**
    I'm safe. ;-> ).

    Both the husband and wife are trained mathematicians. The
    wife, though she has a PhD in math, basically gave up an
    academic career to raise the kids, but she still has
    a side job tutoring kids in math. The husband (a Russian
    Jewish immigrant to the US, via Australia) once had aspirations
    to become a world-class mathematician; when he decided,
    for whatever reasons, that he didn't have a shot at the
    first tier of mathematical brains, he quit short of his PhD,
    and he is now a computer security honcho at a major New York
    brokerage firm.

    The husband is clearly spectrum -- a fact which nearly destroyed
    their marriage. It was the **wife** who had to do all the
    psychological research, figure out what was going on, and figure
    out how to accommodate it. She has to do things like,
    during a social event, signal her husband by pointing at her
    eyes, to get him to make eye contact with his interlocutor.

    The husband can be sweet, but he can also be **extremely**
    abrasive (see, I hope nobody I know is reading this!). He's
    very smart, and very knowledgeable in many fields (not only
    math and physics, but economics, and history, and linguistics --
    he speaks fluent Russian, and knows a lot of Hebrew, too),
    but he tends to **lecture** people rather than conversing with
    them. It's very difficult, sometimes, to get a word in
    edgeways, as they say.

    There's another fellow, a friend of the husband's from work,
    who often shows up at dinner. He has two overriding topics
    of conversation. Either he and his host go on and on about
    computer issues at work (or about NetBSD, or about encrypted
    file systems, or about Kerberos, or about Postfix, or what
    have you), or this guest goes on and on about his libertarian
    political views -- the awfulness of paying taxes, etc.
    Everybody else at the table just grins and bears it. (Or not --
    one woman, a friend of the **wife's**, just decided she
    couldn't take it anymore and stopped coming. I rather missed
    her.)

    Of the two boys, the younger one is clearly "normal" (and though very
    smart, suffers a bit from less-than-ideally articulate
    speech), and the older boy is his father's son (spectrum, very
    smart, hard to engage in a balanced conversation, but with
    beautifully articulated speech as clear as a bell -- he sounds like
    Mr. Spock [just as I did, at that age]).

    The older boy (both kids have gone to private schools for their
    entire academic careers) has been worrisome to his parents
    because of his tendency to withdraw and "tune out" the
    world -- both the adult world and his peers. He has always
    been a natural bully magnet, and his parents assumed, from
    the beginning, that he would not be able to function in
    a public-school environment. He does very well in his present
    academic circumstances, but a few weeks ago the parents mentioned
    at table that some of the girls in S.'s 6th grade class had
    come up with a "joke" with a slightly nasty edge. They
    accost S. with the Vulcan salute (you know -- the Live Long
    and Prosper spread-finger thing that Leonard Nimoy is said to
    have invented) and ask him if he knows what it means.
    Apparently this is a source of entertainment. I suggested that
    S. give these girls Valentine's cards with an IDIC -- the Vulcan
    symbol of "Infinite Diversity in Infinite Combinations" --
    depicted on the front. The adults thought this was a great idea,
    but conceded that the girls in question probably wouldn't "get"
    the deeper implications of the response.

    Anyway -- yeah, I have personal experience with all this stuff.

    ReplyDelete
  46. Dale wrote:

    > Anne's right -- there should be much more care around this.

    Well, I'm perfectly content to agree never to mention the "A"
    word (**either** "A" word) on this blog again.

    In any case, I'm far more interested in the "N" word. ;->

    ReplyDelete
  47. Hm, I just noticed:

    > Having atypical emotional and reciprocal responses and an
    > atypical style of thinking/perceiving does not mean that
    > a person is stuck in a "static warp bubble".

    That was careless of me. I meant the "warp bubble" comment
    to apply to NPD, not necessarily to spectrum.

    "Warp bubble" is an allusion to a _Star Trek: The Next Generation_
    episode -- "Remember Me". ;-> Something similar must have been
    meant when Steve Jobs was said by his subordinates to generate
    a "reality distortion field."

    The technical term is "alloplastic defenses" -- "Of course
    there's nothing wrong with **me** -- if anything's wrong,
    it must be because **you**, or **they**, or somebody
    else in this sick sad world full of incompetent mediocrities,
    screwed up." Self-confidence, the certainty of one's own
    superiority, can affect the social world much like a black
    hole affects nearby matter. It reorients all frames of
    reference to itself, and tends to suck everything else in.

    This can have an effect similar to that of "gaslighting" -- a
    term you seem to be familiar with, because you used it once
    about me. ;->

    Joanna Ashmun discusses this warping of reality on her site
    about NPD ( http://www.halcyon.com/jmashmun/npd/traits.html ).
    (This Web site is really worth reading, BTW -- it's relatively
    short, to the point, and less verbose and technical than
    Sam Vaknin's stuff.)

    "The most telling thing that narcissists do is contradict themselves.
    They will do this virtually in the same sentence, without even
    stopping to take a breath. It can be trivial (e.g., about what they
    want for lunch) or it can be serious (e.g., about whether or not
    they love you). When you ask them which one they mean, they'll deny
    ever saying the first one, though it may literally have been only
    seconds since they said it -- really, how could you think they'd ever
    have said that? You need to have your head examined! They will
    contradict FACTS. They will lie to you about things that you did
    together. They will misquote you to yourself. If you disagree with them,
    they'll say you're lying, making stuff up, or are crazy. [At this point,
    if you're like me, you sort of panic and want to talk to anyone who
    will listen about what is going on: this is a healthy reaction;
    it's a reality check ("who's the crazy one here?"); that you're
    confused by the narcissist's contrariness, that you turn to another
    person to help you keep your bearings, that you know something is
    seriously wrong and worry that it might be you are all signs that
    you are not a narcissist]. NOTE: Normal people can behave irrationally
    under emotional stress -- be confused, deny things they know, get
    sort of paranoid, want to be babied when they're in pain. But normal
    people recover pretty much within an hour or two or a day or two,
    and, with normal people, your expressions of love and concern for
    their welfare will be taken to heart. They will be stabilized by
    your emotional and moral support. Not so with narcissists -- the
    surest way I know of to get a crushing blow to your heart is to tell
    a narcissist you love her or him. They will respond with a nasty
    power move, such as telling you to do things entirely their way or
    else be banished from them for ever."

    ReplyDelete