Wednesday, October 24, 2007

Richard Jones Critiques Superlativity

Over on the blog Soft Machines yesterday, Richard Jones -- a professor of physics, science writer, and currently Senior Advisor for Nanotechnology for the UK's Engineering and Physical Sciences Research Council -- offered up an excellent (and far more readable than I tend to manage to be) critique of Superlative Technology Discourses, in a nicely portentously titled post, “We will have the power of the gods”. Follow the link to read the whole piece; here are some choice bits:
"Superlative technology discourse… starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress combating the diseases of aging. The discussion leaps ahead of the issues that such technologies might give rise to at the present and in the near future, and goes straight on to a discussion of the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening….

[T]his renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups -- such as transhumanists and singularitarians -- with a strong, pre-existing attachment to a particular desired outcome -- an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking….

The difficulty that this situation leaves us in is made clear in [an] article by Alfred Nordmann -- “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor…

Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science -- areas directly relevant to the technologies under discussion -- in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.

More like this, please.

10 comments:

  1. Richard Jones wrote (in
    http://www.softmachines.org/wordpress/?p=354 ),
    quoting Alfred Nordmann (in
    http://www.uni-bielefeld.de/(en)/ZIF/FG/2006Application/PDF/Nordmann_essay.pdf ):

    > ". . .[T]he boundaries between science and science fiction
    > are blurred,. . . and the scientific community itself at a
    > loss to assert standards of credibility.” . . .
    > [T]he more extreme the vision, the easier it is to sell to a
    > TV commissioning editor. And, as Nordmann says:
    > “The views of nay-sayers are not particularly interesting and
    > members of a silent majority don’t have an incentive to
    > invest time and energy just to 'set the record straight.'
    > The experts in the limelight of public presentations or
    > media coverage tend to be enthusiasts of some kind or another
    > and there are few tools to distinguish between credible and
    > incredible claims especially when these are mixed up in
    > haphazard ways.”

    This succinctly elucidates a point I was attempting (not very
    successfully, I'm afraid) to make in an exchange with "Utilitarian"
    in the comments of
    http://amormundi.blogspot.com/2007/10/superlative-summary.html

    -------------------------------
    > ["Utilitarian" wrote:]
    >
    > In my view, while Kurzweil, Bostrom, Yudkowsky, et al are very
    > intelligent people, the key area for 'Singularitarian' activism
    > now is getting people who are still smarter than them to examine
    > these problems carefully.

    You might as well be calling for the "people who are still smarter"
    than Tom Cruise to be "carefully examining" the Scientologists'
    case against psychiatry. You'll recall that Ayn Rand was piqued
    that the mainstream philosophical community never deigned to
    take her ideas seriously enough to discuss them. I suspect
    that the really smart people simply have better things to do.
    -------------------------------

    "Utilitarian" wrote:

    "You might as well be calling for the "people who are still smarter"
    than Tom Cruise to be "carefully examining" the Scientologists'
    case against psychiatry."
    Unlike Cruise, Kurzweil has demonstrated both a high level of
    intelligence and a strong grasp of technology. While his predictions
    have included systematic errors on the speed of consumer adoption
    of technologies, he has done quite well in predicting a variety of
    technological developments (including, according to Bill Gates),
    not to mention inventing many innovative technologies. Bostrom has
    published numerous articles in excellent mainstream journals and
    venues, from Nature to Ethics to Oxford University Press.
    Yudkowsky is not conventionally credentialed, but was a prodigy
    and clearly has very high fluid intelligence.

    The charge against these people has to be bias rather than lack of ability.

    . . .

    "I suspect that the really smart people simply have better things to do."
    Yes, e.g. string theory, winning a Fields medal, becoming a billionaire.
    These are better things to do for them personally, but not necessarily
    for society.
    -------------------------------

    The problem is similar to the "coverage" of Creationism in the popular
    media. If you scan through the FM dial on your car radio, chances
    are excellent you'll come across a Creationist lecturer "demolishing"
    the "pretenses" of Darwinism, telling you that "everybody" in the
    scientific community knows that evolutionary theory doesn't hold
    water (maybe even invoking the eighth-grade schema of the scientific
    method that's been used here recently to "debunk" the categories
    of psychiatric diagnosis, by pointing out that there's no possibility
    of experimental confirmation of an evolutionary explanation for
    the existence of life on earth).

    The "silent majority" of professional biologists don't have an incentive
    to invest the time and energy just to "set the record straight."
    In fact, they might even be putting their careers as well as their
    leisure time at risk by doing so.

  2. Anonymous wrote (12:55 PM):

    James,

    Your point was clear enough, but old news and not dispositive. Varied incentives of funding, status, career, etc. might not motivate people to expend the energy to think about and debunk a worthless area, or conversely to contribute to an important one. When I have already adjusted my understanding of scientific opinion to account for that silence, you can't make the same evidence count double by repeating a known possible explanation for it. That's why I described my interest in acquiring new evidence.

  3. "Utilitarian" wrote:

    > Varied incentives. . . might not motivate people to expend
    > the energy to. . . contribute to an important [area]. . .
    >
    > I have already adjusted my understanding of scientific opinion
    > for [the] silence [in mainstream scientific circles surrounding,
    > presumably, MNT and/or AGI].

    And come to a conclusion the opposite of mine, it would seem. Well, your conclusion
    **is** the one popular among folks who contribute to on-line discussions
    of these things. There are very, very few contributions from
    people who (1) bother to think about these things at all and
    (2) are not, or have ceased to be, "enthusiasts of some kind or another".

    > Your point was. . . not dispositive.

    So few are, in discussions of this kind. ;->

    Well, YMMV, as they say.

    Or, as Sir Thomas More says in _A Man For All Seasons_,
    "The world must construe according to its wits."

    And, as Elrond says to Aragorn, "The years will bring
    what they will."

  4. Anonymous wrote (4:17 PM):

    > I have already adjusted my understanding of scientific opinion
    > for [the] silence [in mainstream scientific circles surrounding,
    > presumably, MNT and/or AGI].

    "And come to a conclusion the opposite of mine, it would seem."
    We'd have to break down various issues. For the feasibility of pursuing nanotechnology research along more Drexler/CRN lines, I take the mix of silence and a smattering of criticism as a fairly strong negative signal about the usefulness of Drexlerian ideas as a design path, although the funding shenanigans related to the NNI probably had some role. (It's also hard to avoid drawing the parallel between Smalley's conclusion that complex molecular machines were beyond human design ability and his contemporaneous adoption of Christian Intelligent Design Creationism, concluding that the molecular machines of living organisms were too complex for abiogenesis.)

    For AI feasibility, what I can glean of the view within the field indicates that near-term development is very unlikely, but hardware improvements, accumulating software techniques, the allocation of more human capital to the technology industry, improving neuroscience, the likelihood of biological intelligence enhancement, and increasing economic incentives for marginal AI improvements within fields such as finance, biometrics, and robotics make it seem like we should assign higher probabilities over time.

    At the AI@50 conference, 41% of attendees indicated that AI would never fully simulate human intelligence, 41% that it would but not for at least 50 years, and 18% that it would in less than 50 years. Many of those saying that AI will never be able to simulate every function probably have consciousness in mind, which is of little interest for my purposes. Nevertheless, data like this push me in the direction of a probability distribution for AI development weighted heavily towards the further future. I don't outright adopt the central tendency of this opinion distribution, however, so as to take into account other factors, like wild card biotech enhancements to intelligence (which I think are generally not considered at all by scientists estimating progress in their fields for the 21st century).
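
    To make that kind of reweighting concrete, here is a minimal sketch in Python. Only the 41/41/18 split comes from the AI@50 survey; the 10% "wild card" shift is a number invented purely for illustration, not an estimate anyone in this thread has actually made.

    # Toy sketch: start from the AI@50 survey shares and nudge the
    # distribution for a hypothetical "wild card" factor. Every number
    # except the 41/41/18 split is made up for illustration.
    survey = {"never": 0.41, "beyond_50_years": 0.41, "within_50_years": 0.18}

    # Hypothetical: biotech intelligence enhancement shifts 10% of the
    # "never" mass toward the two "eventually" buckets, split evenly.
    shift = 0.10 * survey["never"]
    adjusted = {
        "never": survey["never"] - shift,
        "beyond_50_years": survey["beyond_50_years"] + shift / 2,
        "within_50_years": survey["within_50_years"] + shift / 2,
    }

    assert abs(sum(adjusted.values()) - 1.0) < 1e-9  # still a distribution
    for outcome, p in adjusted.items():
        print(f"{outcome}: {p:.3f}")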

  5. "Utilitarian" wrote:

    > It's also hard to avoid drawing the parallel between
    > Smalley's conclusion that complex molecular machines were
    > beyond human design ability and his contemporaneous adoption
    > of Christian Intelligent Design Creationism, concluding that
    > the molecular machines of living organisms were too complex
    > for abiogenesis.

    Oh dear. I didn't know about **that**.

    It is true that there's been frustratingly little progress
    in clarifying the pathways of abiogenesis since the Miller-Urey
    experiment, which was pretty much all the school biology textbooks
    had to say about it back in my day.

    But I think retreating into Intelligent Design was a bit of an
    overreaction on Smalley's part.

    > For AI feasibility, what I can glean of the view within the
    > field indicates that near-term development is very unlikely,
    > but. . . we should assign higher probabilities over time.

    OK, sure. Artificial-**anything** feasibility depends on what
    kinds of artifacts we'll be capable of making. For intelligence
    (understood in some kind of biological-analogical sense; I don't
    really know what the word means otherwise)
    we'll need a physical substrate that fulfills the kinds of
    morphological and functional constraints that neuroscience
    is beginning to suggest. Whether that physical substrate will
    function anything like contemporary digital computers do is
    an open question at this point.

    > I. . . take into account other factors,
    > like wild card biotech enhancements to intelligence (which I
    > think are generally not considered at all by scientists estimating
    > progress in their fields for the 21st century).

    Oh, yeah, sure, there'll be wildcards.

    All this is uncontroversial, in my view.

    It's the other baggage (not all, or even primarily, content-related)
    of the >Hist community that I find more disturbing.

  6. Anonymous wrote (8:51 PM):

    "It's the other -- baggage (not all or even primarily content-related)
    of the >Hist community that I find more disturbing."
    Could you allocate your >Hist distaste among the following in relation to AI?

    1. GOFAI/AI based on more formal algorithms. (I'm not as convinced as you appear to be that this area, defined broadly, won't produce results. I would say that improvements in pattern recognition and statistical algorithms (in search, translation, biometrics) have been quite significant, even though the past failures of GOFAI should substantially lower our estimates of its success.)
    2. Grandiose claims of personal programming or problem-solving ability. (These are to be discounted.)
    3. Cultish psychological/sociological characteristics. (We've discussed this.)
    4. Claims of strong ethical implications flowing from limited influence over AI development. (This, less so.)
    5. Factors X, Y, Z...

  7. Huh, that's the BBC thing they interviewed me for back in May -- I'm still not sure why they wanted to talk to me of all people, but I sort of saw the whole project as akin to those "gee whiz, what if this happened?" speculative science shows I loved to watch as a youngster.

    Those shows captured my imagination. They did not turn me into a True Believer(TM) or convince me of the inevitability of any outcome(s) in particular. In fact, I find that one of the main values of such media is found in the realm of cultural anthropology -- it is always enlightening and entertaining to look back on all the neat (or frightening) stuff that never actually ended up happening according to the speculations presented.

    This is not to say that superlative critique is not needed -- of course it is, and people do need to be educated as to how they can avoid being seduced by wishful thinking and the "I don't need to think for myself anymore!" laziness that can come about as a result of discovering persons they perceive as Superlatively Smart. I guess I just see this kind of media piece (the BBC thing) as a "future cultural artifact" more so than anything else. I liked having the opportunity to say some words about longevity research and about how morphological freedom should result in a proliferation (rather than a contraction) of diversity, but mainly I saw it as a sort of "fun" thing. But of course it's plenty OK to have fun with something and offer needed critiques of it at the same time.

  8. "Utilitarian" wrote:

    > Could you allocate your >Hist distaste among the following
    > in relation to AI?

    All right, I'll make a stab at this. All these points are,
    however, as Dale would say, "inter-implicated", as I've
    come to realize.

    > 1. GOFAI/AI based on more formal algorithms.

    Call it 10%.

    > I'm not as convinced as you appear to be that this area,
    > defined broadly, won't produce results. I would say that
    > improvements in pattern recognition and statistical algorithms
    > (in search, translation, biometrics) have been quite significant. . .

    So that maybe a build-up of the tools of "weak AI" will
    coalesce into a capability for "strong AI". I'm not sanguine.

    > . . .even though the past failures of GOFAI should substantially
    > lower our estimates of its success.)

    Indeed.

    There is another view of the whole question of intelligence which is,
    rather oddly, simply not bruited about in >Hist circles.
    There are plausible reasons for this. One is that it goes against
    both the philosophical (Aristotelian, or crude Ayn Randian)
    and political (what George Lakoff calls politics based on
    "strict-father" morality) prejudices of the >Hist community.
    Another is that it goes against the personal prejudices of some
    of the most vocal of the >Hists (e.g., in that it simply wouldn't
    do if we're going to **guarantee** "Friendliness").

    I'm thinking of intelligence as a "selectional" rather than an
    "instructional" process.

    As evolutionary epistemologist Henry Plotkin puts it:

    "[W]hy should the brain be seen as a Darwinian
    kind of machine rather than as a Lamarckian
    machine?... Forced to take sides,... there
    are two... reasons for choosing the selectionist
    camp. One is the problem of creativity...
    Intelligence... involves... the production of
    novel solutions to the problems posed by
    change -- solutions that are not directly
    given in the experienced world... Such
    creativity cannot occur if change is slavishly
    tracked by instructionalist devices. So
    what we see here is that while selection
    can mimic instruction, the reverse is never
    true... Instructional intelligence comprises
    only what has been actually experienced...
    Indeed, according to D. T. Campbell, the father
    of modern evolutionary epistemology, selectional
    processes are required for the acquisition of
    any truly new knowledge about the world:
    'In going beyond what is already known, one
    cannot but go blindly. If one goes wisely,
    this indicates already achieved wisdom of
    some general sort.' Instruction is never
    blind. Selection always has an element...
    of blindness in it. At the heart of all
    creative intelligence is a selectional
    process, no matter how many instructional
    processes are built on top of it.

    The [other] reason for choosing selection
    over instruction is one of parsimony and
    simplicity. If the primary heuristic
    [i.e., phylogenetic evolution]
    works by selectional processes, which it
    most certainly does,... and if that other
    embodiment of the secondary heuristic
    that deals with our uncertain chemical
    futures, namely the immune system, works
    by selectional processes, which is now
    universally agreed, then why should one be
    so perverse as to back a different horse
    when it comes to intelligence?

    A nested hierarchy of selectional processes is
    a simple and elegant conception of the nature
    of knowledge. There will have to be good
    empirical reasons for abandoning it."

    -- _Darwin Machines and the Nature of Knowledge_,
    Chapter 5, "The Evolution of Intelligence",
    p. 171

    Or Gerald M. Edelman:

    "Clearly, if the brain evolved in such a fashion, and
    this evolution provided the biological basis for the eventual
    discovery and refinement of logical systems in human cultures,
    then we may conclude that, in the generative sense, selection is
    more powerful than logic. It is selection -- natural and somatic
    -- that gave rise to language and to metaphor, and it is
    selection, not logic, that underlies pattern recognition and
    thinking in metaphorical terms. Thought is thus ultimately based
    on our bodily interactions and structure, and its powers are
    therefore limited in some degree. Our capacity for pattern
    recognition may nevertheless exceed the power to prove
    propositions by logical means... This realization does not, of
    course, imply that selection can take the place of logic, nor
    does it deny the enormous power of logical operations. In the
    realm of either organisms or of the synthetic artifacts that we
    may someday build, we conjecture that there are only two
    fundamental kinds -- Turing machines and selectional systems.
    Inasmuch as the latter preceded the emergence of the former in
    evolution, we conclude that selection is biologically the more
    fundamental process. In any case, the interesting conjecture is
    that there appear to be only two deeply fundamental ways of
    patterning thought: selectionism and logic. It would be a
    momentous occasion in the history of philosophy if a third way
    were found or demonstrated."

    -- _A Universe of Consciousness_, p. 214

    Or Jean-Pierre Changeux:

    "If the hypotheses put forward [in this book] are correct,
    the formation of. . . representations, although using
    different elements and different levels of organization, obeys
    a common rule, inspired by Darwin's original hypothesis. A
    process of selective stabilization takes over from diversification
    by variation. The mechanisms associated with evolution of the
    genome[,]... [c]hromosomal reorganization, duplication of genes,
    recombinations and mutations, all create genetic diversity, but
    only a few of the multiple combinations that appear in each
    generation are maintained in natural populations. During
    postnatal epigenesis, the "transient redundancy" of cells
    and connections and the way in which they grow produce a
    diversity not restricted to one dimension like the genome,
    but existing in the three dimensions of space. Here again,
    only a few of the geometric configurations that appear during
    development are stabilized in the adult... Does such a
    model apply for the more "creative" aspects of our thought
    processes? Is it also valid for the acquisition of knowledge?

    ...

    It is... worth noting that in the history of ideas "directive"
    hypotheses have most often preceded selective hypotheses.
    When Jean-Baptiste de Lamarck tried to found his theory of
    "descendance" on a plausible biological mechanism, he proposed
    the "heredity of acquired characteristics", a tenet that
    advances in genetics would eventually destroy. One had to
    wait almost half a century before the idea of selection was
    proposed by Charles Darwin and Alfred Wallace and validated
    in principle, if not in all the details of its application.
    In the same way the first theories about the production of
    antibodies were originally based on directive models before
    selective mechanisms replaced them. It could conceivably be
    the same for theories of learning."

    -- _Neuronal Man_, Chapter 9, "The Brain -- Representation of the
    World"

    Not **all** >Hists are unsympathetic to these ideas. I've
    mentioned Eugen Leitl. Another example is John Smart:

    http://www.accelerationwatch.com/specu.html
    "Emergent AI: Stable, Moral, and Interdependent vs.
    Unpredictable, Post-Moral, or Isolationist? . . .

    Are complex systems naturally convergent,
    self-stabilizing and symbiotic as a function of
    their computational depth? Is the self-organizing
    emergence of 'friendliness' or 'robustness to
    catastrophe' as inevitable as 'intelligence,'
    when considered on a universal scale?"

    (Smart clearly thinks the answer is "yes").
    He goes on to comment:

    "I tend to disagree with many assumptions of Yudkowsky['s
    'Friendly AI',] but his is a good example of top-down models which
    express a 'conditional confidence' in future friendliness.
    I share his conclusion but without invoking a 'consciousness
    centralizing' world view, which assumes that human-imposed
    conditions will continue to play a central role in the
    self-balancing, integrative, and information-protecting
    processes that are emerging within complex adaptive
    technological systems. While it is true that consciousness
    and human rationality play central roles in the self-organizing
    of the collective human complex adaptive system
    (human civilization, species consciousness), and that
    these processes often control the perceptions and models
    we build of the universe (i.e., the quality of our individual
    and collective simulations) such systems do not appear
    to control the evolutionary development of the universe
    itself, and are thus peripheral to the self-organization
    of all other substrates, be they molecular, genetic,
    neural, or most importantly in this case, technologic.

    It is deceptively easy to assume that because humans
    are catalysts in the production of technology to increase
    our local understanding of the universe, that we ultimately
    'control' that technology, and that it develops at a
    rate and in a manner dependent on our conscious understanding
    of it. Such may approximate the actual case in the initial
    stages, but all complex adaptive systems rapidly develop
    local centers of control, and technology is proving to be
    millions of times better at such 'environmental learning'
    than the biology that it is co-evolving with. It can be
    demonstrated that all evolutionary developmental substrates
    take care of these issues on their own, from within.
    Technological evolutionary development is rapidly engaged
    in the process of encoding, learning, and self-organizing
    environmental simulations in its own contingent fashion,
    and with a degree of M[atter]E[nergy]S[pace]T[ime -- a most
    unfortunate Scientological choice of terminology]
    compression at least ten million times faster than human
    memetic evolutionary development. Thus humans are both
    partially-cognizant spectators and willing catalysts in
    this process. This appears to be the hidden story of
    emergent A.I."

    > ["Utilitarian" continued:]
    >
    > 2. Grandiose claims of personal programming or problem-solving
    > ability. (These are to be discounted.)
    >
    > 3. Cultish psychological/sociological characteristics. (We've
    > discussed this.)

    These are inseparable for me, and together I'd count them at 70%.

    A Web commentator wrote:

    http://www.blog.speculist.com/archives/2006_07.html
    --------------------------------------------------
    Hired Help

    Michael Anissimov writes that achieving Friendly AI is a
    serious proposition -- so serious, in fact, that we might
    ought to go ahead and pay somebody to do it.

    It's really not that radical a proposition. You want a
    radical proposition? How about this, written by the
    "someone" whom Michael has in mind to hire to solve the
    friendly AI problem (as quoted elsewhere on Accelerating Future):

    "There is no evil I have to accept because 'there’s nothing
    I can do about it'. There is no abused child, no oppressed peasant,
    no starving beggar, no crack-addicted infant, no cancer patient,
    literally no one that I cannot look squarely in the eye.
    I’m working to save everybody, heal the planet, solve all the
    problems of the world."

    If it was anybody else saying it, it would sound kind of,
    well, crazy.
    --------------------------------------------------

    Yeah, kind of. (Anybody **else**?!) :-0

    Some people have very little defense against this kind of
    "guru whammy", and other folks are all too willing to
    exploit it for their own ends.

    I found a rather provocative characterization of another
    putatively historical figure on the Web recently:

    "Jesus Christ, narcissist"
    by Sam Vaknin
    http://health.groups.yahoo.com/group/narcissisticabuse/message/5148

    > ["Utilitarian" continued:]
    >
    > 4. Claims of strong ethical implications flowing from limited influence
    > over AI development.

    You mean that if we can't control the outcome of the development
    of >H intelligence (in the form of AI), then maybe it's unethical
    to do it at all?

    I dunno, it sometimes seems to me that **some** >Hists are eager to
    instantiate Hugo de Garis' "artilect war" even before there's as
    good a reason for it as de Garis seems to think there would have
    to be. How ethical is that?

    I'm suspicious of claims of "superior" ethicality. It's part of
    the guru-whammy, for one thing. It's a rhetorical ploy to cut off
    criticism.

    Also, I think that ethical discussions among >Hists, like discussions
    of intelligence, tend to over-rely on formal deontological systems.

    I prefer Bertrand Russell's characterization:

    WOODROW WYATT: Well now, if you don't believe in religion,
    and you don't; and if you don't, on the whole,
    think much of the assorted rules thrown up by
    taboo morality, do you believe in any system of ethics?

    BERTRAND RUSSELL: Yes, but it's very difficult to separate
    ethics altogether from politics. Ethics, it seems
    to me, arises in this way: a man is inclined to do
    something which benefits him and harms his neighbor.
    Well, if it harms a good many of his neighbors, they
    will combine together and say, "Look, we don't like
    this sort of thing; we will see to it that it
    **doesn't** benefit the man." And that leads
    to the criminal law. Which is perfectly rational:
    it's a method of harmonizing the general and private
    interest.

    WYATT: But now, isn't it, though, rather inconvenient
    if everybody goes about with his own kind of private
    system of ethics, instead of accepting a general one?

    RUSSELL: It would be, if that were so, but in fact
    they're not so private as all that because, as I was
    saying a moment ago, they get embodied in the criminal
    law and, apart from the criminal law, in public
    approval and disapproval. People don't like to
    incur public disapproval, and in that way, the
    accepted code of morality becomes a very potent
    thing.

    -- LP "Bertrand Russell Speaking" (1959)
    (Woodrow Wyatt Interviews)

    Or Antonio R. Damasio:

    "The essence of ethical behavior does not begin with
    humans. Evidence from birds (such as ravens)
    and mammals (such as vampire bats, wolves, baboons,
    and chimpanzees) indicates that other species
    can behave in what appears, to our sophisticated
    eyes, as an ethical manner. They exhibit sympathy,
    attachments, embarrassment, dominant pride,
    and humble submission. They can censure and
    recompense certain actions of others. Vampire
    bats, for example, can detect cheaters among
    the food gatherers in their group and punish
    them accordingly. Ravens can do likewise. Such
    examples are especially convincing among primates,
    and are by no means confined to our nearest
    cousins, the big apes. Rhesus monkeys can
    behave in a seemingly altruistic manner toward
    other monkeys. In an intriguing experiment
    conducted by Robert Miller and discussed by
    Marc Hauser, monkeys abstained from pulling a
    chain that would deliver food to them if pulling
    the chain also caused another monkey to receive
    an electric shock. Some monkeys would not
    eat for hours, even days. Suggestively, the
    animals most likely to behave in an altruistic
    manner were those that knew the potential target
    of the shock. Here was compassion working better
    with those who are familiar than with strangers.
    The animals that previously had been shocked
    also were more likely to behave altruistically.
    Nonhumans can certainly cooperate or fail to do
    so, within their group. This may displease
    those who believe just behavior is an exclusively
    human trait. As if it were not enough to be
    told by Copernicus that we are not in the center
    of the universe, by Charles Darwin that we have
    humble origins, and by Sigmund Freud that we
    are not full masters of our behavior, we have
    to concede that even in the realm of ethics there
    are forerunners and descent."

    -- _Looking for Spinoza: Joy, Sorrow, and the Feeling Brain_,
    Chapter 4, "Ever Since Feelings" (pp. 160 - 161)

    OK, so call this 10%.

    > ["Utilitarian" continued]
    >
    > 5. Factors X, Y, Z...

    Yeah, well, there's the politics. Disappointingly right-wing.

    As Nietzsche realized, once you've rejected 100%
    pure foundationalist epistemology and ethics
    (derived from God, or the universal rules of Logic
    as discovered by Aristotle), then all **guarantees** are
    off. It **doesn't** mean that the world instantly dissolves
    into total chaos, but it **does** mean that things can
    drift, over decades, centuries, or millennia (to say
    nothing of geological ages) enough to make a lot
    of people radically motion-sick. And it does indeed
    mean that a powerful technology for the control of human
    behavior, if it were ever invented, could allow a few
    people to impose their will on the majority.
    "For into the midst of all these policies comes the Ring
    of Power, the foundation of Barad-dur, and the hope of Sauron.
    'Concerning this thing, my lords, you now all know enough for the
    understanding of our plight, and of Sauron's. If he regains it, your valour
    is vain, and his victory will be swift and complete: so complete that none
    can foresee the end of it while this world lasts.'" C. S. Lewis
    also points out this unpleasant truth in _The Abolition of Man_
    (his defense of foundationalist ethics; though unfortunately, IMO,
    the fact that a position admits of unpleasant consequences is in
    itself no ground for rejecting it as untrue).

    And apropos the transhumanists, as Dale once pointed out,
    "Lately, I have begun to suspect that at the temperamental
    core of the strange enthusiasm of many technophiles for
    so-called 'anarcho-capitalist' dreams of re-inventing the
    social order, is not finally so much a craving for liberty
    but for a fantasy, quite to the contrary, of TOTAL EXHAUSTIVE
    CONTROL. This helps account for the fact that negative
    libertarian technophiles seem less interested in discussing
    the proximate problems of nanoscale manufacturing and the
    modest benefits they will likely confer, but prefer to barrel
    ahead to paeans to the 'total control over matter.'
    They salivate over the title of the book _From Chance to Choice_
    (in fact, a fine and nuanced bioethical accounting of
    benefits and quandaries of genetic medicine), as if
    biotechnology is about to eliminate chance from our lives
    and substitute the full determination of morphology --
    when it is much more likely that genetic interventions
    will expand the chances we take along with the
    choices we make. Behind all their talk of efficiency
    and non-violence there lurks this weird micromanagerial
    fantasy of sitting down and actually contracting explicitly
    the terms of every public interaction in the hopes of
    controlling it, getting it right, dictating the details.
    As if the public life of freedom can be compassed
    in a prenuptial agreement. . .

    But with true freedom one has to accept an ineradicable
    vulnerability and a real measure of uncertainty. We live
    in societies with peers, boys. Give up the dreams of total
    invulnerability, total control, total specification.
    Take a chance, live a little. Fairness is actually
    possible. . ."

    The "weird micro-managerial fantasy" isn't so weird after
    all, it's a temperamental hankering after old (lost, for
    good, but a lot of smart people aren't ready to acknowledge
    it) religious certainties.

    "[T]hat we are not inviolate selves but a pandemonium
    or parliament of contesting inner voices, that we are
    constructed not given from eternity, that even universal
    mathematics might be as gapped and fissured as any
    poststructuralist text..., once deeply shocking, has
    become familiar news."

    -- Damien Broderick, _Transrealist Fiction_, p. 56

    "Please observe that the whole dilemma revolves pragmatically
    about the notion of the world's possibilities. Intellectually,
    rationalism invokes its absolute principle of unity as a
    ground of possibility for the many facts. Emotionally, it
    sees it as a container and limiter of possibilities, a
    guarantee that the upshot shall be good. Taken in this way,
    the absolute makes all good things certain, and all bad
    things impossible (in the eternal, namely), and may be
    said to transmute the entire category of possibility into
    categories more secure. One sees at this point that
    the great religious difference lies between the men who
    insist that the world **must and shall be**, and those who
    are contented with believing that the world **may be**, saved.
    The whole clash of rationalistic and empiricist religion
    is thus over the validity of possibility. . .

    In particular **this** query has always come home to me:
    May not the claims of tender-mindedness go too far?
    May not the notion of a world already saved in toto
    anyhow, be too saccharine to stand? May not religious
    optimism be too idyllic? Must **all** be saved? Is **no**
    price to be paid in the work of salvation? Is the last
    word sweet? Is all 'yes, yes' in the universe? Doesn't
    the fact of 'no' stand at the very core of life?
    Doesn't the very 'seriousness' that we attribute to life
    mean that ineluctable noes and losses form a part of it,
    that there are genuine sacrifices somewhere, and that
    something permanently drastic and bitter always
    remains at the bottom of its cup?

    I can not speak officially as a pragmatist here;
    all I can say is that my own pragmatism offers no
    objection to my taking sides with this more moralistic
    view, and giving up the claim of total reconciliation.
    The possibility of this is involved in the pragmatistic
    willingness to treat pluralism as a serious hypothesis.
    In the end it is our faith and not our logic that
    decides such questions, and I deny the right of any
    pretended logic to veto my own faith. I find myself
    willing to take the universe to be really dangerous
    and adventurous, without therefore backing out and
    crying 'no play.' I am willing to think that the
    prodigal-son attitude, open to us as it is in many
    vicissitudes, is not the right and final attitude
    towards the whole of life. I am willing that there
    should be real losses and real losers, and no total
    preservation of all that is. I can believe in the
    ideal as an ultimate, not as an origin, and as an
    extract, not the whole. When the cup is poured off,
    the dregs are left behind forever, but the possibility
    of what is poured off is sweet enough to accept."

    -- William James, _Pragmatism_,
    Lecture 8, "Pragmatism and Religion"

    So call that another 10%.

  9. Anne Corwin wrote:

    > I guess I just see this kind of media piece (the BBC thing)
    > as a "future cultural artifact" moreso than anything else.

    Yes, although fiction seems to hold its "cultural artifactual"
    value longer than non-fiction.

    From my childhood, _The Outer Limits_ is still eminently watchable.
    That was more psychological/Gothic horror than SF, but "The Sixth
    Finger" is right-on-the-money transhumanistically (not surprising, since
    it was a rip-off of Shaw's _Back to Methuselah_ -- not that I hold
    that against it). David McCallum's portrayal of the >H (**not** >Hist ;-> )
    Gwyllm Griffiths is a fantastic piece of acting. The narcissism (so irrational
    that it's almost an embarrassing plot hole, for somebody who's supposed to be so smart,
    but we can forgive it in the transcendental light of the finale) is
    there, too -- "Life should go forward, see, not backward. But how can
    a man go forward here? -- it's the most backward place in the world!"
    "You'll go forward, Gwyllm; you're smarter than the others." "Well I'm
    too smart to go on eating coal dust for the rest of my life. All I
    need is a chance to use my brain, and I'd show 'em. I'd be
    drivin' 'round in a sports car with a big gold ring on my finger."

    _Star Trek_ is still eminently watchable, with glosses on >Hist themes
    (in "Where No Man Has Gone Before", "Errand of Mercy", "What Are
    Little Girls Made Of?" and "The Return of the Archons", among
    other episodes) deserving of more credit than the contemporary
    >Hists have ever given them.

    And these were both mainstream network (ABC and NBC, respectively)
    TV shows, for cryin' out loud!

    Even a cartoon like _The Jetsons_ retains its entertainment value
    (all the more so in that flying cars that fold up into briefcases
    and apartment buildings on stalks that can be elevated above
    the rainclouds at the touch of a button have yet to materialize).

    There was a non-fiction show that came on Sunday nights (IIRC)
    called "The 21st Century" that I used to watch religiously.
    Narrated by Walter Cronkite, of all people. The only thing I
    remember about that show now is the opening title-sequence that
    showed a counter running from the current year (1967, or whatever it was)
    up through the 70's, 80's, and 90's, and finally rolling up
    2000 and 2001, where it stopped. (My God, all those years have
    been lived through, now.)
    http://www.retrofuture.com/spaceage.html

    Syd Mead's artwork (which I remember well from 1960's car magazines
    like _Motor Trend_) is still **fabulous**.
    http://www.scrubbles.net/sydmead.html

    An exception I'll grant to the ephemerality of non-fiction is
    Arthur C. Clarke's _Profiles of the Future_, which is still
    eminently readable even though it's starting to be disappointingly
    off-target.

  10. "Utilitarian" wrote:

    > GOFAI/AI based on more formal algorithms. (I'm not as convinced as
    > you appear to be that this area, defined broadly, won't produce
    > results. I would say that improvements in pattern recognition
    > and statistical algorithms (in search, translation, biometrics)
    > have been quite significant. . .

    Maybe even more significant than we think!

    An entertaining thread on /. from a couple of years ago:

    > [C]ompany GTX Global. . . claim[s] they've developed the
    > first 'true' AI.

    http://developers.slashdot.org/article.pl?sid=05/12/03/065211

    Hey, wasn't "GTX" the name of the media/telecommunications
    conglomerate in James Tiptree, Jr.'s story "The Girl Who
    Was Plugged In"? (Forerunner of William Gibson's Sense/Net
    and Tally Isham, the girl with the Zeiss-Ikon eyes.)

    From the thread:

    > Interesting to see how the guy went from selling satellite TV
    > equipment to having the best AI ever. This is a truly amazing
    > trajectory. . .
