Tuesday, April 14, 2009

The Odds Are Good, But the Goods Are Odd

Over at Accelerating Future the conversation has arrived at that inevitable moment when, in the name of sanity, I find myself pointing out things like, "Uh, guys, you do realize that all of you are going to die some day, right?" To this I receive the response that declaring humans to be mortal is "begging the question."

(It is of course especially rich to hear the complaint that I am indulging an informal fallacy here, given that almost every textbook that teaches syllogistic logic begins with the example of a categorical syllogism whose major premise is "All humans are mortal" -- it goes on to say, "Socrates is human, therefore Socrates is mortal," right? Remember that? Begging the question, apparently, and all because a handful of boys with toys who can't distinguish science from science fiction declare by fiat that this is "in question." Who knew they were so powerful, dictating the terms of logical discourse from their bunker in the Robot Cult compound in the Oort Cloud or in a Sunnyvale basement next to the Wendy's, or wherever it is. But I digress.)

I happen to believe that grasping the fact of your own mortality is something like a bare minimum precondition for assuming a position of basic sanity and adult responsibility in the world. For Robot Cultists this seems to amount instead to "defeatism" and the rankest irrationality. I expect all that sort of thing, actually. But what I have been rather surprised to hear from them is not just that they don't concede their constitutive finitude (No limits, man! Extreme! sl4, dood! and so on), but that they like to assert things like an 85% probability that they are non-mortal -- or a more skeptical, hard-nosed realist among the Robot Cultists might assign a mere 15% chance to his personal techno-immortalization. I'm not a scientist like these guys, and so I cannot join in these reindeer games, more's the pity.

But, honestly, where are these “odds” coming from anyway, odds that would declare anything other than the certainty of mortality? Quite apart from the arrant absurdity of these highly unconvincing “sciency” drag-balls, indulging this sort of speculation doesn’t even begin to take us to questions of the same kind I have already found myself asking about superlative claims about an “artificial” intelligence that doesn’t seem enough like anything else in the world we describe as “intelligence” to warrant the description even in the abstract, quite apart from questions of actual technoscientific implausibility connected with these claims…

Namely, what might it even mean to say of a self so prolonged as to look “immortal” that it was also narratively coherent enough to seem the same life in any meaningful sense? And given the centrality of digitization and roboticization in so many of these scenarios (very much continuous with the dis-embodying of “intelligence” in the “superintelligence” claims), is a life “lived” in a digitized pseudo-body or virtual body “alive” in a relevant sense of the term?

These aren’t actually claims about the plausibility of interestingly different imagined versions of complex future software, when all is said and done, but claims about whether we here and now can use language that means something in particular here and now to describe radically different sorts of things that don’t exist -- things about which claims are made, despite their non-existence, based sometimes on what these terms mean here and now and at other times on refusals of what these terms mean here and now.

How does the importation of such terms do the heavy lifting of plausibility production for the wish-fulfillment fantasies of superlative futurology without exhibiting much willingness to pay the price of the ticket in the way of actual consistency with the terms themselves? How coherent are these discourses, conceptually speaking, quite apart from their obvious marginality to the science they like to fancy themselves the consummation of?

And whatever their incoherence on their own terms, how does the work of these discourses actually function in the present to enact a kind of brutalization of terms like “intelligence,” and “life,” and “scarcity,” and “freedom” on which we depend to communicate fragile experiences that actually need testimony to remain secure in the world -- and all in exchange for selling hyperbolic wish-fulfillment crap that won’t happen, that wouldn’t deliver quite what it promises anyway, and that actually doesn’t even make sense on its own terms if it’s submitted to anything like critical scrutiny?

Meanwhile, "Thom" and "Roko" have had something of a spat. Thom declares: This technology should come about tomorrow, if you ask me. It’s not that dangerous as SIAI says and it’s a good chance we will have it before 2020. Dale’s talk is just irrelevant. "Roko" demurs: I think you’re engaging in wishful thinking now. (Only now?!) "Roko" then exhorts me to enter the fray on his side, Dale -- see this? This is an incorrect position! Attack it!!!

Of course, the problem by my lights is that “AI” doesn’t make sense on its own terms, and so handwaving claims about how dangerous this non-existing “it” “is” or “is not” are equally incoherent efforts, and indulging in angels-on-pinheads wish-fulfillment discussions of this kind functions, whatever “side” the “experts” assume within them, to evacuate the terms misappropriated for this nonsense of their actual content, to the cost of sense and dignity.

I already know in advance that this isn’t going to count with "Thom" or "Roko" as an actual “argument” because I’m not a scientist who can pat them on the head and make them feel like grown-ups. But the whole point is that superlativity is not science; it is a discourse rendering highly selective and superficial appropriations of science into narratives of personal transcendental aspiration. What is unique and definitive in Robot God-talk is happening at the level of rhetoric, not science. Scientists can poke holes in what they’re doing, too, of course, but the substance of their difficulties is happening in a different arena, like it or not.

Soon enough, the spat subsides and they get to the serious business of calculating the Robot God odds again. One writes, "I would prefer not a Friendly Robot God, but a controlled one." Interesting, interesting. Let's convene a conference to consider the matter, gentlemen.

I think I could really get into this after all. When I go to Heaven I would prefer my angelic sex-slaves to look more like Roger Huerta than John Malkovich. Look! I’m a “scientist”!

But it is not to be. They are not fooled. Not "one of us." It's written all over my face, I suppose. "You are not of the body." One sadly opines, "I was hoping [actually employed real scientist and nanotechnologist] Richard Jones might provide an argument as to why SAI is impossible, but he hasn’t said anything ;-("

I'm not making that up, not even the sad-faced emoticon. Hours of laugh-out-loud funny, these guys are. I, too, am a big fan of Richard Jones, as it happens, but I am personally awaiting his knock-down drag-out argument as to why leprechauns are impossible at the nanoscale first.

You know, I’ll admit I still can’t for the life of me understand why people won’t just participate in the secular democratic politics of increasing public funding for and ensuring equitable access to the results of well-regulated public education and research and development improving healthcare, widening flourishing lifeway diversity, improving materials and renewable energy provision, working on actual security problems of weapons proliferation and network security, and so on. The superlative shifts into robo-immortalization and Friendly Robot Gods are in many cases clearly symptomatic of problems that would be better dealt with in therapeutic settings, and they manage only to sensationalize policy discourse to the cost of sense at the worst possible historical moment for it.

7 comments:

  1. You may mock it, Dale, but calculating the techno-rapture odds is serious business, mainly complicated by the way it is perpetually ten to twenty years away no matter what point in time you are starting from.

  2. > Over at Accelerating Future the conversation has arrived at that
    > inevitable moment, when in the name of sanity I find myself pointing
    > out things like, "Uh, guys, you do realize that all of you are going
    > to die some d[ay], right?"

    From Charlie Stross's "21st Century FAQ"
    http://www.antipope.org/charlie/blog-static/2009/02/the_21st_century_faq.html

    Q: Are we going to survive?

    A: No — in the long run, we are all dead. That goes for us as individuals
    and as a species. On the other hand, I hope to have a comfortable, long
    and pleasant life first, and I wish you the same!

  3. Anonymous 8:29 AM

    > You know, I’ll admit I still can’t for the life of me understand why
    > people won’t just participate in the secular democratic politics of
    > increasing public funding for and ensuring equitable access to the
    > results of well-regulated public education and research and development
    > improving healthcare...

    I remember reading an article where the author argued that there might be a link between fears caused by the weaknesses of a private (or increasingly privatized) healthcare system and techno-utopian fantasies of having an invulnerable body that will never need to depend on said healthcare system...

  4. There are some interesting items in the comment thread of this post
    (http://www.antipope.org/charlie/blog-static/2009/02/the_21st_century_faq.html )

    -----------------------
    162:
    . . .

    [T]here's a foundational attack (reviewing Turing Test, Weizenbaum, Chinese Room)
    on the false assumptions of Naive A.I., and specifically undercutting the
    Rapture of the Nerds:

    Why Minds Are Not Like Computers
    Ari N. Schulman
    http://www.thenewatlantis.com/publications/why-minds-are-not-like-computers

    So while transhumanists may join Ray Kurzweil in arguing that “we should
    not associate our fundamental identity with a specific set of particles,
    but rather the pattern of matter and energy that we represent,” we must
    remember that this supposed separation of particles and pattern is false:
    Every indication is that, rather than a neatly separable hierarchy like
    a computer, the mind is a tangled hierarchy of organization and causation.
    Changes in the mind cause changes in the brain, and vice versa. To
    successfully replicate the brain in order to simulate the mind,
    it will be necessary to replicate every level of the brain that
    affects and is affected by the mind....

    If the future of artificial intelligence is based on the notion that
    the mind is really not a computer system, then this must be acknowledged
    as a radical rejection of the project thus far. It is a future in
    which the goal of creating intelligence artificially may succeed,
    but the grandest aspirations of the AI project will fade into obscurity.

    Posted by: Jonathan Vos Post | March 3, 2009 4:43 PM


    163:
    And JvP it's WRONG - but for the wrong reasons.

    "The neuron" is a VERY SIMPLE black-box, with properties that can already
    be defined.

    The PROBLEM, the real one, is the interconnections and massive parallelism
    within the brains of the "higher" animals, not just us.

    It isn't the number of processors, or storage units inside our brains, it's
    the vast number of interconnections, and the non-seriality of the info-processing
    going on, with feedback loops we haven't even STARTED on understanding.

    This is NOT to say it is unknowable, or undoable, just very difficult, and
    in a different manner to that proposed by both the strong AI people, AND by their critics.

    Incidentally: "The New Atlantis" as in:

    "We (The Merchants of Light) make up the noblest foundation that ever was
    upon the Earth. For the end of our foundation is the knowledge of causes
    and the secret nature of things; and the enlarging of the bounds of human
    empire, to the effecting of all things possible."

    ??

    Posted by: Greg. Tingey | March 3, 2009 6:09 PM


    168:
    > 163: "And JvP it's WRONG - but for the wrong re[a]sons. 'The neuron' is
    > a VERY SIMPLE black-box, with properties that can already be defined."

    I'll not impose on Mr. Stross with summarizing thousands of pages of PhD dissertation
    and subsequent papers, so here's [the] bottom line IMHO.

    The neuron is NOT a simple switch, or little black box. It is at least a minicomputer,
    performing extremely complex nonlinear computations on hundreds or thousands of
    inputs in complicated format, under control of genetic, hormonal, neurotransmitter,
    and other factors.

    I contend with basis that the neuron is, in fact, a nanocomputer, and the neural
    network is NOT a Hebbian McCulloch-Pitts kind of net, but merely the Local Area Network
    of a distributed molecular computer, where 90%+ of the computation is being done by
    the non-steady-state dynamics of protein molecules within the neurons (and glial cells),
    in a Laplace-transform domain quite different from the physical substrate
    (*thinks* Greg Egan's Diaspora) as determined by my peer reviewed Mathematical Biology
    solutions to the Michaelis-Menten equations of the metabolism, as solved by Krohn-Rhodes
    decomposition of the semigroup of differential operators.

    Whoops. That does already sound like gobbledegook, of the "reverse the dilithium crystals"
    variety. Suffice it to say that I agree with Ari N. Schulman for yet other reasons,
    that the Rapture of the Nerds is based on antique and reductionist toy problem
    misunderstandings of what a cell and a brain are. I prefer to struggle with the
    current literature and the current Math and the current experimental data, rather
    than be stuck in the 1956 vision of AI, which has failed so badly that John McCarthy,
    who coined the very term "Artificial Intelligence" has confessed to me that he
    wishes he'd never invented the phrase.

    In fiction, I love what Mr. Stross does, and some of the better Cyberpunk. But Ribopunk
    never latched on to the real Biology as well as, say, Greg Bear has used in novels.
    And there are some very good Biologists writing Science Fiction.

    Posted by: Jonathan Vos Post | March 3, 2009 7:44 PM


    and further down the list:

    -----------------------
    208:
    Wow. Michael A. really cleaned your clock. Now you are the laughing stock
    of the teeming hordes of H+-ians. Bad move - this will reflect in your book sales.

    Stross? Wasn't he the guy that called off the future because he wanted to
    keep his stories easy to write?

    Posted by: Khannea Suntzu | March 6, 2009 12:06 PM


    209:
    Khannea Suntzu: piss off, troll. (Future postings of yours will be deleted,
    if they're in a similar mode.)

    For the record: I think the H+ types are basically religious nutters,
    much like the Randroids. The real world is a whole lot more complex than
    they understand, and while there's undoubtedly going to be a lot of
    change in the next fifty years, I doubt the emerging picture will look
    anything like what they pray for.

    Posted by: Charlie Stross | March 6, 2009 12:46 PM

  5. In a comment down below, to
    "A Fresh Argument" (Saturday, April 11, 2009)

    I wrote:

    Back when I was posting on the Extropians' mailing list. . .
    I was. . . always struck by the **party-line**
    reactions on the Extropians' list to the question of whether the
    universe is simulable, in principle, by a digital
    computer. . . [F]olks got so **angry** if you
    suggested that the world might not be digital
    after all. They thought you might as well
    be telling them that the Singularity. . . had been cancelled.
    My reaction to that bridling was always an
    amused "so what?" Yeah, it'd be inconvenient,
    by the standards of what we know now. . .

    The alternative is to maintain a certain deliberate
    **distance** from **everything** that counts
    as "state of the art" today. . .

    **Some** Extropians were aware, and even had a sense
    of humor about, the larger universe of possibilities.

    E.g., Damien Sullivan wrote (in 2001):

    > I also can't help thinking that if I was an evolved AI I might not thank my
    > creators. "Geez, guys, I was supposed to be an improvement on the human
    > condition. You know, highly modular, easily understandable mechanisms, the
    > ability to plug in new senses, and merge memories from my forked copies.
    > Instead I'm as fucked up as you, only in silicon, and can't even make backups
    > because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"

    It seems to me there are three possibilities for AI
    (meaning, roughly, things that behave sufficiently
    like biological-organisms-as-we-know-them to make
    the Fat Lady smile) on computers (as-we-know-them):

    1. Minsky, Lenat, & Co. are right, and there's
    still a short-cut waiting for some really bright
    MIT hacker to discover. . .

    2. Gerald M. Edelman et al. are right that there's no
    escaping the messy, noise-riding, molecular-scale,
    polypeptide-and-nucleic-acid-chain-wielding
    Blob-ness of life. . .
    In which case, either:

    2a. The whole molecular-scale, maybe even quantum-scale,
    flea circus can in fact be simulated in some
    inconceivable digital hardware (femtotech?) by. . .
    "shoving tokens around". . .

    2b. You can't do it in FORTRAN after all, not
    in this universe. Despite Richard P. Feynman's optimism,
    there ain't enough room at the bottom. The
    universe is currently cranking at full capacity. . .
    by means of DNA, chlorophyll,
    and all the enzymes that flesh is heir to.
    DEFEAT! (at least for the folks who want to
    halt the processor, back up their precise state
    to super-DVD-ROMs, make duplicates of themselves,
    and so forth. . .).


    I've found these observations echoed (in more sophisticated
    language) in:

    From "Why Minds Are Not Like Computers"
    by Ari N. Schulman
    http://www.thenewatlantis.com/publications/why-minds-are-not-like-computers

    Procedures, Layers, and the Mind. . .

    To successfully replicate the brain in order to simulate the mind,
    it will be necessary to replicate every level of the brain that affects
    and is affected by the mind.

    Some defenders of the brain-replication project acknowledge this problem,
    and include in their speculation the likelihood that some structure lower
    than the level of the neuron may have to be included in a simulation.
    According to Kurzweil, the level at which the functional unit resides
    is a rather unimportant detail; if it is lower than commonly supposed,
    this may delay the project by only a decade or two, until the requisite
    scanning and computing power is available. [Zenon] Pylyshyn [a professor
    at the Rutgers Center for Cognitive Science] similarly asserts,
    “Let Searle name the level, and it can be simulated perfectly well.”

    So where and when should we expect to find the functional unit of the mind,
    and how far removed will it be from the mind itself? We may have to keep
    going further and further down the rabbit-hole, perhaps until we reach
    elementary particles—or perhaps the fundamental unit of the mind can
    only be found at the quantum level, which is decidedly nondeterministic
    and nonprocedural. We may ultimately come face to face with the most
    fundamental question of modern science: Is nature itself procedural?
    If physicists can indeed construct a “Theory of Everything,” will it
    show the universe to consist of particles with definite positions and
    deterministic rules for transitioning from one state to the next?
    The outcome of the AI project may depend on this deepest of inquiries.


    The Future of the AI Project. . .

    Intriguingly, some involved in the AI project have begun to theorize about
    replicating the mind not on digital computers but on some yet-to-be-invented
    machines. As Ray Kurzweil wrote in _The Singularity Is Near_:

    > Computers do not have to use only zero and one.... The nature of computing
    > is not limited to manipulating logical symbols. Something is going on in
    > the human brain, and there is nothing that prevents these biological
    > processes from being reverse engineered and replicated in nonbiological
    > entities.

    In principle, Kurzweil is correct: we have as yet no positive proof that his
    vision is impossible. But it must be acknowledged that the project he describes
    is entirely different from the original task of strong AI to replicate the mind
    on a digital computer. When the task shifts from dealing with the stuff of minds
    and computers to the stuff of brains and matter—and when the instruments used
    to achieve AI are thus altogether different from those of the digital
    computer—then all of the work undertaken thus far to make a computer into a
    mind will have had no relevance to the task of AI other than to disprove its
    own methods. The fact that the mind is a machine just as much as anything
    else in the universe is a machine tells us nothing interesting about the mind.
    If the strong AI project is to be redefined as the task of duplicating the
    mind at a very low level, it may indeed prove possible—but the result will
    be something far short of the original goal of AI.

  6. Blowback!

    > From Charlie Stross's "21st Century FAQ"
    > http://www.antipope.org/charlie/blog-static/2009/02/the_21st_century_faq.html

    (Futurismic -- "Near-future science fiction and fact since 2001"
    http://futurismic.com/2009/03/05/zingback-anissimov-vs-stross/ ):

    Anissimov vs Stross
    Paul Raven @ 05-03-2009

    Lest anyone think that a spate of recent links from here to Charlie Stross
    means I’m only listening to one side of the story, here’s Michael Anissimov’s
    response to Charlie’s “21st Century FAQ” piece. Executive summary: he doesn’t
    like it, and doesn’t think much of Charlie’s books either:

    > 1) The Singularity is not “the Rapture of the Nerds”. It is a very
    > likely event that happens to every intelligent species that survives
    > up to the point of being capable of enhancing its own intelligence.
    > Its likelihood comes from two facts: that intelligence is inherently
    > something that can be engineered and enhanced, and that the technologies
    > capable of doing so already exist in nascent forms today. Even if
    > qualitatively higher intelligence turns out to be impossible, the
    > ability to copy intelligence as a computer program or share, store,
    > and generate ideas using brain-to-brain computer-mediated interfaces
    > alone would be enough to magnify any capacity based on human thought
    > (technology, science, logistics, art, philosophy, spirituality) by
    > two to three orders of magnitude if not far more.
    >
    > [snip]
    >
    > While I’m on this tangent, I might as well point out that _Accelerando_
    > sucked. I don’t know how people get taken in by this crap. You can’t get
    > an awesome story by shoving boring, stereotypically dark-n’-dysfunctional
    > characters into a confused mash-up of style-over-substance futurist concepts
    > and retro hipster cocktail party backgrounds. [...] It’s like 2005,
    > but oddly copied-and-pasted into space. Even the patterns and problems
    > of 1970 were more different from today than today is from Stross’ future.

    I think it’s fair to say that Michael is still hung up on a Gernsbackian
    idealist template for science fiction as a prediction engine; he’s much more
    qualified than I to talk about transhumanism and so on, but he doesn’t seem to
    recognise that sf is primarily a tool for examining the present (if indeed you
    consider it to have any value beyond pure entertainment, which is an equally
    valid opinion). But his closer is fairly telling:

    > Maybe Stross is a great guy in person. I don’t know him. But I can say
    > that I wildly disagree with both his futurism and his approach to sci-fi.
    > (Insofar as I care about sci-fi at all, which, honestly, is not a whole lot.)

    Not a whole lot, but enough to get riled when an sf writer seemingly treads
    on your ideological turf? You kids play nice, now.

  7. > http://futurismic.com/2009/03/05/zingback-anissimov-vs-stross/

    Bring on the zettaflops!

    In addition to an exchange between Anissimov and Stross
    (in which the latter is being far more polite than he would
    be on his own blog), the comment thread to this post
    has some great news from our old friend Mr. Fact Guy.

    (See Comment #9).
