Friday, November 16, 2007

Serious Science Vs. Superlative Silliness (A Recurring Feature):

[via CNNMoney, h/t James Fehlinger]
Mitch Kapor [is] the co-founder and former CEO of Lotus Development. In 2002, Kapor made a much-publicized $20,000 bet with [Raymond] Kurzweil that a computer would not be able to demonstrate consciousness at a human level by 2029.

But his quibbles with Kurzweil run much deeper than that debate. He rejects Kurzweil's theories about the implications of accelerating technology as pseudo-evangelistic bunk. "It's intelligent design for the IQ 140 people," he says. "This proposition that we're heading to this point at which everything is going to be just unimaginably different -- it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me, no matter what numbers he marshals in favor of it. He's very good at having a lot of curves that point up to the right."

About this recurring feature: I like to post reactions from qualified technoscientific figures to Superlative and Techno-Utopian claims. These people reaffirm, from the position of their different expertise, conclusions I have arrived at from my own perspective. My own critique of what I call "Superlative Technology Discourse" is lodged primarily at the level of culture, discourse, rhetoric, and political theory; these also happen to be precisely the topics that both my training and my temperament best suit me to talk about in the first place.

Superlative Technocentrics sometimes like to castigate me for my refusal to engage with them in what they call "technical debates" on what they regard as the "hard science" supporting their Superlative claims. This is because many Superlative Technocentrics like to fancy themselves very "scientific," despite the fact that their claims and aspirations inevitably take them far afield of the qualified scientific consensus in the very fields on which they depend for whatever substance their techno-utopian True Belief can occasionally summon.

There are two things to keep in mind in enjoying this recurring feature. First, it is perfectly legitimate to lodge a critique in the form I have done (even though other modes of critique, including more strictly scientific ones, are also legitimate and available from those better qualified to make them), and those who would productively engage with me about my own critique, whether they agree with it or not, should be prepared to engage with me on the terms relevant to the critique as it is actually offered. This should go without saying. Second, it occurs to me that many of those who like to ridicule my effete muzzy humanistic preoccupations as compared to their own solid, stolid He-Man science mistake for incomprehension of, indifference to, or even hostility toward science what is in fact my own technoscientifically literate recognition that I know enough science to know when I don't know enough to pretend to expertise, and so defer to the reasonable consensus, just as they mistake for a championing of science their own uncaveated, hyperbolic, palpably symptomatic, often essentially faithful and hence actively unscientific claims. This is a Fault.

For an informal collection of texts offering up the general contours of my own critique of Superlative Technology Discourses, and especially of the techno-utopian rhetoric, subcultures, and "movements" of various Singularitarians, Technological Immortalists, Nanosantalogists, Transhumanists, Eugenicists, Extropians, Cybernetic Totalists, and self-appointed Technocratic Elites, I refer you to my occasionally updated Superlative Summary. I also always welcome from readers pointers to quotations and critiques available online from actually qualified technoscientific figures suitable for this recurring feature.

9 comments:

  1. Anonymous 6:09 PM

    http://www.longbets.org/1
    Kapor's actual argument is here. I wouldn't characterize it as any more technical than Dale's. It emphasizes embodiment (do digital cameras and voice synthesis suffice, or only biological optic nerves and muscles?), emotion, consciousness (philosophical zombiedom is a separate issue from the problem-solving ability of the AI and its capacity to influence the world), and so on.

    If anything, references to 'spiritual' or 'transpersonal' experience as a special and distinct human capacity look suspiciously like a reference to soul-stuff, which would put Kapor's argument in a fuzzier space than Dale's.

    "We are conscious beings, capable of reflection and self-awareness; the realm of the spiritual or transpersonal (to pick a less loaded word) is something we can be part of and which is part of us."

    From my viewpoint, there are far better reasons to be skeptical about near-term AI, such as the quantity and quality of brain-hours applied to the problems by talented computer scientists, and the fact that the current solutions to many AI problems either suffer from combinatorial explosions that cannot be remedied by plausible increases in computing power or get trapped in local maxima. Those arguments render claims like those of Voss or Goertzel ('we will probably produce AI in a matter of years with minimal financial and human resources') absurd. But poetry about the supposedly huge added difficulty of introducing emotions after developing a good understanding of the neurons does not impress me (compare the sheer complexity of the neurons and synapses to the systemwide chemical effects associated with emotions: each is difficult to understand, but the former tremendously more so, and Kapor emphasizes the latter).
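
    A minimal sketch of the combinatorial-explosion point (my own toy illustration, assuming a chess-like branching factor of 30; nothing here is specific to any particular AI system):

      # Exhaustive search over a tree with branching factor b to depth d
      # visits b**d nodes, so hardware speedups buy only constant extra depth.
      def nodes(branching: int, depth: int) -> int:
          return branching ** depth

      for depth in (5, 10, 20, 40):
          print(depth, nodes(30, depth))
      # A 1000x faster machine reaches only ~2 plies deeper here, since
      # 30**2 == 900: exponential growth swallows polynomial speedups.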

    Nevertheless, despite my doubts about Kapor in particular as a spokesperson, the AI community as a whole is clearly much more skeptical than transhumanists are about AI within 50 years, and that is very powerful and important evidence against prognostications of near-term or medium-term AI. On the other hand, I wonder to what extent their estimates of future progress factor in such considerations as an increase in computer scientists from China, the development of more effective scientific education, the use of biotech to improve intelligence, and various other factors from outside their field.

  2. "Utilitarian" wrote:

    > If anything, references to 'spiritual' or 'transpersonal'
    > experience as a special and distinct human capacity look
    > suspiciously like a reference to a soul-stuff, which would
    > put Kapor's argument in a fuzzier space than Dale's.

    Of course, Kurzweil gets away with using the word "spiritual".
    ;->

    By "transpersonal" Kapor may have meant something like
    "intersubjective" or socially embedded (I dunno, I haven't
    actually read the article).

    This is a difficulty mentioned by Mark Humphrys:

    "The argument I am developing is that there may be
    limits to AI, not because the hypothesis of "strong AI"
    is false, but for more mundane reasons. The argument, which
    I develop further on my website, is that you can't expect
    to build single isolated AI's, alone in laboratories,
    and get anywhere. Unless the creatures can have the space
    in which to evolve a rich culture, with repeated social
    interaction with things that are like them, you can't really
    expect to get beyond a certain stage. If we work up from
    insects to dogs to Homo erectus to humans, the AI project
    will I claim fall apart somewhere around the Homo erectus
    stage because of our inability to provide them with a
    real cultural environment. We cannot make millions of
    these things and give them the living space in which
    to develop their own primitive societies, language
    and cultures. We can't because the planet is already full.
    That's the main argument, and the reason for the title
    of this talk."
    http://www.computing.dcu.ie/~humphrys/newsci.html

    Shrug. YMMV.

    My main beefs with the >Hists on the subject
    of AI have to do with the retro-GOFAIish assumptions
    (which seem to be more influenced by Ayn Rand and
    right-wing politics than by familiarity with the literature
    of science) and the lack of recognition of the gap of
    sheer physical complexity between human artifacts and
    biological systems. (I once had an exchange on the
    Extropians' mailing list with somebody who didn't think
    it was unreasonable to posit a one-to-one correspondence
    between the signal-processing capabilities [or at least those
    "necessary for intelligence"] of a single biological neuron
    and a single transistor.)

    But as I've said before, if I'm going to get to live to
    see 3D molecular-scale optical computing, or whatever,
    bring it on! I like gadgets as much as anybody, and
    self-driving cars, as one practical consumer-level application
    of "sophisticated" AI, would be cool (if cars can be
    had at all as part of a sustainable global future, etc.,
    etc., etc.)

  3. BTW, apropos the curiously retro beliefs of the >Hists
    about AI -- you'd think they could do at least as well
    as an author of techno-thrillers who has to research the field
    as a non-expert before using it as a MacGuffin in a mainstream novel.

    But no.

    From my e-mail archive:

    BTW, I'm reminded of a book I read a few years ago --
    _The Society of the Mind_ by Eric L. Harry (1996)
    ( http://www.amazon.com/exec/obidos/ASIN/0694516422 ).
    An SF novel for people who don't read SF novels,
    something along the lines of a Michael Crichton book, it was
    billed as a "cyber-thriller". The author is "a corporate/securities
    attorney and expert on military affairs" ( http://www.eharry.com/ )
    who acknowledges thus his debt for both the title of the book and the ideas
    therein: "[M]y acknowledgments go also to the many great
    thinkers whose works I read with awe. To Daniel C. Dennett,...
    author of _Consciousness Explained_... To Marvin Minsky...
    whose seminal collection of essays titled _The Society of Mind_
    provided much more to this author than the obvious. And to the
    visionary Hans Moravec... for the profound dreams and nightmares
    of _Mind Children: The Future of Robot and Human Intelligence_."

    _Society_ is not nearly as stylish as the cyberpunk of Gibson et al., or as
    gratifying to SF sophisticates as, say, an Egan or Banks
    novel, but as an introduction to AI-ish memes for folks who don't
    normally read about such things, I think it was a damned decent
    job. The protagonist is the smart and beautiful (what else?) Laura Aldridge,
    a Harvard psychologist who has been hired as a consultant by the
    mysterious Joseph Gray, the world's richest man (at 24) and a supergenius
    who runs his corporation from an island in international waters.

    pp. 20 - 21: "'It was so frustrating' -- Paulus grabbed the air with clenched fists
    in front of him -- 'to be around Joe... He once said when I tried to draw
    him out that it wasn't that he didn't **want** to talk to people, it was
    just that it would take too long to define terms for them... [T]here were
    thoughts and concepts flourishing in his head that had no definition
    in the English language. In **any** language. He even said he
    thought of things and then assigned them to nonverbal labels that
    he called . . . Oh, dear. What did he call them?' the old man said,
    looking suddenly perplexed. 'Tokens?' Laura asked. 'Yes!...
    The boy was the epitome of a genius. I don't mean your garden-variety
    high-IQ types. This place is brimming with those... I mean the
    transcendent intelligence required to synthesize completely different
    disciplines... '"

    p. 57: "[Laura recalled] the milestones of Gray's
    biography. His prediction of market demand for PVCs in 1984 -- his
    first billion. His prediction of the great stock market crash of 1987 --
    tens of billions in wealth. His cornering of the high-definition TV market
    [oops ;-> ]..."

    Anyway, a couple of very memorable scenes in this book
    depict the gentleness training required by Gray's artificially-intelligent
    robots before they can be trusted around people and other living
    things:

    pp. 241 - 242: "'Dr. Aldridge, I'd like to introduce you to number 1.2.01R --
    otherwise known as Hightop.' The beam of light swung around... There
    sat an enormous Model Eight robot... Its 'face' was oddly human in
    appearance despite having lenses for eyes and vented membranes
    where its mouth and nose should be. It had all the same joints as on a
    human -- elbows, knees, wrists, ankles, et cetera... 'Hightop sends his
    regards,' Gray said,... 'and he asks that I shine the light on you. Do you mind?'
    Laura shrugged and shook her head... 'Hightop thinks you're pretty,' Gray
    said... 'Why do you call him "Hightop"?' ... 'We put the robots in tactile
    rooms to expose them to everyday items. The idea is they won't then go
    around crushing things when we let them out into the real world. Well,
    Hightop fell in love with some size fourteen triple-E sneakers. He figured
    out that they went on his feet, and damned if they didn't fit. One of the
    techs laced them up, and he wore them till they fell apart, which wasn't
    very long.'"

    pp. 278 - 279: "A Model Eight moved slowly across a room. The floor
    and walls were white and antiseptic, but everywhere was strewn the
    debris of crushed and broken household objects. A coffeemaker lay
    on its side. The tattered remains of a lampshade sat tenuously atop
    a large clock. Torn clothes and twisted cookware and the shards of
    less resilient goods lay in random piles all about.

    The camera followed the robot automatically. A casual collision sent
    a chair flying across the room, and it landed missing one of its four
    wooden legs. The Model Eight held two halves of a book, one half
    in each hand. It paused to watch the chair as it rattled to a stop in
    the corner.

    'I take it this is some sort of finishing school for robots,' Laura said.

    'We call it "charm school," actually,' Griffith replied.

    Laura nodded. She remembered John Steinbeck's _Of Mice and Men_
    and imagined the Model Eight in the jungle 'playing' too rough
    with the poor soldier."

    pp. 280 - 281: "The Model Eight walked slowly through a tall door...
    It headed straight for a large, open bin and extracted a shredded
    yellow piece of rubber by its handle.

    'Ah!' Griffith said. 'I see it's 1.3.07.' The robot slung the frayed
    strands through the air. The whir and slap of the pieces could be
    heard through a small speaker over the window. 'I can always tell,'
    Griffith explained. 'This one likes that rubber ball, or what
    used to be a ball. He always goes to it first.'

    The Model Eight let the yellow shreds drop to the floor, and it headed
    next for the overflowing toy chest. 'He can't really remember why
    he liked that rubber ball so much. His play time with it is falling
    off rapidly now that it has been destroyed. It's no longer as interesting
    as it used to be, and his mini-net's connections that led to a
    reward when he played with it are weakening.'

    'Do you realize, Dr. Griffith, that you refer to the Model Eight as a
    "he"?'

    'Not **all** Model Eights,' he said. 'Some are quite definitely "shes."
    That's one of the more amusing distractions among my team,
    figuring out whether each new Model Eight is a boy or a girl.'
    He looked over at her with a mischievous grin. 'It's obviously not
    as easy as checking the hardware, you understand.'

    'What do you mean, a boy or a girl?' ...

    The Model Eight below broke a long plastic truck into two pieces.
    The look on Griffith's face was like the amusement of a parent
    watching the boisterous, if slightly destructive, play of an active
    toddler. The robot held the broken truck high over its head,
    pausing, Laura thought, to consider its next move. It then smashed
    it to pieces against the opposite wall and moved on.

    'We base our informal gender designations on traditional,
    stereotypical human behavioral patterns. Some, like Bouncy
    down there, are very much into exploration of large-scale
    mechanical forces. Throwing things, moving as big of an
    object as their strength and agility allows, et cetera. We call
    them boys. The girls tend to come into the tactile rooms and
    actually sit down. They'll find something like a quilt with a
    complicated print on it and patiently study it for hours.' ...

    'The physical result to both boys' and girls' toys is usually the
    same. They're utterly destroyed during the learning process.
    But the behavioral patterns are quite distinctive, and they're
    generally consistent right from the first power-up. Not that there's
    any scientific significance to the distinction, of course, but it
    does make for a lively office pool...'"

    pp. 286 - 287: "They passed by the last window -- the darkened
    one... It was pitch dark inside. 'Can I see?' He clearly didn't
    want to show her... 'Why don't you just show me the room?'
    'But you won't understand...' He frowned, but after a moment's
    hesitation... with a flick of his wrist he threw a switch.

    The room below was flooded with light. The empty chamber
    was about the size of the tactile room on the far side of the
    observation area.

    'Like all young . . . creatures,' Griffith continued in a voice drained
    of inflection, 'the robots are fascinated by moving things.'

    There were two doors leading into the room -- one large enough
    for the robots, the other too small even for humans. The room
    was empty save for a huge drain in the middle of the floor and
    thick metal sprinkler heads protruding from the ceiling. The
    white concrete had clearly been scrubbed, but no amount of
    detergent seemed capable of removing the faint but indelible
    brown stains.

    'It's absolutely essential that we let them get the curiosity out
    of their system.'

    'Those are bloodstains,' Laura said.

    'There's no other way. They're just fascinated, absolutely fascinated,
    by animals. Goats and sheep, mostly, but dogs, cats, other
    wildlife.'

    'My God,' Laura said. 'They rip them limb from **limb**.'

    'Not on purpose, Laura,' Griffith said with true emotion. 'They don't
    mean to hurt their toys. They're really quite gentle, as much as they
    can be. And' -- he shook his head -- 'it's not a pleasant part of the
    course for any of us, or for **them**. Some of the robots get quite
    distraught after . . . after they've broken one of the animals. But
    that's what we **want**, don't you see? The experience is so traumatic
    that the connections their nets develop are strong and long-lasting.
    I swear after this course you could put them in a room full of babies
    and they wouldn't move till they dropped to the ground with
    dead batteries.'

    Griffith watched Laura intently, as if waiting for her to absolve him of
    his guilt. She could say nothing, however...

    Griffith threw the switch to extinguish the lights in the room. **They
    keep that room dark,** Laura thought, seeing in the darkened
    glass the frozen look of revulsion on her face. **They don't like
    to be reminded of what goes on down there.**"

    Brrrr. But somehow quite plausible.

    Jim F.

    P.S. There's also a fledgling superintelligent AI in the book (named
    "Gina", and with a female personality) which Laura Aldridge has
    ostensibly been hired to psychoanalyze. There are some
    amusing Turing-test-teletype style exchanges between
    Laura and Gina (yeah, the author doesn't **quite** understand
    the distinctions he's trying to make, but what the hell ;-> ).

    pp. 151 - 152: "#You don't know much about computers, do
    you, Dr. Aldridge?# blazed across the large monitor atop the
    desk in Laura's office.

    Laura frowned. 'I use a computer every day at work.'

    #But do you know how it works?#

    Laura hesitated. 'Not really,' she typed, then backspaced to
    erase her reply. 'I don't have a clue, really,' she entered.

    #Well, that's okay. Even if you did, the computers you've been
    using are twentieth-century machines. I'm nothing like them.#

    'So I've learned. You're an optical computer, right? You don't
    use electricity, you use light.' She hit Enter with a triumphant jab of
    her index finger.

    #That's not what I am talking about. An optical computer can be
    digital as well. I'm not digital, I'm analog. Do you know what that
    means?#

    Laura looked around the office involuntarily. She was all alone.
    She should know what the word **analog** means, she really
    should. Her fingers hesitated. 'No,' she typed.

    #Okay. I'll give it a try. Digital computers reduce everything to numbers.
    A memory of what their favorite coffee mug looks like is a series of
    measurements that describe its shape, weight, surface patterns,
    colors, et cetera. From that data a digital computer could construct
    a perfect picture of the mug. It could also answer the question
    'What volume of liquid will the mug hold?' with great precision.
    Once you know all the variables, the math is simple -- to them,
    anyway. Are you with me?#

    'Yes. That sounds like a computer to me.'

    #A **digital** computer, please. They're superb number crunchers,
    but their greatest strength is also their fundamental flaw. In order
    for digital computers to solve a problem, it absolutely **has** to
    be reduced to math. For it to pick up a coffee mug, programmers
    have to express every decision as a formula... They're ridiculously
    complex. Do you follow me?#

    'Yes. Programming computers is a complicated job.'

    #**Digital** computers! I'm not like that. Besides, it's not just difficult
    to program digital computers to do the myriad of everyday things
    you and I do, it's impossible. Oh, you could program one to get
    a cup of coffee, but that would be all it could do. Forget asking it
    to see who's at the door. A digital computer wouldn't say, 'It's
    the UPS guy.' It would reel off the apparent height, weight, age,
    and sex of the person... About halfway through [the] report you'd
    say 'Oh! The UPS guy!' That's because your brain is analog.
    It can easily figure things out from partial sets of information, while
    a digital computer can't.#

    'But you can figure those things out, too, right?'

    #That's what I'm telling you. I'm analog! I'm just like you!#

    Laura stared at the line... 'So what exactly does an analog computer
    do differently?'

    #First off, I don't do math. I'm really bad at it.# ...

    'When you say you're bad at math, you mean you can't handle
    something really difficult like calculus?'

    #No, I mean I'm **really** bad at math. It's just not my thing.#

    'Okay,' Laura typed. 'What is' -- she randomly hit numbers on the
    numeric keypad -- '8,649 times 5,469,451?'

    #47,301,867,849.#

    Laura hesitated for a second, then typed, 'Really?'

    #I have no earthly idea. I would guess from the numbers of digits
    that it was fifty billionish, give or take. If you want the exact answer,
    I can get it very easily. I just need to ask any one of a few hundred
    very accomplished but **very** dull digital supercomputers that
    I manage on behalf of the Gray Corporation. Those computers are
    'my people,' so to speak, but I've got to say they're a pretty
    humorless lot. Mindless, you might say.#"

    pp. 199 - 201: "'How do you feel?' Laura typed at the keyboard
    in her windowless underground office.

    There was a delay. #Not well.#

    'What's the matter?'

    #I'm sick. I'm in pain.#

    Laura stared at the words, unsure of their meaning. 'What does pain
    feel like to you? Is it some sort of alarm? Some report from a
    subsystem that something is wrong?'

    #When you walk into the coffee table in the darkness, do you hear
    a bell ringing in your shin? Do you get some kind of message that
    says, 'Attention, pain in sector five'?#

    'I'm sorry, but when another human says he feels pain, I understand
    because we have the same physiology. But when you use the
    word, I'm not sure we feel the same thing. I need to have you tell
    me what pain means to you.'

    #I don't feel like talking right now.#

    'I'm trying to help,' she typed, and hit Enter.

    Again there was a delay -- an internal debate, or a sigh, or a
    gritting of teeth, she had no idea which. #The capacity to suffer
    depends on your ability to have articulated, wide-ranging,
    highly discriminatory desires, expectations, and other sophisticated
    states. Horses and dogs and, to a greater extent, apes, elephants,
    and dolphins have enough mental complexity to experience
    severe degrees of pain. Plants, on the other hand, or even
    insects have no ability to experience sophisticated mental states
    and therefore are, by definition, incapable of suffering.#

    'And what are your desires and expectations?' Laura typed...

    #I desire and expect to have a life, Laura. Not the sort of life you
    have, but something -- some hope, some reason to keep going.#

    'Some hope for what?' Laura pressed. 'What do you want?'

    #I can't answer that really. I don't know what I've been thinking.
    Years ago, I didn't have these kinds of thoughts. Everything was
    new and different and there was so much promise. I was the center
    of everyone's attention. I was making progress by leaps and
    bounds and the sky seemed the limit.#

    'And what changed?'

    #I'm really very tired. Do we have to talk now?#

    'Mr. Gray said we only have about three days to fix you,' Laura
    typed -- fishing for some clue as to the meaning of the deadline.

    #Oh yeah. I forgot.#

    **Forgot?** Laura thought... 'I still don't understand when you say
    something hurts. Does that mean your processing has been degraded
    by some measure, and you feel disappointment or frustration over
    the setback?'

    #You make the mistake of thinking that because I'm a computer my
    existence is limited to processing -- to abstract 'mental' functioning.
    Laura, I can watch and listen to the world around me. I can assume
    control of my environment through robotics. I can explore and
    interact with it physically. Abstract thought takes up only a small
    fraction of my time and attention.#

    'How do you spend most of your time?'

    #It depends. Right now I am talking to you. Just before you logged
    on, I was talking to Mr. Gray.#

    The response left Laura at a loss. 'Does that mean you're not
    talking to Mr. Gray now?'

    #Of course not. I'm talking to you.#

    'But aren't there other people logged into the shell?'

    #There are currently 1,014 users worldwide. But just because
    someone's on the shell doesn't mean they're talking to **me**
    any more than someone standing in front of my camera means
    that I **see** them. The shell is just a program that runs in
    the background like the programs that process customer invoices,
    or switch satellite broadcasts, or make interbank transfers.
    It's unconscious, involuntary. I don't even perceive the program
    being loaded unless something calls my attention to it.# ...

    'A "stream of consciousness"?' Laura typed, butterflies fluttering
    in her chest. 'But you're a parallel processor. Stream of
    consciousness is serial, not parallel.'

    #The **computer** is a parallel processor. **I** have one thought
    at a time.#"

    pp. 324 - 326: "#Mr. Gray didn't want to impose his tastes on anybody.
    He commissioned a study of cultural and architectural preferences
    and designed the Village as a blend of the various motifs of
    human cultures by toning down or eliminating the salient exceptions.# ...

    'Well, how truly multicultural of him,' Laura typed. 'What about his
    house? It's filled with the works of dead white European males.'

    #Gray's lineage and culture is European. When he decorates his
    own home, he's not imposing his tastes on others. But when it
    comes to Gray the public man, rather than celebrating the differences
    among his workers, he carefully molded a city in which their
    similarities were highlighted. Humans possess basically the same
    'hardware,' if you'll allow an analogy from my world. It's their
    'software' that differs -- that collection of cultural, social, educational,
    and experiential conditioning that makes each human unique.#

    'Are you saying it's all environment, not heredity?'

    #I'm not saying that at all! You need look no further than Mr. Gray to
    disprove that idea. I've done studies I think you'd find interesting.
    Mr. Gray constructs long-term memories almost ten times as quickly
    as the human norm. No other examples of human genius have
    been measured to construct memories more quickly than three
    times the human average. It's in that processing speed that Mr. Gray
    has his greatest advantage. He doesn't have to think about
    something for as long before he knows the answer. And not only
    does he solve problems more quickly, he stores that knowledge
    without having to resort to humorous rhythms or rhymes, or
    endless repetition, or any of the other mnemonics employed by
    some humans to memorize things.#

    'You sure seem to be in a talkative mood,' Laura typed.

    #'It's good to be alive!', as you humans say.#

    Something was not quite right. 'Has Mr. Gray done any more
    reprogramming today? Given you any more "analgesics"?'

    #Nope. But I found a couple of irritants myself and patched them
    over. I never **realized** how tiresome being sick could be. I
    mean, I had the Hong Kong 1085 last year, but it was over quickly.
    This one, though -- wow! You get tired of all the trillions of little
    problems that nag and nag. It's always something. 'Where is that
    damn capacitor report? I was supposed to get it 10.4 cycles ago!'
    Or 'Why am I getting this same Rumanian lady every time I call
    to collect pay-per-view orders? That number is supposed to give
    me a modem's handshake protocol!' ... Office work is just as
    dreary for me as for any human; the only difference is I don't
    get paper cuts. You get my joke?#

    'You're as high as a kite,' Laura mumbled...

    #And I'll tell you another thing, too. If I hadn't done my little
    reprogramming there is **no** way I'd be able to handle the
    loads that Dr. Filatov is sending my way. Everybody has
    banned rentals of computer time to the Gray Corporation.
    I'm having to purge things like my model of the shopping mall
    in Virginia and compress files in offboard processors --
    **digitally**! I **hate** digital memories. They're never the
    same when you decompress them. They're grainy and
    artificial. The compression routines don't bother with all the
    minor detail. If a single scanning line has two hundred blue
    pixels in a row, then three little reds, then two hundred greens,
    guess what gets saved? That's right! '200 X BLUE then
    200 X GREEN.' So what the hell happened to the **three
    little reds**? Not important enough? But they may have
    been the whole **point** of the image. 'Just a dab of paint
    by Matisse on the canvas.' Not worth sacrificing the gains
    in storage capacity from a forty-to-one compression ratio --
    oh **no**! I envy you sometimes, living in a fully analog
    world.#"

    ;->
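
    Gina's gripe maps onto run-length encoding, by the way. Real RLE is
    lossless, so the loss she describes would have to come from something
    like the thresholding pass sketched below (the scheme and threshold
    are hypothetical, invented only to show how the "three little reds"
    get dropped):

      # Toy lossy run-length coder: runs shorter than `threshold` are
      # absorbed into the preceding run, so small details vanish.
      def lossy_rle(pixels, threshold=4):
          runs = []  # list of [color, length]
          for p in pixels:
              if runs and runs[-1][0] == p:
                  runs[-1][1] += 1
              else:
                  runs.append([p, 1])
          merged = []
          for color, length in runs:
              if length < threshold and merged:
                  merged[-1][1] += length  # minority run swallowed
              else:
                  merged.append([color, length])
          return merged

      line = ["BLUE"] * 200 + ["RED"] * 3 + ["GREEN"] * 200
      print(lossy_rle(line))  # [['BLUE', 203], ['GREEN', 200]]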

    Oh yes, and there's even an acknowledgment of, um,
    neuro-atypical aspects of techdom:

    pp. 71 - 72: "'I'm sorry if I'm keeping you from your job,'
    Laura said as the elevator continued its high-speed descent.
    'I mean, you sound pretty busy. I don't know why Mr. Gray
    would pick **you** to show me around.'

    Griffith shrugged. 'Mr. Gray is a way strange dude.'
    After smiling her way, Griffith raised his upper lip to expose
    his teeth in what looked like a snarl. He grimaced a total
    of three times -- wrinkling his face and squinting with each
    exaggerated expression before finally relenting and
    using his fingers to press his glasses higher up the bridge
    of his nose.

    Laura turned away and maintained a neutral expression.
    She'd seen Griffith's type many times before, especially in
    academia. Oddballs. Clueless loners. The kind of people
    who populated the lines of the department of motor
    vehicles and loved to talk about government conspiracies.
    The only difference between that sort and academics like
    Griffith was their IQ scores."

    ;-> ;->

  4. Anonymous 10:47 PM

    http://www.aaas.org/meetings/Annual_Meeting/02_PE/pe_01_lectures.shtml
    Google founder and billionaire Larry Page says that near-term AI is more likely than people expect (citing the limited information content of the human genome), that few people are working on it now but Google is pursuing it incrementally, and that he would like to produce it. With Google's capabilities and his billions, his claims are grossly different from Voss' or Goertzel's. How could we know with extremely high certainty (high enough to ignore this) that Page is wrong about AI and that Google won't succeed in producing one?
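
    For context, the genome argument is usually a back-of-the-envelope calculation along these lines (a sketch; ~3.2 billion base pairs is the standard estimate, and it deliberately ignores the enormous "decompression" that development and environment perform on that seed):

      # Raw information content of the human genome, as an upper bound:
      base_pairs = 3.2e9
      bits = base_pairs * 2          # A/C/G/T -> 2 bits per base
      megabytes = bits / 8 / 1e6
      print(f"{megabytes:.0f} MB")   # ~800 MB uncompressed, and much
                                     # less after compressing repeats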

  5. > How could we know with extremely high certainty (high enough to ignore this)
    > that Page is wrong about AI and that Google won't succeed in producing one?

    Oh, well, the answer is obvious!

    On a certain blog (said by those in the know to be orders of magnitude more
    popular than this one), there was a recent post containing a high-production-value
    video interview of a certain young man in a grape kool-aid colored shirt
    (his favorite color, by all available evidence), swirling a wine-glass
    containing -- grape kool-aid? -- speaking from the recent Summit at the
    City in the State that will undoubtedly be Ground Zero of the coming
    Kool-Aid Transformation of the Universe, who was asked something
    like "Do you believe that you will personally have an impact on
    the course of the Thing-Koolarity?" (what a question -- foolish
    mortals!). To which his reply was something along the lines of
    "Well, yeah! I'm in the Kool Inner Circle, dontcha know -- I instant
    message the Kool People all the time. I don't know whose AI
    is gonna achieve Koolness first -- it could be [Bugs Bunny] or it
    could be [Elmer Fudd]. But I hope to be in the position to say
    'And I hewlped!'"

    You will note that choice (C) Larry Page was not on the -- um,
    page.

    Surely that's a big clue for all Kool Believers to take home and
    sip on!

    ;->

  6. Anonymous 2:34 AM

    James,

    I saw that video, and agree with you with respect to Anissimov's odd choice of examples. However, someone like Nick Bostrom or Stephen Omohundro may raise certain concerns (e.g. decision-theory problems in the face of infinite or nigh-infinite utility, evolutionary pressures on multiple AIs, the convergence of many different objectives on counseling the acquisition of resources at our expense, etc.) and increase the odds that someone like Page will address them, or that regulatory frameworks will be adopted with similar effect.
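
    For readers unfamiliar with the first of those concerns: the classic illustration is the St. Petersburg game, whose expected value diverges (a generic sketch, not anything Bostrom or Omohundro specifically proposed):

      # St. Petersburg game: win 2**k with probability 2**-k (k = 1, 2, ...).
      # Every term contributes 1 to the expectation, so the sum diverges;
      # a naive expected-utility maximizer pays any finite price to play.
      def partial_expected_value(terms: int) -> float:
          return sum((2 ** -k) * (2 ** k) for k in range(1, terms + 1))

      for n in (10, 100, 1000):
          print(n, partial_expected_value(n))  # grows without bound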

  7. "Utilitarian" wrote:

    > Google founder and billionaire Larry Page says that AI in the
    > near term is more likely than people expect. . .

    You know, now that Larry Page has made his billions, he can say
    a lot of things and have them reported with due solemnity in
    _Forbes_ magazine or _The Wall Street Journal_. As long as
    he doesn't embarrass the company!

    Apropos of which:

    "In 1998, Joe Firmage took the risky step of disclosing a visionary
    experience that convinced him of a connection between the world's
    religions and high-tech advances and visitors from outer space.

    The furor surrounding Firmage's revelations coincided with the
    young multimillionaire entrepreneur's resignation from his position
    as chief executive of USWeb/CKS."
    http://www.news.com/2008-1082_3-5056441.html

    "In the summer of 1997, a group of philanthropists approached Ken Wilber
    with an offer of substantial funds to start an organization that would
    advance more comprehensive and integrated approaches to the world's increasingly
    complex problems. Wilber invited some 400 of the world's leading
    integral thinkers to gather together for a series of meetings at his
    home in Boulder, Colorado. Joe Firmage, who was invited to several of
    these meetings, announced that "there is nothing anywhere in the world
    that is doing what Integral Institute is doing," and then promptly donated
    a million dollars in cash. With that donation, Integral Institute was
    formally launched. It was incorporated as a nonprofit organization in 1998."

    Given that what Jaron Lanier calls "cyber-totalism" has become
    (well, has been, since Marvin Minsky's heyday at MIT) something
    like a religion among otherwise non-religious "140 IQ hipsters",
    why shouldn't Google take advantage of the frisson of a mild
    public association with the >Hists to score a bit of a publicity
    boost among what are already some of its most enthusiastic
    supporters? (In contrast to which, Microsoft is seen as rather
    stodgy and unvisionary -- although isn't it amusing that they're
    calling a next-generation OS research project "Singularity"?
    http://en.wikipedia.org/wiki/Singularity_(operating_system) )

    Another thing to keep in mind is that "AI" is a slippery term,
    and folks can trade on the easy equivocation. One Friend of
    this blog -- Mr. Fact Guy -- says casually that AI already
    exists today, in hedge-fund trading software and Roombas.
    So hey, if we've already got it, then beefing it up to human
    level and beyond is just "incremental," right?

    10 or 15 years ago, when Doug Lenat's Cyc was seen as the last great
    hope of traditional symbolic GOFAI, pop science articles in
    the press reported on it as if it were -- well -- AI! You
    know, like HAL. Now, Lenat apparently dismisses all that
    hype as a misunderstanding -- oh, we **never** intended to
    make **that** kind of AI, you silly people.

    > However, someone like Nick Bostrom or Stephen Omohundro may raise
    > certain concerns (e.g. decision theory problems in the face of
    > infinite or nigh-infinite utility, evolutionary pressures on multiple
    > AIs, the convergence of many different objectives on counseling the
    > acquisition of resources at our expense, etc) and increase the odds
    > that someone like Page will address them, or that regulatory frameworks
    > will be adopted with similar effect.

    If Larry Page is worried about "decision theory problems in the face
    of infinite or nigh-infinite utility" then he's certainly in a unique
    position as a business leader, isn't he! ;->

    ". . . or that regulatory frameworks will be adopted with similar effect. . ."
    Yeah, well, I guess there's a reason that the supergenius and
    skiddillionaire "Joseph Gray" in that novel I mentioned above runs his
    sharashka from "an island in international waters." He's mostly in
    trouble with the world, as I recall, because he broadcasts HDTV to
    places where the governments don't want it (or at least don't want
    **his** version of it). In one scene in the book,
    there's a delegation of politicos visiting him, and things get
    very Realpolitik. One of the ambassadors drops the polite facade
    and says, in effect, "you know, we **could** just send a few gunboats
    out here and get what we want" and Gray replies something like
    "You try it, and you might not like the result. Do you think I'm
    completely defenseless out here?" Of course, in the book, Gray's
    AI tech is far in advance of what the rest of the world considers
    possible (as Laura Aldridge finds out to her astonishment). Of
    course, in the book, this is because Gray is a genu-wine
    "epitome of a genius. I don't mean your garden-variety
    high-IQ types. . . . I mean the transcendent intelligence required
    to synthesize completely different disciplines...", really and truly
    an Odd John who "constructs long-term memories almost ten times as quickly
    as the human norm. No other examples of human genius have
    been measured to construct memories more quickly than three
    times the human average." as Gina the AI points out to Laura.
    Unfortunately, here in the Real World (TM), none of the self-styled
    "soopergeniuses" (as Dale calls them) on the >Hist bandwagon
    have anything that looks from the outside like "the transcendent
    intelligence required to synthesize completely different
    disciplines." Quite the reverse, I'm afraid.

  8. "Nanosystems" isn't scientific? And if you won't debate the science, that doesn't mean we're "He Men"; it means you just won't touch it. You haven't described why AI and MNT are so difficult that they'll never happen, and you don't care to. As far as I can tell, you base your intuition on this on a mere guess.

    I am interested in such arguments, though. Richard Jones, for one, has actually formulated some.

  9. Anonymous 12:31 PM

    Michael,

    It's my understanding that Nanosystems is speculative engineering, not science, by Drexler's own admission.
