Thursday, June 04, 2015

Deep Unlearning

Computation, whether networked or not, raises a host of complex problems and possibilities that demand critical analysis and serious policy deliberation, to be sure, but not one of these has ever been, nor will ever be, clarified by the customary recourse to the deceptive, deranging figure "artificial intelligence" or its various promotional repackagings, of which the latest -- and one of the worst -- is the frankly insulting "deep learning."

4 comments:

  1. Ted Nelson keynotes Homebrew reunion, Dec 2013
    https://www.youtube.com/watch?v=9-Ma2MZpUQQ
    (the relevant passage begins at 5:26 of 27:11)

    This was the issue of Time magazine for January 23, 1950 --
    I was twelve, and I just found the picture on the Web.
    I was puzzled by the article. Here's how the article started --
    Time will give you the first couple paragraphs for free.
    "On Oxford Street in Cambridge, Mass. lives a sibyl.
    A priestess of science." Skipping a little. "She is a long,
    slim, glass-sided machine with 760,000 parts, and
    the riddles that are put to her, and that she unfailingly
    answers, concern such matters as rocket motors, nuclear
    physics, and trigonometric functions." I could not fathom
    how a machine could do all that. A year and a half later,
    in the fall of 1951, my grandfather and I went to an exhibit
    of da Vinci models at the IBM showroom on Madison Avenue,
    and there we walked through the IBM SSEC -- the Selective
    Sequence Electronic Calculator. As I recall, its ten thousand
    or twelve thousand vacuum tubes glowed blue. But I still
    didn't know what the hell it was about. Or that "Selective Sequence"
    meant that they had just added branching instructions.

    Nine years later, in graduate school, I took a computer course
    and went crazy. Everything I'd heard about computers was a
    **lie**! They weren't mathematical. They weren't scientific.
    They were electric trains you could run in circles.

  2. > This was the issue of Time magazine for January 23, 1950

    I looked this up. The caption underneath the artist's
    anthropomorphized rendition of the (Harvard Mark III)
    computer is
    "Can man build a superman?". ;->

    http://img.timeinc.net/time/magazine/archive/covers/1950/1101500123_400.jpg
    http://www.historyofinformation.com/expanded.php?id=2284
    http://en.wikipedia.org/wiki/Harvard_Mark_III

  3. Well, well. Speaking of supermen, MIRI has a new fearless --
    I was about to say leader, but "nominal chief executive"
    is probably more accurate.

    http://lesswrong.com/lw/ma4/taking_the_reins_at_miri/
    ------------------
    Hello, I'm Nate Soares, and I'm pleased to be taking the reins at
    MIRI on Monday morning.

    For those who don't know me, I've been a research fellow at MIRI
    for a little over a year now. I attended my first MIRI workshop
    in December of 2013 while I was still working at Google, and was
    offered a job soon after. Over the last year, I wrote a
    dozen papers, half as primary author. Six of those papers
    were written for the MIRI technical agenda. . .

    I've always had a natural inclination towards leadership: in the past,
    I've led a F.I.R.S.T. Robotics team, managed two volunteer theaters,
    served as president of an Entrepreneur's Club, and co-founded
    a startup or two. . .

    The last year has been pretty incredible. Discussion of long-term AI
    risks and benefits has finally hit the mainstream, thanks to the
    success of Bostrom's _Superintelligence_. . .
    ====

    http://lesswrong.com/user/so8res/
    http://mindingourway.com/minding-our-way-to-the-heavens/


    http://lukemuehlhauser.com/f-a-q-about-my-transition-to-givewell/
    ------------------
    Why did you take a job at GiveWell?

    Apparently some people think I must have changed my mind about what
    I think Earth’s most urgent priorities are. So let me be clear:
    Nothing has changed about what I think Earth’s most urgent
    priorities are.

    I still buy the basic argument in "Friendly AI research as
    effective altruism."

    I still think that growing a field of technical AI alignment
    research, one which takes the future seriously, is plausibly
    the most urgent task for those seeking a desirable long-term
    future for Earth-originating life.

    And I still think that MIRI has an incredibly important role
    to play in growing that field of technical AI alignment research.

    I decided to take a research position at GiveWell mostly for
    personal reasons.
    ====

    YMMV. Holden Karnofsky is less sanguine, about MIRI at
    any rate.

  4. In Richard Jones' latest article on his Soft Machines blog,
    "Does Transhumanism Matter?", he wrote:

    http://www.softmachines.org/wordpress/?p=1607
    -------------
    To many observers with some sort of scientific background. . .
    the worst one might say about transhumanism is that it is mostly harmless,
    perhaps over-exuberant in its claims and ambitions, but beneficial
    in that it promotes a positive image of science and technology.

    But there is another critique of transhumanism, which emphasises not
    the distance between transhumanism’s claims and what is technologically
    plausible . . . but the continuity between the way transhumanists talk
    about technology and the future and the way these issues are talked
    about in the mainstream. In this view, transhumanism matters, not
    so much for its strange ideological roots and shaky technical foundations,
    but because it illuminates some much more widely held, but pathological,
    beliefs about technology.
    ====

    You can't get more mainstream than Time magazine.

    That January 23, 1950 issue of Time that Ted Nelson
    (of hypertext/Computer Lib/Xanadu fame) recalled seeing
    when he was 12 years old, whose cover shows an
    artist's rendition of an anthropomorphized Harvard Mark III
    computer with the caption "Can Man Build a Superman?",
    contains passages not all that dissimilar from the articles
    in the mainstream media today amplifying the frettings of
    Nick Bostrom, Elon Musk, Bill Gates, Stephen Hawking,
    and others.

    This has been going on for a **long** time.

    http://www.historyofinformation.com/expanded.php?id=2284
    -------------
    "What Is Thinking? Do computers think? Some experts say yes,
    some say no. Both sides are vehement; but all agree that the answer
    to the question depends on what you mean by thinking.

    "The human brain, some computermen explain, thinks by judging present
    information in the light of past experience. That is roughly what
    the machines do. They consider figures fed into them (just as
    information is fed to the human brain by the senses), and measure
    the figures against information that is "remembered." The machine-radicals
    ask: 'Isn't this thinking?' . . .

    "Nearly all the computermen are worried about the effect the machines
    will have on society. But most of them are not so pessimistic as
    [Norbert] Wiener. . .

    "Psychotic Robots.

    In the larger, "biological" sense, there is room for
    nervous speculation. Some philosophical worriers suggest that the computers,
    growing superhumanly intelligent in more & more ways, will develop wills,
    desires and unpleasant foibles of their own, as did the famous
    robots in Capek's R.U.R.

    "Professor Wiener says that some computers are already "human" enough
    to suffer from typical psychiatric troubles. Unruly memories, he says,
    sometimes spread through a machine as fears and fixations spread through
    a psychotic human brain. Such psychoses may be cured, says Wiener,
    by rest (shutting down the machine), by electric shock treatment
    (increasing the voltage in the tubes), or by lobotomy (disconnecting
    part of the machine).

    "Some practical computermen scoff at such picturesque talk, but others recall
    odd behavior in their own machines. Robert Seeber of I.B.M.
    says that his big computer has a very human foible: it hates to wake
    up in the morning. . .

    "Neurotic Exchange.

    Bell Laboratories' Dr. [Claude] Shannon has a similar story. During
    World War II, he says, one of the Manhattan dial exchanges (very similar
    to computers) was overloaded with work. It began to behave queerly,
    acting with an irrationality that disturbed the company. Flocks of
    engineers, sent to treat the patient, could find nothing organically
    wrong. After the war was over, the work load decreased. The ailing
    exchange recovered and is now entirely normal. Its trouble had been
    'functional': like other hard-driven war workers, it had suffered
    a nervous breakdown"
    ====

    "Picturesque" indeed.
