Sunday, December 06, 2015

Three Fronts in the Uploading Discussion -- A Guest Post by Jim Fehlinger

Longtime friend and friend-of-blog Jim Fehlinger posted a cogent summarizing judgment (though by no means a concluding one) of the Uploading discussion that's been playing out in this Moot non-stop for days. I thought it deserved a post of its own. In the Moot to this post, I've re-posted my responses to his original comments to get the ball rolling again. I've edited it only a very little, for continuity's sake, but the link above will take the wary to the originals.--d

It strikes me that this conversation (/disagreement) has been proceeding along three different fronts (with, perhaps, three different viewpoints) that have not yet been clearly distinguished:

1. Belief in/doubts about GOFAI ("Good Old-Fashioned AI") -- the 50's/60's Allen Newell/Herbert Simon/Seymour Papert/John McCarthy/Marvin Minsky et al. project to replicate an abstract human "mind" (or salient aspects of one, such as natural-language understanding) by performing syntactical manipulations of symbolic representations of the world using digital computers. The hope initially attached to this approach to AI has been fading for decades. Almost a quarter of a century ago, in the second edition of his book, Hubert Dreyfus called GOFAI a "degenerating research program":
Almost half a century ago [as of 1992] computer pioneer Alan Turing suggested that a high-speed digital computer, programmed with rules and facts, might exhibit intelligent behavior. Thus was born the field later called artificial intelligence (AI). After fifty years of effort [make it 70, now], however, it is now clear to all but a few diehards that this attempt to produce artificial intelligence has failed. This failure does not mean this sort of AI is impossible; no one has been able to come up with a negative proof. Rather, it has turned out that, for the time being at least, the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed. Indeed, what John Haugeland has called Good Old-Fashioned AI (GOFAI) is a paradigm case of what philosophers of science call a degenerating research program.

A degenerating research program, as defined by Imre Lakatos, is a scientific enterprise that starts out with great promise, offering a new approach that leads to impressive results in a limited domain. Almost inevitably researchers will want to try to apply the approach more broadly, starting with problems that are in some way similar to the original one. As long as it succeeds, the research program expands and attracts followers. If, however, researchers start encountering unexpected but important phenomena that consistently resist the new techniques, the program will stagnate, and researchers will abandon it as soon as a progressive alternative approach becomes available.
[That research program i]s still degenerating, as far as I know.

Dale and I agree in our skepticism about this one. Gareth Nelson, it would seem (and many if not most >Hists, I expect) still holds out hope here. I think it's a common failing of computer programmers. Too close to their own toys, as I said before. ;->
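
(For anyone who has never watched the "rules and facts" style up close, here is a cartoon of it -- a minimal Python sketch of my own, not any particular GOFAI system -- of what "syntactical manipulations of symbolic representations" cashes out as in the very simplest case: facts are tuples of symbols, a rule rewrites them, and "inference" is nothing but pattern matching run to a fixed point. The GOFAI wager was that enough of this, suitably organized, would amount to intelligence.)

    # Toy "rules and facts" forward chaining -- an illustrative cartoon only.
    facts = {
        ("parent", "alice", "bob"),
        ("parent", "bob", "carol"),
    }

    def grandparent_rule(facts):
        """parent(X, Y) and parent(Y, Z)  =>  grandparent(X, Z)"""
        derived = set()
        for (rel1, x, y1) in facts:
            for (rel2, y2, z) in facts:
                if rel1 == "parent" and rel2 == "parent" and y1 == y2:
                    derived.add(("grandparent", x, z))
        return derived

    # Keep applying the rule until no new symbolic facts appear.
    while True:
        new = grandparent_rule(facts) - facts
        if not new:
            break
        facts |= new

    print(("grandparent", "alice", "carol") in facts)   # True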

2. The notion that, even if we jettison the functionalist/cognitivist/symbol-manipulation approach of GOFAI, we still might simulate the low-level dynamic messiness of a biological brain and get to AI from the bottom up instead of the top down. Like Gerald Edelman's series of "Darwin" robots or, at an even lower and putatively more biologically-accurate level, Henry Markram's "Blue Brain" project.

Gareth seems to be on-board with this approach as well, and says somewhere above that he thinks a hybrid of the biological-simulation approach and the GOFAI approach might be the ticket to AI (or AGI, as Ben Goertzel prefers to call it).

Dale still dismisses this, saying that a "model" of a human mind is not the same as a human mind, just as a picture of you is not you.

I am less willing to dismiss this on purely philosophical grounds. I am willing to concede that if there were digital computers fast enough and with enough storage to simulate biological mechanisms at whatever level of detail turned out to be necessary (which is something we don't know yet) and if this sufficiently-detailed digital simulation could be connected either to a living body with equally-miraculously (by today's standards) fine-grained sensors and transducers, or to a (sufficiently fine-grained) simulation of a human body immersed in a (sufficiently fine-grained) simulation of the real world -- we're stacking technological miracle upon technological miracle here! -- then yes, this hybrid entity with a human body and a digitally-simulated brain, I am willing to grant, might be a good-enough approximation of a human being (though hardly "indistinguishable" from an ordinary human being, and the poor guy would certainly find verself playing a very odd role indeed in human society, if ve were the first one). I'm even willing to concede (piling more miracles on top of miracles by granting the existence of those super-duper-nanobots) the possibility of "uploading" a particular human personality, with memories intact, using something like the Moravec transfer (though again, the "upload" would find verself in extremely different circumstances from the original, immediately upon awakening). This is still not "modelling" in any ordinary sense of the word in which it occurs in contemporary scientific practice! It's an as-yet-unrealized (except in the fictional realm of the SF novel) substitution of a digitally-simulated phenomenon for the phenomenon itself (currently unrealized, that is, except in the comparatively trivial case in which the phenomenon is an abstract description of another digital computer).

However, I am unpersuaded, Moravec and Kurzweil and their fellow-travellers notwithstanding, that Moore's Law and the "acceleration of technology" are going to make this a sure thing by 2045. I am not even persuaded that we know enough to be able to predict that such a thing might happen by 20450, or 204500, whether by means of digital computers or any other technology, assuming a technological civilization still exists on this planet by then.

The physicist Richard C. Feynman, credited as one of the inventors of the idea of "nanotechnology", is quoted as having said "There's plenty of room at the bottom." Maybe there is. Hugo de Garis thinks we'll be computing using subatomic particles in the not too distant future! If they're right, then -- sure, maybe all of the above science-fictional scenarios are plausible. But others have suggested that maybe, just maybe, life itself is as close to the bottom as our universe permits when it comes to, well, life-like systems (including biologically-based intelligence). If that's so, then maybe we're stuck with systems that look more-or-less like naturally-evolved biochemistry.

3. Attitudes toward the whole Transhumanist/Singularitarian mishegas. What Richard L. Jones once called the "belief package", or what Dale commonly refers to as the three "omni-predicates" of >Hist discourse: omniscience=superintelligence; omnipotence=super-enhancements (including super-longevity); omnibenevolence=superabundance.

This is a very large topic indeed. It has to do with politics, mainly the politics of libertarianism (Paulina Borsook, Cyberselfish; Barbrook & Cameron, "The Californian Ideology"), religious yearnings (the "Rapture of the Nerds"), cult formation (especially sci-fi tinged cults, such as Ayn Rand's [or Nathaniel Branden's, if you prefer] "Objectivism", L. Ron Hubbard's "Scientology", or even Joseph Smith's Mormonism!), psychology (including narcissism and psychopathy/sociopathy), and other general subjects. Very broad indeed!

Forgive me for putting it this insultingly, but I fear Gareth may still be savoring the Kool-Aid here.

Dale and I are long past this phase, though we once both participated on the Extropians' mailing list, around or before the turn of the century. When we get snotty (sometimes reflexively so ;->), it's the taste of the Kool-Aid we're reacting to, which we no longer enjoy, I'm afraid.

23 comments:

  1. Gareth... says somewhere above that he thinks a hybrid of the biological-simulation approach and the GOFAI approach might be the ticket to AI (or AGI, as Ben Goertzel prefers to call it). Dale still dismisses this, saying that a "model" of a human mind is not the same as a human mind, just as a picture of you is not you.

    You may be right that I am more skeptical than you are on this second question -- I am not sure, your formulation seems pretty congenial after a first read -- all I would say is that the context for all this was the futurological conceit of uploading in particular, and I do indeed still regard that notion as too incoherent in principle to draw any comfort from the points you are making.

    Even if, as Gareth seems to be implying, there is a "weak" uploading project in which good-enough simulations can replace people for an (insensitive enough?) audience apart from a "strong" uploading project in which some sort of info-souls are somehow translated/ migrated and thus, again somehow, immortalized digitally, I think both notions are bedeviled by conceptual and rhetorical and political nonsense rendering them unworthy of serious consideration (except as sfnal conceits doing literary kinds of work). I am not sure anybody but Gareth actually maintains this strong/weak distinction quite the way he seems to do, and I'm not sure his endorsement of the weak version doesn't drift into the strong version in any case in its assumptions and aspirations.

    Dale and I are long past this phase, though we once both participated on the Extropians' mailing list, around or before the turn of the century. When we get snotty... it's the taste of the Kool-Aid we're reacting to, which we no longer enjoy, I'm afraid.

    Even back in '93 I was hate-reading Extropians -- I once thought/hoped James Hughes' socialist strong reading of transhumanism might yield a useful technoprogressivism, but boy was I wrong to hold out that hope! I will admit that as an avid sf reader with a glimpse of the proto-transhumanoid sub(cult)ure via the L5 Society and Durk Pearson and Sandy Shaw I was a bit transhumanish at age eleven or so -- with a woolly sense that longevity medicine and nano/femto-superabundance should be the next step after the Space Age. The least acquaintance with consensus science disabused me of that nonsense. It's a bit like the way first contact, pretty much in my first term away from home in college, with comparative religion made me a cheerful atheist and confrontation with an actually diverse world made the parochial pieties of market ideologies instantly hilarious.

  2. > [C]omputer programmers [are t]oo close to their own toys,
    > as I said before.

    There's a rather unflattering portrait of computer programmers
    in a 1976 book by Joseph Weizenbaum, in a chapter entitled
    "Science and the Compulsive Programmer", that I excerpted at
    length in the comment thread of:
    http://amormundi.blogspot.com/2008/03/giulio-demands-clarifications-and-i.html

    > [The GOFAI research program i]s still degenerating, as far as I know.

    To be fair, not everybody accepts this characterization,
    even to this day.

    E.g., there's a book available to browse on Google Books,
    _Mind as Machine: A History of Cognitive Science, Volume 1 & 2_
    by Margaret A. Boden (Oxford University Press, 2006):
    --------------
    11.ii Critics and Calumnies
    https://books.google.com/books?id=nQMPIGd4baQC&pg=PA838&lpg=PA838&dq=%22Mind+as+Machine%22+%22Hubert+Dreyfus%22&source=bl&ots=Ggz_SzjkCC&sig=pmyVqbV_qSc0APYxXc6z4GGeG6Q&hl=en&sa=X&ved=0ahUKEwjJgd-DuMfJAhXESiYKHct-CcwQ6AEIHDAA#v=onepage&q=%22Mind%20as%20Machine%22%20%22Hubert%20Dreyfus%22&f=false

    "The 1970s critique of GOFAI was no mere intellectual exercise,
    confined to the academy; it was a passion-ridden cultural phenomenon
    too." ;->

    13.vii: CODA
    https://books.google.com/books?id=4BAGY-UR2xEC&pg=PA1108&dq=%22Mind+as+Machine%22+%22Imre+Lakatos%22&hl=en&sa=X&ved=0ahUKEwj7567EuMfJAhXCbiYKHarWDfoQ6AEIJjAA#v=onepage&q=%22Mind%20as%20Machine%22%20%22Imre%20Lakatos%22&f=false

    "Clearly, then, the charge that GOFAI is 'a degenerative research programme'
    is mistaken. Blay Whitby has put it in a nutshell:

    'A myth has developed that AI has failed as a research programme.
    This myth is prevalent **both inside and outside** AI and related
    scientific enterprises. In fact AI is a remarkably successful
    research programme which has delivered not only scientific insight
    but a great deal of useful technology.' . . .

    In sum, GOFAI hasn't 'failed' and it hasn't ceased either. Today's
    AI champions can still produce defensive rhetoric that makes
    one wince. . .

    As Mark Twain might have said, the rumours of its death are
    greatly exaggerated. . ."
    ====

    YMMV. Stay tuned. ;->

    > . . .Richard C. Feynman. . .

    Richard P. Feynman

    > . . .Richard L. Jones. . .

    Richard A. L. Jones

    I should check these things more carefully. My memory isn't what
    it used to be. ;->

  3. I used to be a transhumanist, but the more I was exposed to history, philosophy and science, the more I added the caveat of "if we can develop the technologies" -- and then I realised what a pointless exercise it was, and that I could still have innovation, creativity and development without needing capitalism and absolutism.

  4. The state of the art, 2015:

    _The Brain_ with David Eagleman
    Episode 6, Attempting to Create Artificial Intelligence
    http://video.pbs.org/video/2365575367/

    I'm sorry, I don't know what this is.

  5. The state of the art in 1956:

    ROBBIE THE ROBOT & Dr. EDWARD MORBIUS 1956
    https://www.youtube.com/watch?v=a63i4rGZ1ts

    (Chattier than the space robots were in 1951.
    Klaatu barada nikto!)

    BTW, I am not a robot. I just had to click a checkbox
    and pass a little test to prove that to myself. ;->

  6. 'A myth has developed that AI has failed as a research programme.
    This myth is prevalent **both inside and outside** AI and related
    scientific enterprises.


    If it can't fail but can only be failed, what you have on your hands, ladies and gentlemen, is an ideology.

  7. > GOFAI. . . the. . . project to replicate an abstract human "mind". . .
    > by performing syntactical manipulations of symbolic representations
    > of the world using digital computers. . .
    >
    > [Or alternatively] we. . . might simulate the low-level dynamic
    > messiness of a biological brain and get to AI from the bottom up
    > instead of the top down. . .

    There is perhaps a "third way" (the phrase reminds me of "Third Force"
    [clinical] psychology, a term popularized in the 70s for
    alternatives to both Behaviorism and Psychoanalysis ;-> ).

    Here's one such advocate:

    http://www.richardloosemore.com/
    --------------
    Artificial General Intelligence
    How to Build an AGI — and How Not To
    May 11 2015

    . . .

    Before we start, let’s get a couple things out of the way:

    1. If you think the current boom in Big Data/Deep Learning
    means that we are on the glide path to AGI... well, please, just don’t.

    2. There seems to be a superabundance of armchair programmers who
    know how to do AGI...

    Artificial Intelligence research is at a very bizarre stage in its history.

    Almost exclusively, people who do AI research were raised as mathematicians
    or computer scientists. These people have a belief — a belief so strong it
    borders on a religion — that AI is founded on mathematics, and the best
    way to build an AI is to design it as a formal system. They will also tell
    you that human intelligence is just a ridiculously botched attempt by
    nature to build something that should have been a logico-mathematical intelligence.

    Mathematicians are the high priests of Science, so we tend to venerate
    their opinion. But I’m here to tell you that on this occasion they could
    turn out to be spectacularly wrong.

    There are reasons to believe that a complete, human-level artificial
    general intelligence cannot be made to work if it is based on a formal,
    mathematical approach to the problem.

    Don’t confuse that declaration with the kind of slogans produced by luddites:
    "No machine will ever be creative..." "No machine will ever be conscious..."
    "Machines can only do what they are programmed to do". Those slogans are
    driven by lack of understanding or lack of imagination.

  8. The declaration I just made is... based on an argument...
    Here is my attempt to pack it into a single paragraph:

    There is a type of system called a “complex system,” whose
    component parts interact with one another in such a horribly tangled
    way that the overall behavior of the system does not look even remotely
    connected to what the component parts are doing. (You might think
    that the “overall behavior” would therefore just be random, but that
    is not the case: the overall behavior can be quite regular and lawful.)
    We know that such systems really do exist because we can build them
    and study them, and this matters for AI because there are powerful
    reasons to believe that all intelligent systems must be complex systems.
    If this were true it would have enormous implications for AI
    researchers: it would mean that if you try to produce a mathematically
    pure, formalized, sanitized version of an intelligence, you will
    virtually guarantee that the AI never gets above a certain level
    of intelligence. . .

    As you can imagine, this little argument tends to provoke a hot reaction
    from AI researchers.

    That is putting it mildly. One implication of the complex systems problem
    is that the skills of most AI researchers are going to be almost worthless
    in a future version of the AI field — and that tends to make people very
    angry indeed. There are some blood-curdling examples of that anger,
    scattered around the internet. . .
    ====
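
    Loosemore gives no code, of course, but here's a minimal,
    self-contained illustration (mine, not his) of the kind of
    "complex system" he's describing: Conway's Game of Life, in
    Python. The update rule is purely local -- each cell consults
    only its eight neighbours -- yet a five-cell "glider" behaves
    in a perfectly regular, lawful way, copying itself one cell
    diagonally every four generations, a regularity stated nowhere
    in the rule itself.
    --------------
    from collections import Counter

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Alive next generation: exactly 3 live neighbours, or
        # 2 live neighbours for a cell that is already alive.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = set(glider)
    for generation in range(1, 13):
        state = step(state)
        if generation % 4 == 0:
            shift = generation // 4
            # Lawful overall behaviour nowhere stated in the local rule:
            # the glider has translated (shift, shift) cells diagonally.
            assert state == {(x + shift, y + shift) for (x, y) in glider}
    print("glider moved one cell diagonally every four generations")
    ====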

    One example of that anger is an infamous episode that occurred 9 years
    ago, on Eliezer Yudkowsky's "SL4" mailing list, a link to the
    culmination of which is still the 3rd-ranked result of a Google search
    on this gentleman's name.

    Cutting Loosemore: bye bye Richard
    http://www.sl4.org/archive/0608/15895.html

    See also the comment thread in
    http://amormundi.blogspot.com/2013/01/a-robot-god-apostles-creed-for-less.html

    Loc. cit.
    --------------
    History, if the human race survives long enough to write it, will say
    that this was bizarre because, in fact, we probably could have built a
    complete, human-level AGI at least ten years ago (perhaps even as
    far back as the late 1980s). And yet, because nobody could risk their
    career by calling out the queen for her nakedness, nothing happened. . .
    ====

    Once again, stay tuned!

  9. > These people have a belief — a belief so strong it
    > borders on a religion — that AI is founded on mathematics, and the best
    > way to build an AI is to design it as a formal system. They will also tell
    > you that human intelligence is just a ridiculously botched attempt by
    > nature to build something that should have been a logico-mathematical intelligence.

    There is a (minor) SF author named John C. Wright, who
    wrote a transhumanist SF trilogy (overall title:
    "The Golden Age")

    The books were received rapturously in >Hist circles, and
    the author himself was warmly welcomed on the Extropians'
    mailing list, until his conversion (from Ayn Randian Objectivism) to
    Christianity (with all that entails) ultimately made him persona non
    grata.

    However, Wright's science-fictional AIs (known in the books as
    "sophotechs") capture the flavor of the kind of AI still
    dreamed of by the preponderance of >Hists (and described by
    Richard Loosemore above).

    Amusingly, there's an "evil" AI in the story (the "Nothing
    Sophotech") whose "irrational" innards mirror very closely
    how some modern theorists (Edelman, say, or George Lakoff)
    would describe the architecture of a human mind (or brain).

    A Usenet reviewer of the books exclaimed:

    ------------
    After two and a half books of crazy-ass post-human hijinks, Wright
    declares that the Final Conflict will be between the rational
    thought-process of the Good Guys and the insane thought-process of the
    Bad Guys. . .

    He does a great job of describing how *I* think the sentient
    mind works, and imputes it to the evil overlord.

    (Really. I was reading around page 200, thinking "This argument
    doesn't work because the human mind doesn't work that way; it works
    like *this*." Then I got to page 264, and there was an excellent
    description of *this*.)

    Then Wright declares that his side wins the argument, and that's the
    end of the story. (The evil overlord was merely insane, and is
    cured/convinced by Objectivism.)
    ====

    Or maybe the evil overlord read the Sequences. ;->

    There are longer quotes in the comment threads of:

    http://amormundi.blogspot.com/2008/03/challenge.html
    http://amormundi.blogspot.com/2010/10/of-differently-intelligent-beings.html

  10. > . . .the whole Transhumanist/Singularitarian mishegas. . .
    > It has to do with politics,. . . religious yearnings. . .,
    > cult formation. . ., psychology. . ., and other general subjects.

    There's been some interesting Web commentary from folks who once
    got swept up in these enthusiasms by coming into contact
    with Eliezer Yudkowsky's LessWrong blog (that's his most recent
    Web venue -- he started out [as a serious transhumanist, leaving aside
    his earliest Usenet contributions on Barney the Dinosaur]
    by publishing Web articles and participating on the Extropians' list
    in the mid-90's; then at the turn of the century [at the time
    he co-founded the Singularity Institute for Artificial Intelligence]
    created his own mailing list, S[hock]L[evel]4; then in the
    mid-2000s he became a co-blogger with transhumanist economist Robin Hanson
    at "Overcoming Bias"; and most recently spun off his own
    blog LessWrong), but who later soured on that community
    and came to conclusions not unlike those I myself reached
    more than a decade ago.

    Alexander Kruel is one such person who started out as an
    enthusiastic supporter of the LessWrong brand of "rationality"
    but later began formulating detailed critiques of that
    community and its guru. He once posted this article
    to his own blog:

    Possible reasons for a perception of lesswrong/SIAI as a cult
    http://postbiota.org/pipermail/tt/2012-July/011768.html

    The original blog-link has since been removed by the author, who
    was allegedly the target of sustained harassment as a result
    of this sort of criticism, according to
    http://rationalwiki.org/wiki/Lesswrong#The_rational_way_to_deal_with_critics
    and
    http://rationalwiki.org/wiki/Talk:LessWrong ("Crossing the cult event horizon")

    Christopher Hallquist (a.k.a. Topher Hallquist) is someone
    else who, several years ago, bridled at the suggestion that
    the community aggregated around the LessWrong blog might
    be a "cult" (see the comment thread at
    http://amormundi.blogspot.com/2013/02/from-futurism-to-retro-futurism.html )
    but who has become much more critical in more recent blog posts:

    http://www.patheos.com/blogs/hallq/2014/07/the-lesswrongmiri-communitys-problem-with-experts-and-crackpots/
    http://www.patheos.com/blogs/hallq/2014/12/rokos-basilisk-lesswrong/

    This past year, he's become even harsher:

    https://topherhallquist.wordpress.com/2015/07/30/lesswrong-against-scientific-rationality/
    https://topherhallquist.wordpress.com/2015/08/17/reply-to-scott-alexander/

    The comment thread in that last post is particularly illuminating,
    as it contains responses from the actual target of all this criticism.

  11. > Alexander Kruel is one such person who started out as an
    > enthusiastic supporter of the LessWrong brand of "rationality"
    > but later began formulating detailed critiques of that
    > community and its guru. [But many of his critical blog posts
    > have] since been removed by the author, who
    > was allegedly the target of sustained harassment as a result
    > of this sort of criticism, according to
    > http://rationalwiki.org/wiki/Lesswrong#The_rational_way_to_deal_with_critics
    > and
    > http://rationalwiki.org/wiki/Talk:LessWrong ("Crossing the cult event horizon")

    Nevertheless, it seems that many of them were archived by the Wayback Machine
    before they were removed from the current incarnation of the blog.

    They're worth a perusal:

    https://web.archive.org/web/20141013084504/http://kruel.co/#sthash.Q6tAPmG1.dpbs
    https://web.archive.org/web/20141013084608/http://kruel.co/2012/07/17/miri-lesswrong-critiques-index/#sthash.WpWxyBRj.dpbs

  12. > There is a (minor) SF author named John C. Wright who
    > wrote a transhumanist SF trilogy. . .

    I do not mean to tar all SF authors with the same brush, by
    any means. There are far more sensible voices in the
    SF-verse. Ironically, some of them (e.g., Greg Egan)
    were once **recommended** by the Singularitarian-sect
    >Hists as a way for mehums (mere humans ;-> ) to expand
    their minds around higher-Shock-Level ideas. Since
    then, the Singularitarian guru-wannabes have been
    rather mercilessly skewered by none other than Egan
    himself, which (not surprisingly) resulted in the
    official line being changed -- it's now no longer
    "rational" to look to SF for serious discussion, or
    speculation, about >Hist topics. ;->

    See, e.g.,
    Zendegi - [a review by] Gareth Rees
    http://gareth-rees.livejournal.com/31182.html
    (via
    http://rationalwiki.org/wiki/Lesswrong#cite_note-57 )

    and see also
    http://amormundi.blogspot.com/2012/06/i-dont-think-that-phrase-straw-man.html
    http://amormundi.blogspot.com/2012/06/robot-cultists-chastise-charlie-stross.html
    and the comment thread of
    http://amormundi.blogspot.com/2011/05/more-signs-of-singularity-damn-you-auto.html

    Egan himself participated in an exchange hosted by
    philosopher and >Hism sympathizer Russell Blackford (see comment
    thread at
    http://amormundi.blogspot.com/2008/04/greg-egan-on-transhumanists.html )

    Charlie Stross, SF author and blogger who wrote the Singularity-scenario
    novel _Accelerando_ some years ago, has since turned rather sour
    on the Singularitarians:
    http://www.antipope.org/charlie/blog-static/2012/05/deconstructing-our-future.html

    And more than a decade ago, Bruce Sterling gave an entertaining, but
    skeptical, talk on the Singularity:
    http://longnow.org/seminars/02004/jun/11/the-singularity-your-future-as-a-black-hole/

  13. In the comment thread of
    http://amormundi.blogspot.com/2015/11/relevant-expertise-in-critique-of.html
    Dale wrote
    > my current twitter exchanges with robocultists trundle along the
    > same well-worn grooves as they did a decade ago.
    and I snarked, in reply
    > Yes, but now you get to hear from a whole new generation of 'em. . .
    and later on, Gareth Nelson retorted
    > i'm 28 now so I suppose you could say i'm "the new generation",
    > if that's at all relevant.

    I suppose the following might bear on the "if that's at all relevant"
    remark ;->

    http://www.overcomingbias.com/2014/01/i-was-wrong.html
    --------------
    I Was Wrong
    By Robin Hanson
    January 21, 2014

    On Jan 7, 1991 Josh Storrs Hall made this offer to me on the Nanotech email list:

    > I hereby offer Robin Hanson (only) 2-to-1 odds on the following statement:
    > “There will, by 1 January 2010, exist a robotic system capable of the
    > cleaning an ordinary house (by which I mean the same job my
    > current cleaning service does, namely vacuum, dust, and scrub the
    > bathroom fixtures). This system will not employ any direct copy of
    > any individual human brain. Furthermore, the copying of a living
    > human brain, neuron for neuron, synapse for synapse, into any
    > synthetic computing medium, successfully operating afterwards and
    > meeting objective criteria for the continuity of personality,
    > consciousness, and memory, will not have been done by that date.”. . .

    At the time I replied that my estimate for the chance of this was
    in the range 1/5 to 4/5, so we didn’t disagree. But looking back I
    think I was mistaken – I could and should have known better, and accepted this bet.

    I’ve posted on how AI researchers with twenty years of experience tend
    to see slow progress over that time, which suggests continued future
    slow progress. Back in ’91 I’d had only seven years of AI experience,
    and should have thought to ask more senior researchers for their opinions.
    But like most younger folks, I was more interested in hanging out
    and chatting with other young folks. While this might sometimes be
    a good strategy for finding friends, mates, and same-level career allies,
    it can be a poor strategy for learning the truth. Today I mostly
    hear rapid AI progress forecasts from young folks who haven’t bothered
    to ask older folks, or who don’t think those old folks know much relevant.

    I’d guess we are still at least two decades away from a situation where
    over half of US households use robots to do over half of the house
    cleaning (weighted by time saved) that people do today.
    ====

    Darn! I really need that housecleaning robot. (Pace Robert Heinlein,
    though, no manufacturer would dare, in the 21st century, to call
    itself "Hired Girl, Inc.". I suppose that's progress.)

    OTOH, this guy doesn't look so young, so maybe his sober
    analysis means there's hope I'll get that cleaning robot Real Soon Now:

    http://www.wfs.org/blogs/len-rosen/top-human-minds-meet-montreal-discuss-artificial-minds
    ------------
    Len Rosen's 21st Century Tech
    Top Human Minds Meet in Montreal to Discuss Artificial Minds
    Posted on December 9, 2015

    In a headline yesterday Bloomberg Business shouted "Why 2015 was a
    Breakthrough Year in Artificial Intelligence." There is no doubt
    that AI technology is evolving at a faster rate each year so
    the author, Jack Clark, is not wrong. . .

    In 2015 Elon Musk and Stephen Hawking both expressed worry that
    AI could spell doom for humanity at its current pace of advancement.
    Musk even pledged $10 million U.S. to research into ensuring this
    doesn't happen. In 2015 roboethicists are debating rules by which
    AI should be governed. . .

    The year is drawing to a close and one wonders what 2016 will
    bring to AI? My guess is even faster advancements and more fear. . .
    ====

    Or not. Sigh. Wake me up on judgment day. . .

  14. > Charlie Stross, SF author and blogger who wrote the Singularity-scenario
    > novel _Accelerando_ some years ago, has since turned rather sour
    > on the Singularitarians:
    > http://www.antipope.org/charlie/blog-static/2012/05/deconstructing-our-future.html

    While the above link points doubters of >Hism back to this blog,
    the links I really wanted were:

    Three arguments against the singularity
    http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html

    Roko's Basilisk wants YOU
    http://www.antipope.org/charlie/blog-static/2013/02/rokos-basilisk-wants-you.html

  15. I did some back-of-the-envelope calculations assuming that the basic unit of information processing in the human brain is the atom. On that assumption, following Moore's law, you can create a tortured brain in a sensory-deprived void in about 250 years. R&D costs would also rise over the course of the project to the point where it may become prohibitively expensive to maintain that pace of development, so you may have to add one or two 0's to the end of that figure. Something may happen between now and then, though.
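
    For what it's worth, here's the shape of such a Moore's-law
    extrapolation as a Python sketch. Every input below (atom count,
    assumed processing "units" needed per atom, assumed present-day
    capacity, doubling time) is a placeholder of mine rather than the
    figure behind the 250-year guess above, so the printed horizon
    will differ; the point is only that shifting any input by a few
    orders of magnitude moves the answer by decades, hence the
    "one or two 0's".

    import math

    # All assumed placeholder values, not measurements:
    ATOMS_IN_BRAIN      = 1.4e26   # ~1.4 kg of mostly light atoms
    UNITS_PER_ATOM      = 1e6      # assumed processing budget per atom
    CURRENT_CAPACITY    = 1e18     # assumed present-day capacity, same units
    DOUBLING_TIME_YEARS = 2.0      # classic Moore's-law doubling period

    required  = ATOMS_IN_BRAIN * UNITS_PER_ATOM
    doublings = math.log2(required / CURRENT_CAPACITY)
    years     = doublings * DOUBLING_TIME_YEARS
    print(f"{doublings:.0f} doublings needed, roughly {years:.0f} years")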

  16. No need to wait, futurological discourse seems to create tortured brains in a void in the here and now all the time.

  17. > Charlie Stross, SF author and blogger who wrote the Singularity-scenario
    > novel _Accelerando_ some years ago, has since turned rather sour
    > on the Singularitarians:
    >
    > Three arguments against the singularity
    > http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html

    http://futurismic.com/2011/06/23/stross-starts-singularity-slapfight/
    -------------
    Stross starts Singularity slapfight
    Paul Raven
    23-06-2011

    Fetch your popcorn, kids, this one will run for at least a week
    or so in certain circles. Tonight’s challenger in the blue corner,
    it’s the book-writing bruiser from Edinburgh, Charlie Stross,
    coming out swinging. . .

    And now, dukes held high in the red corner, Mike Anissimov steps
    into the ring. . .

    . . .

    Seriously, I just eat this stuff up – and not least because I’m
    fascinated by the ways different people approach this sort of debate.
    Rhetorical fencing lessons, all for free on the internet! . . .
    ====

    Rhetoric -- now that's a dirty word to the neo-rationalists
    at LessWrong.

    http://lesswrong.com/lw/3k/how_to_not_lose_an_argument/
    (via
    http://amormundi.blogspot.com/2013/02/engaging-robot-cultists-on-specifics.html )
    --------------------
    The science of winning arguments is called Rhetoric, and it
    is one of the Dark Arts. Its study is forbidden to rationalists,
    and its tomes and treatises are kept under lock and key in
    a particularly dark corner of the Miskatonic University library.
    More than this it is not lawful to speak.
    ====

    I suppose that might mean Dale is Lord Voldemort in disguise.
    Or maybe just an Orc of the Dark Tower.

    ;->

  18. I'm the Mouth of Sauron.

  19. > http://amormundi.blogspot.com/2013/02/engaging-robot-cultists-on-specifics.html

    You know, I was just perusing that post and its comment thread,
    and noticed:

    http://rationalwiki.org/wiki/Talk:LessWrong/Archive4
    -------------------
    LessWrong is nothing like Scientology, and that's a completely silly
    comparison. And trivialises Scientology the way casually comparing people
    to Hitler trivialises the Nazis and what they did.

    I'll note here that I happen to know really lots and lots about Scientology
    so I can speak knowledgeably about the comparison. . .

    Scientology is basically the Godwin example of cults; comparing any
    damn thing to Scientology makes actually abusive cults that are not
    as bad as Scientology seem benign. LessWrong is not Scientology.
    Not even slightly. . .

    Their stupidities are stupid, their fans are fanboys, their good bits
    are fine, that's quite sufficient. The stupidities fully warrant
    horselaughs. . . [but y]ou're mistaking stupidity
    for malice, and this is leading you to make silly comparisons

    David Gerard
    21 June 2012
    ===

    Since then, Mr. Gerard has substantially changed his tune.

    http://reddragdiva.tumblr.com/post/127944291153/reasonableapproximation-uncrediblehallq
    -------------------
    [Aug 30th, 2015]

    i see no evidence that yudkowsky distinguishes in practice between “x the critic”
    and “x the person”: once a criticism is made, the person is deemed reprehensible
    and their every word must be ignored.

    (for the bored casual reader: critic [Alexander Kruel] begs for mercy because harassment from
    yudkowsky’s fans is affecting his health; yudkowsky’s idea of a reasonable response.
    i used to be an active critic of scientology, and my first thought was steps a-e,
    what an ex-scientologist sick of the harassment that is an apostate’s lot was
    supposed to do to get the harassment to stop. i stand by my statement that this
    was self-evidently reprehensible behaviour that any decent human has an ethical
    obligation to condemn.). . .

    lesswrong ideas - not just the basilisk, all the other stuff that’s basilisk
    prerequisites - were observably a memetic hazard for some lesswrong readers.

    i would loosely categorise the most susceptible victims as young, smart, idealistic,
    somewhat aspergic, ocd tendencies, and not a very strong sense of self. this is
    also i think a perceptible category of people who reacted to lesswrong ideas
    like catnip. people who went “well that makes obvious good sense” when yudkowsky
    told them to donate as absolutely as much money as they could to siai, as the
    most effective and altruistic possible action any person could take.
    roko strikes me as being an example: read the basilisk post and the previous post.
    no wonder he wished later he’d never heard of “these ideas”. . .

    something that people will tend to tag a “cult” can form spontaneously and without
    actual intent on the part of the founder. while this does not i think befall [sic]
    [read: give responsibility for, charge, invest, endow; burden, encumber, saddle, tax ;-> ]
    the founder with unlimited ethical liability - nobody actually wants to be brian[*] - i
    do think yudkowsky and siai’s response was horrifyingly short of adequate. charisma
    is fun, i can be charismatic as anything! but when people get that look in their
    eyes and fawn over your ideas, that’s the world telling you to back off and think
    very fucking hard about what you’re doing. . .

    [*] [This is a reference to the Monty Python movie, presumably. However,
    in this case, I'm not at all sure that the founder doesn't want to be Brian.]
    ====

    Yet another LW apostate. Let's count 'em. David Gerard, above; Alexander Kruel;
    Dmytry Lavrov; [Chris]topher Hallquist. There may well be others, of course,
    who have not written, or written as much, about their reservations.

  20. Breaking news!

    http://gizmodo.com/musks-plan-to-save-the-world-from-dangerous-ai-develop-1747645289
    ----------------
    Musk's Plan to Save the World From Advanced AI: Develop Advanced AI
    Maddie Stone
    12/12/15 10:00am

    Noted killer robot-fearer Elon Musk has a plan to save humanity
    from the looming robopocalypse: developing advanced artificial
    intelligence systems. You know, the exact technologies that could
    lead to the robopocalypse. . .

    Yesterday, Tesla’s boss, along with a band of prominent tech
    executives including LinkedIn co-founder Reid Hoffman and
    PayPal co-founder Peter Thiel, announced the creation of
    OpenAI, a nonprofit devoted to “[advancing] digital intelligence
    in the way that is most likely to benefit humanity as a whole,
    unconstrained by a need to generate financial return.”

    The company’s founders are already backing the initiative with
    $1 billion in research funding over the years to come. Musk will
    co-chair OpenAI with venture capitalist Sam Altman.

    [venture capitalist... unconstrained by a need to generate
    financial return? What is he, some kind of phila... philo...
    uh, good deed doer?]

    Here’s Altman’s response to a question about whether accelerating
    AI technology might empower people seeking to gain power or
    oppress others:

    "Just like humans protect against Dr. Evil by the fact that
    most humans are good, and the collective force of humanity can
    contain the bad elements, we think it's far more likely that many,
    many AIs will work to stop the occasional bad actors than
    the idea that there is a single AI a billion times more powerful
    than anything else,” Altman said. “If that one thing goes
    off the rails or if Dr. Evil gets that one thing and there
    is nothing to counteract it, then we’re really in a bad place.”
    ====

    This is NOT THE PLAN! Haven't these people been reading SL4?
    The idea is that the smartest person in the world, who understands
    Friendliness Theory, will make a singleton AI imbued with
    its maker's Friendliness (called the Sysop, back in the day)
    that will then prevent any other competing AI from ever
    arising. Sort of like Forbin's Colossus, only, you know, in
    a good (or Friendly(TM) ) way.

  21. https://openai.com/blog/introducing-openai/
    ----------------
    Introducing OpenAI
    by Greg Brockman, Ilya Sutskever, and the OpenAI team
    December 11, 2015

    OpenAI is a non-profit artificial intelligence research company.
    Our goal is to advance digital intelligence in the way that is
    most likely to benefit humanity as a whole, unconstrained by
    a need to generate financial return. . .

    OpenAI's research director is Ilya Sutskever, one of the world
    experts in machine learning. Our CTO is Greg Brockman, formerly
    the CTO of Stripe. The group's other founding members are world-class
    research engineers and scientists: Trevor Blackwell, Vicki Cheung,
    Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata,
    and Wojciech Zaremba.
    ====

    Wot, no Ben Goertzel? (no Richard Loosemore? :-0 ).

    Also, none of these people seem to have learned to think
    Rationally(TM) by partaking of LessWrong's purifying discourse,
    as far as I can Google.

    Though one of them is mentioned in a CFAR (Center for Applied
    Rationality) testimonial:

    http://rationality.org/testimonials/
    ----------------
    Not that there are magic secrets here. You can learn about everything
    CFAR teaches from books like Kahneman’s _Thinking Fast and Slow_
    (also recommended!). But, for one thing, you won’t. And most importantly,
    there’s huge value in doing it together with an amazing group of
    like-minded people. I was seriously impressed by the caliber of my
    fellow attendees, not least of which being YCombinator’s TREVOR BLACKWELL.
    ====

    And their research director once signed an open letter that mentioned MIRI:

    http://lesswrong.com/lw/liz/research_priorities_for_artificial_intelligence/
    ----------------
    The Future of Life Institute has published their document "Research
    priorities for robust and beneficial artificial intelligence" and
    written an open letter for people to sign indicating their support. . .

    --

    A number of prestigious people from AI and other fields have signed
    the open letter, including Stuart Russell, Peter Norvig, Eric Horvitz,
    ILYA SUTSKEVER, several DeepMind folks, Murray Shanahan,
    Erik Brynjolfsson, Margaret Boden, Martin Rees, Nick Bostrom,
    Elon Musk, Stephen Hawking, and others. . .

    Also, it's worth noting that the research priorities document cites
    a number of MIRI's papers.
    ====

    So maybe these two have learned enough about the Way of Rationality(TM)
    to keep the rest of that unenlightened horde from destroying humanity.

    Loc. cit.
    ----------------
    Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine,
    and Vishal Sikka are advisors to the group. OpenAI's co-chairs
    are Sam Altman and Elon Musk.

    Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel,
    Amazon Web Services (AWS), Infosys, and YC Research are donating
    to support OpenAI. In total, these funders have committed $1 billion,
    although we expect to only spend a tiny fraction of this in the
    next few years.
    ====

    Interesting to see Alan Kay's name there.

    But uh oh, one of these guys is definitely on the Dark Side:

    https://plus.google.com/113710395888978478005/posts/RuBHhTFNwPV
    ----------------
    Andrew Ng
    Mar 5, 2015

    Enough thoughtful AI researchers (including YOSHUA BENGIO,
    Yann LeCun) have criticized the hype about evil killer robots
    or "superintelligence," that I hope we can finally lay
    that argument to rest. This article summarizes why I
    don't currently spend my time working on preventing AI
    from turning evil.
    http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/
    ====

    We're doomed! Doomed, I tell you!

  22. Speaking of YCombinator, over at Hacker News:

    https://news.ycombinator.com/item?id=10720176
    ----------------
    Introducing OpenAI. . .

    vox_mollis

    Where does this leave MIRI?

    Is Eliezer going to close up shop, collaborate with OpenAI, or compete?


    robbensinger

    MIRI employee here!

    We're on good terms with the people at OpenAI, and we're
    very excited to see new AI teams cropping up with an explicit
    interest in making AI's long-term impact a positive one.
    Nate Soares is in contact with Greg Brockman and Sam Altman,
    and our teams are planning to spend time talking over the coming months.

    It's too early to say what sort of relationship we'll develop,
    but I expect some collaborations. We're hopeful that the
    addition of OpenAI to this space will result in promising
    new AI alignment research in addition to AI capabilities research.


    Kutta

    Almost certainly, the AI safety pie getting bigger will
    translate to more resources for MIRI too.

    That said, although a lot of money and publicity was thrown
    around regarding AI safety in the last year, so far I haven't
    seen any research outside MIRI that's tangible and substantial.
    Hopefully big money AI won't languish as a PR existence,
    and of course they shouldn't reinvent MIRI's wheels either.


    cobaltblue

    I'm sure if OpenAI ever produces people with anything interesting
    to contribute to the alignment problem, MIRI will happily collaborate.
    That $1bn commitment must be disappointing to some people though.
    ====


    Putting a brave face on it, I see.


    Loc. cit.
    ----------------
    Houshalter

    Surveys of AI experts give a median prediction that we will
    have human level AI within 30 years. And a non-trivial probability
    of it happening in 10-20 years. They are almost unanimous in
    predicting it will happen within this century.

    benashford

    They were also unanimous it would happen last century?

    What do we mean by "human level" anyway? Last time I got
    talking to an AI expert he said current research wouldn't lead
    to anything like a general intelligence, rather human level
    at certain things. Machines exceed human capacities already
    in many fields after all...
    ====

    Miss Brahms, we have been diddled. And I am unanimous in
    that.
    https://www.youtube.com/watch?v=m6Px53b_prc

  23. > CFAR teaches from books like Kahneman’s _Thinking Fast and Slow_

    https://plus.google.com/113565599890566416138/posts
    -----------------

    As some of you know, I am a long-time critic of Kahneman, the
    "cognitive biases" idea, and the cult that has grown up around
    that idea (to wit Eliezer Yudkowsky, the Less Wrong blog,
    Future of Humanity Institute, etc.). So it will come as no
    surprise if I try to dismembowel the above teaser. . .

    What I have just described is multiple, simultaneous constraint
    relaxation (MSCR) ..... which means the cognitive system does its work
    by exploring models in parallel, and by hunting for ways to make
    all the elements consistent with one another. It turns out that
    this kind of cognitive system is able to make logical, rational,
    symbolic systems fall flat on their face. MSCR is probably the
    explanation for the overwhelming majority of the stuff that
    happens when you go about your mental business every day.
    Sometimes the MSCR process comes up with wrong answers in
    particular circumstances, but the solution to those circumstances
    is to LEARN formal systems (like arithmetic) which allow you to
    deal with the special cases.

    Without MSCR you would not have any intelligence at all. The
    occasional mistakes are tolerable as the price that has to be
    paid for that incredibly flexible thing that you call
    "intelligence". And if you think that MSCR is a dumbass way to
    build an intelligent system, just try to use the alternative
    (logical, rational, etc.) technique to build an AI capable
    of even a fraction of the mental versatility of a 5-year-old
    human child. After more than half a century of trying, AI
    researchers have utterly failed at that task. (They can do
    other things, but the core mystery of how that kid acquired
    her knowledge and skills as a result of experience is pretty
    much untouched).

    Now, having said all of that, Kahneman, Yudkowsky, and all the
    others who celebrate human irrationality would say that
    what actually happened when your brain built that answer
    was a failure of human cognition.

    Sigh!
    ====
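
    Loosemore doesn't give code for MSCR, but here's a toy sketch of
    mine of what "hunting for ways to make all the elements consistent
    with one another" can look like in the small: a Hopfield-style
    network whose weights encode soft constraints, settling from a
    corrupted cue back into the stored, mutually consistent pattern by
    repeated local updates, rather than by rule-by-rule deduction.
    --------------
    import numpy as np

    rng = np.random.default_rng(0)
    pattern = rng.choice([-1, 1], size=32)          # the "consistent" state
    W = np.outer(pattern, pattern).astype(float)    # Hebbian constraint weights
    np.fill_diagonal(W, 0.0)

    state = pattern.copy()
    flip = rng.choice(32, size=10, replace=False)   # corrupt a third of the cue
    state[flip] *= -1

    for sweep in range(5):                          # asynchronous relaxation
        for i in rng.permutation(32):
            # Each unit flips to whatever sign best satisfies its constraints.
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("recovered stored pattern:", bool(np.array_equal(state, pattern)))
    ====
    Whether anything like this scales up to a five-year-old's
    versatility is, of course, exactly what's in dispute.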

    YMMV. Stay tuned, sports fans.
