Saturday, January 26, 2013

A Robot God Apostle's Creed for the "Less Wrong" Throng

The singularitarian and techno-immortalist Robot Cultists who throng the tinpot-fiefdom of Less Wrong apparently had a minor tempest in their tinpot half a year or so ago, in which some of the faithful dared declare that their sub(cult)ure might benefit from more contrarians and skeptics here and there, especially given the high profile of their self-congratulatory, self-promotional utterances about how marvelously self-critical and bias-fumigated they all are compared to Outsiders. But at least one Believer was having none of it, declaring:
I think the Sequences got everything right and I agree with them completely... Even the controversial things, like: I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable). I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs. I think mainstream science is too slow and we mere mortals can do better with Bayes. I am a utilitarian consequentialist and think that if [you] allow someone to die through inaction, you're just as culpable as a murderer. I completely accept the conclusion that it is worse to put dust specks in 3^^^3 people's eyes than to torture one person for fifty years. I came up with it independently, so maybe it doesn't count; whatever. I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?). "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
Yes, of course it is ridiculous to pretend that the many worlds interpretation is so non-problematic and non-controversial that one would have to be "dreaming" to entertain the possibility that it may one day be supplanted by a better theory that looks more like alternatives already on offer -- and, yes, it is especially ridiculous to pretend so on the basis of not knowing more about physics than a non-physicist high school drop-out guru-wannabe who thinks he is leading a movement to code a history-shattering Robot God who will solve all our problems for us any time soon.

Yes, of course it is ridiculous to believe that your frozen, glassified, hamburgerized brain will be revived and sooper-enhanced and possibly immortalized by swarms of billions of robust reliably controllable and programmable self-replicating nanobots, and/or your info-soul "migrated" via snapshot "scanning" into a cyberspatial Holodeck Heaven where it will cavort bug-and-crash-and-spam free for all eternity among the sexy sexbots.

Yes, of course it is ridiculous to imagine that non-scientists in an online Bayes-Theorem fandom can help accomplish warranted scientific results faster than common or garden variety real scientists can themselves by running probability simulations in their club chairs or on computer programs, in addition to or even instead of anybody engaging in actually documentable, repeatable, testable experiments, publishing the results, and discussing them with people actually qualified to re-run and adapt and comment on them as peers.

Yes, of course it is ridiculous to think of oneself as the literal murderer of every one of the countless distant but conceivably reachable people who share the world with you but are menaced by violence, starvation, or neglected but treatable health conditions even if it is true that not caring at all about such people would make you a terrible asshole -- and, yes, it is ridiculous to fall for the undergraduate fantasy that probabilistic formulae might enable us to transform questions of what we should do into questions of fact in the first place.

Yes, of course it is ridiculous to say so many nonsensical things and then declare the rest of the world mad.

Yes, it is ridiculous that the very same Eliezer Yudkowsky treated as the paragon against whose views all competing theories of physics are measured is the very same person endorsed a few sentences later as the meta-ethical paragon compared to whose views all competing moral philosophies are judged wanting. Sure, sure, your online autodidact high priest deserves the Nobel Prize for Physics and the Nobel Peace Prize on top of it in addition to all that cash libertopian anti-multiculturalist reactionary and pop-tech CEO-celebrity Peter Thiel keeps giving him for being an even better Singularipope than Kurzweil. Who could doubt it?

Perhaps grasping the kind of spectacle he is making of himself, our True Believer offers up this defensive little bit of pre-emptive PR-management in his post (not that it yields any actual qualification of the views he espouses or anything): "This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion." Hey, pal, if the shoe hurts, you're probably wearing it.

By the way, if anybody is wondering just what The Sequences are, you know, the ones that presumably "get everything right" -- no, nothing culty there -- they are topical anthologies of posts that have appeared on Less Wrong (major contributions written by, you guessed it, Eliezer Yudkowsky, naturellement) and function more or less as site FAQs with delusions of grandeur. While not everything in The Sequences is wrong, little that isn't wrong in them isn't also widely grasped and often endorsed by all sorts of folks who aren't also members of Robot Cults who think they are the only ones who aren't wrong, er, are "less wrong" -- which is the usual futurological soft shoe routine, after all.

Inspired by the aggressive-defensive post I have been dissecting so far, another True Believer offered up -- again, all in good funny fun, right, right? -- the following intriguing, revealing Robot God Apostle's Creed for the Less Wrong Throng, which I reproduce here for your delight and edification:
I believe in Probability Theory, the Foundation, the wellspring of knowledge,
I believe in Bayes, Its only Interpretation, our Method.
It was discovered by the power of Induction and given form by the Elder Jaynes.
It suffered from the lack of priors, was complicated, obscure, and forgotten.
It descended into AI winter. In the third millennium it rose again.
It ascended into relevance and is seated at the core of our FAI.
It will be implemented to judge the true and the false.
I believe in the Sequences,
Many Worlds, too slow science,
the solution of metaethics,
the cryopreservation of the brain,
and sanity everlasting.
Phyg.
Nothing to see here, folks. For more on how totally not a cult the Robot Cult is, see this and this; and for more on the damage even so silly a cult as the Robot Cult can do, see this and this.

30 comments:

  1. In fairness to LessWrong, plenty of folks affiliated with the site are also concerned about it being a Yudkowsky cult. From a high-karma comment on that post:

    "If you agree with everything Eliezer wrote, you remember him writing about how every cause wants to be a cult. This post looks exactly like the sort of cultish entropy that he advised guarding against to me. Can you imagine a similar post on any run-of-the-mill, non-cultish online forum?"

  2. OK, that is creepy on many, many levels...
    If these guys were wearing bathrobes, running shoes, and tracksuits and were all castrated, you couldn't tell them apart from a cult...

    A couple more steps to crazy town, guys, and the spaceship hiding in the asteroid will be along momentarily to collect you after you down the Kool-Aid Eliezer is handing out.

  3. > Yes, of course it is ridiculous to say so many nonsensical
    > things and then declare the **rest** of the world mad. . .
    >
    > [A]ll in good funny fun, right, right?

    Laments of the Rationalists:

    http://lesswrong.com/lw/38u/best_career_models_for_doing_research/344l
    ----------------
    FormallyknownasRoko [Roko Mijic]
    10 December 2010 05:06:28PM

    [In reference to "Roko's Basilisk" -- see
    http://rationalwiki.org/wiki/LessWrong#Roko.27s_Basilisk
    or the comment at
    http://amormundi.blogspot.com/2011/07/futurological-brickbats.html ]

    . . .

    Furthermore, I would add that I wish I had never learned about
    any of these ideas. In fact, I wish I had never come across the
    initial link on the internet that caused me to think about
    transhumanism and thereby about the singularity; I wish very
    strongly that my mind had never come across the tools to inflict
    such large amounts of potential self-harm with such small
    durations of inattention, uncautiousness and/or stupidity,
    even if it is all premultiplied by a small probability. . .

    I went to the effort of finding out a lot, went to SIAI and
    Oxford to learn even more, and in the end I am left seriously
    disappointed by all this knowledge. In the end it all boils down to:

    "most people are irrational, hypocritical and selfish, if you
    try and tell them they shoot the messenger, and if you try
    and do anything you bear all the costs, internalize only tiny
    fractions of the value created if you succeed, and you almost
    certainly fail to have an effect anyway. And by the way the
    future is an impending train wreck"

    I feel quite strongly that this knowledge is not a worthy
    thing to have sunk 5 years of my life into getting.
    ----------------

    which engendered the reply:

    ----------------
    XiXiDu [Alexander Kruel, http://www.kruel.co ]
    10 December 2010

    I wish you'd talk to someone other than Yudkowsky about this.
    You don't need anyone to harm you, you already seem to harm
    yourself. You indulge yourself in self-inflicted psychological
    stress. As Seneca said, "there are more things that terrify
    us than there are that oppress us, and we suffer more often
    in opinion than in reality". You worry and pay interest
    for debt that will likely never be made. . .
    ----------------

  4. http://kruel.co/2012/11/02/rationality-come-on-this-is-serious/
    ----------------
    What I can say is that I am becoming increasingly confused
    about how to decide anything and increasingly tend to assign
    more weight to intuition to decide what to do and naive
    introspection to figure out what I want.

    John Baez replied,

    Well, you actually just described what I consider the correct
    solution to your problem! Rational decision processes take a
    long time and a lot of work. So, you can only use them to
    make a tiny fraction of the decisions that you need to make.
    If you try to use them to make more than that tiny fraction,
    you get stuck in the dilemma you so clearly describe: an infinite
    sequence of difficult tasks, each of which can only be done
    after another difficult task has been done!

    This is why I think some ‘rationalists’ are just deluding
    themselves when it comes to how important rationality is. Yes,
    it’s very important. But it’s also very important not to try
    to use it too much! If someone claims to make most of
    their decisions using rationality, they’re just wrong: their
    ability to introspect is worse than they believe.

    So: learning to have good intuition is also very important – because
    more than 95% of the decisions we make are based on intuition.
    Anyone who tries to improve their rationality without also
    improving their intuition will become unbalanced. Some even
    become crazy.
    ----------------

    Indeed. Or, as G. K. Chesterton put it a century ago (_Orthodoxy_, 1908):

    "If you argue with a madman, it is extremely probable that
    you will get the worst of it; for in many ways his mind
    moves all the quicker for not being delayed by the things
    that go with good judgment. He is not hampered by a sense
    of humor or by charity, or by the dumb certainties of
    experience. He is the more logical for losing sane
    affections. Indeed, the common phrase for insanity is in
    this respect a misleading one. The madman is not the man
    who has lost his reason. The madman is the man who has
    lost everything except his reason."

    (via
    http://cantuar.blogspot.com/2012/01/how-to-argue-with-madman-from-gk.html )

    (And no, I am not a Catholic. ;-> )

  5. Eliezer wrote... every cause wants to be a cult.

    How very wrong... and revealing.

  6. Another strange aspect to the guru-whammy grip (and there's really
    no other way to characterize it) that Yudkowsky has held over
    a large subset of the >Hist crowd (including the contingent marshalled by
    his tireless PR disciple Michael Anissimov) ever since his
    1996 appearance on the Extropians' mailing list and his
    publication of "Staring Into the Singularity" and other articles
    (including his own characterization of his self-imputed non-neurotypical
    genius as an "Algernon" -- h/t the Daniel Keyes short story and
    later novel and film) in 1997 at the old http://tezcat.com/~eliezer
    Web site, is that **whatever** the prospects for "Strong"
    Artificial Intelligence (or Artificial General Intelligence [AGI] --
    I guess Ben Goertzel originated that term; I don't know for sure),
    Yudkowsky's own rigid utilitarianism (now promulgated at
    LessWrong) absolutely dominates discussions about AI in
    on-line >Hist circles. Ben Goertzel himself doesn't buy it,
    but he's been very very careful indeed to pussyfoot his
    gentle demurrals so as not to inflame Yudkowsky and his
    acolytes (and not always successfully, either).

    It's an odd thing, but maybe not so surprising. The rigid,
    analytical-math-oriented bias of that approach to AI
    1) harks back to the GOFAI of the 50s and 60s, when some
    folks expected the whole thing to be soluble by a smart
    grad student spending a summer on it 2) reinforces
    Yudkowsky's own dear image of himself as a consummate mathematician
    3) is congruent with the kind of Ayn Randian, libertopian
    bias among so many of the SF-fan, >Hist crowd.

    No other approaches to AI need apply, because they're either
    1) wrong, or worse 2) an existential risk.

    Richard Loosemore, for example (someone who, unlike Yudkowsky, has an
    actual credential or two) has triggered the virulent immune
    response of Yudkowsky and his defenders by proposing a model
    of AI that seems, to **me** at least (but hey, what do I know? ;-> )
    far more plausible (even if still not likely to be
    technologically achievable anytime soon).

    (I've mentioned Loosemore before in comments on
    http://amormundi.blogspot.com/2007/10/superla-pope-peeps.html
    http://amormundi.blogspot.com/2009/03/from-futurological-confusions-to.html
    http://amormundi.blogspot.com/2009/08/treder-traitor.html
    http://amormundi.blogspot.com/2010/10/nauru-needs-futurologists.html
    http://amormundi.blogspot.com/2012/02/transhumanist-war-on-brains.html )

  7. > . . . Richard Loosemore . . .

    http://kruel.co/2012/05/14/disturbingly-ungrounded/
    -----------------
    Richard Loosemore, Mon May 14 2012 @ existential mailing list:

    I find this entire discussion to be disturbingly ungrounded.

    We are debating the behavior, drives and motivations of intelligent machines,
    so it would seem critical to understand how the mechanisms underlying
    behavior, drives and motivations would actually work.

    But it is most certainly NOT the case that we understand these
    mechanisms. There is a widespread assumption (especially at SIAI
    and FHI) that the mechanisms must be some type of proposition-based
    utility-maximization function. But this is nothing more than an
    extrapolation from certain types of narrow-AI hierarchical
    goal-planning systems, and a convenient excuse to engage in
    unconstrained mathematical theorizing. In practice, we have
    not come anywhere near to building an AGI system that:

    (a) contains such types of motivation mechanism, with
    extremely high-level supergoals, or
    (b) contains such a mechanism and also exhibits a stable form
    of intelligence.

    Everything said in arguments like the current thread depends
    on exactly how the mechanism would work, but that means that everything
    said is actually predicated on unfounded assumptions.

    On a more particular note:

    On 5/14/12 9:24 AM, Anders Sandberg wrote:
    > For example, the work on formalizing philosophical concepts
    > (automating the process of grounding the fuzzy words into something
    > real) into something an AI could understand requires quite
    > sophisticated understanding of both philosophy and machine learning.

    This assumes that there is some formalization to be had. But there
    are many arguments (including those in some of my own papers on the
    subject) that lead to the conclusion that this kind of formalization
    of semantics, and the whole machine learning paradigm, is not
    going to lead to AGI.

    In plain language: you are never going to formalize the notion
    of “friendliness” in such a way that the AGI can “understand”
    it in the way that will make “Be friendly to humans” a valid
    supergoal statement.

  8. http://lesswrong.com/user/Richard_Loosemore/overview/?count=10&after=t1_6jyo
    ---------------------
    In response to Thoughts on the Singularity Institute (SI)
    Comment author: Richard_Loosemore 10 May 2012 07:11:15PM -3 points

    My own experience with SI, and my background, might be relevant
    here. I am a member of the Math/Physical Science faculty at
    Wells College, in Upstate NY. I also have had a parallel career
    as a cognitive scientist/AI researcher, with several publications
    in the AGI field, including the opening chapter (coauthored with
    Ben Goertzel) in a forthcoming Springer book about the Singularity.

    I have long complained about SI's narrow and obsessive focus
    on the "utility function" aspect of AI -- simply put, SI assumes
    that future superintelligent systems will be driven by certain
    classes of mechanism that are still only theoretical, and which
    are very likely to be superseded by other kinds of mechanism that
    have very different properties. Even worse, the "utility function"
    mechanism favored by SI is quite likely to be so unstable that
    it will never allow an AI to achieve any kind of human-level
    intelligence, never mind the kind of superintelligence that would
    be threatening.

    Perhaps most important of all, though, is the fact that the
    alternative motivation mechanism might (and notice that I am
    being cautious here: might) lead to systems that are extremely
    stable. Which means both friendly and safe.

    Taken in isolation, these thoughts and arguments might amount to
    nothing more than a minor addition to the points that you make above.
    However, my experience with SI is that when I tried to raise
    these concerns back in 2005/2006 I was subjected to a series of
    attacks that culminated in a tirade of slanderous denunciations
    from the founder of SI, Eliezer Yudkowsky. After delivering this
    tirade, Yudkowsky then banned me from the discussion forum that
    he controlled, and instructed others on that forum that discussion
    about me was henceforth forbidden.

    Since that time I have found that when I partake in discussions
    on AGI topics in a context where SI supporters are present, I am
    frequently subjected to abusive personal attacks in which
    reference is made to Yudkowsky's earlier outburst. This activity
    is now so common that when I occasionally post comments here,
    my remarks are very quickly voted down below a threshold that makes
    them virtually invisible. (A fate that will probably apply
    immediately to this very comment).

    I would say that, far from deserving support, SI should be considered
    a cult-like community in which dissent is ruthlessly suppressed
    in order to exaggerate the point of view of SI's founders and
    controllers, regardless of the scientific merits of those views,
    or of the dissenting opinions.

  9. The rigid, analytical math-oriented bias of that approach to AI 1) harks back to the GOFAI of the 50s and 60s, when some folks expected the whole thing to be soluble by a smart grad student spending a summer on it 2) reinforces Yudkowsky's own dear image of himself as a consummate mathematician 3) is congruent with the kind of Ayn Randian, libertopian bias among so many of the SF-fan, >Hist crowd.

    I think there are enormously clarifying observations packed into that formulation, and folks should re-read it.

    Speaking of the way the singularitarians hark back to the most failed most inept most sociopathic most boyz-n-toys AI discourse of mid-century Gernsbackian-pulp post-WW2 U!S!A! footurism, I can't help but cite another passage from Less Wrong that you drew to my attention in a private e-mail a couple of days ago:

    "I've just been through the proposal for the Dartmouth AI conference of 1956, and it's a surprising read. All I really knew about it was its absurd optimism, as typified by the quote:

    An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

    But then I read the rest of the document, and was... impressed. Go ahead and read it, and give me your thoughts. Given what was known in 1955, they were grappling with the right issues, and seemed to be making progress in the right directions and have plans and models for how to progress further. Seeing the phenomenally smart people who were behind this (McCarthy, Minsky, Rochester, Shannon), and given the impressive progress that computers had been making in what seemed very hard areas of cognition (remember that this was before we discovered Moravec's paradox)... I have to say that had I read this back in 1955, I think the rational belief would have been [emphasis added] 'AI is probably imminent'. Some overconfidence, no doubt, but no good reason to expect these prominent thinkers to be so spectacularly wrong on something they were experts in."

    Although the poor Robot Cultist cannot help but point to the "overconfidence" of these sentiments -- all of which were after all completely flabbergastingly serially failed and wrong and ridiculous -- you can tell his heart just isn't in it. Where sensible people look at these pronouncements and see the radically impoverished conception of intelligence and ridiculously triumphalist conception of technoscience driving the discourse, the Robot Cultist finds himself saying, man those dumb sociopathic white guys were really onto something there! Man, were they rational and right or what to believe so irrationally in what was so wrong! Man, I love those guys! Notice that even the retroactive reconstruction of Bayesian triumphalism cannot permit the reality of how "spectacularly wrong" they all were to have any "good reason" -- and nothing about the example gives a critical nudge even now to the assertion about this army of fail that they are "prominent thinkers" and "experts" in sound AI.

    About the Randroidal pot-boiler & pulp SF connection to this Bayes/AI-fandom, notice that the entitative figuration of their AI discourse remains far more beholden to sfnal conceits than to software practice, and notice how often sooper-genius Yudkowsky's highest-profile formulations have depended on frankly facile, ungainly, high-school-English-level appropriations from popular fiction like Flowers for Algernon or Harry Potter.

  10. > I believe in the Sequences,
    > Many Worlds, too slow science,
    > the solution of metaethics,
    > the cryopreservation of the brain,
    > and sanity everlasting.
    > Phyg.

    What does "phyg" mean? Is it a flying pig?
    http://aethermod.wikia.com/wiki/Phyg
    Is it the Greek word for "shun" or "flee"?
    http://phthiraptera.info/content/phyg-o
    (or "flight"; phyg- = Latin fug- ).

    Well, maybe. But here's another explanation:

    http://lesswrong.com/lw/bql/our_phyg_is_not_exclusive_enough/
    -----------------
    Thanks to people not wanting certain words google-associated with LW: Phyg

    . . .
    -----------------

    This was an embedded link to rot13.com, where
    "Phyg" decodes to "Cult".

    ;->

  11. >"Yes, of course it is ridiculous to believe that your frozen, glassified, HAMBURGERIZED BRAIN will be revived and sooper-enhanced and possibly immortalized by swarms of billions of robust reliably controllable and programmable self-replicating nanobots"

    Right when I read aloud the words "hamburgerized brain", I let out a bellyful of laughter. These transhumanoids are seriously out of touch with reality, and are seriously out of their depth. Funny thing is I used to believe in such nonsense too! AHAHHA! I am so glad I found this blog, to break out of my brief, yet notable and memorable, spell of stupidity.

  12. > Funny thing is I used to believe in such nonsense too!

    There are many similar things that have popped up in recent history
    (mid to late 20th century), some of them science-fiction-derived
    (or science-fiction-related) that you might well have stumbled
    into.

    The new book about Scientology, _Going Clear: Scientology, Hollywood,
    and the Prison of Belief_ by Lawrence Wright
    http://www.amazon.com/Going-Clear-Scientology-Hollywood-Prison/dp/0307700666/
    is quite a jaw-dropper and well worth the $20.
    It's also a professional job, and well-written -- the guy's
    a staff writer for _The New Yorker_.
    http://www.lawrencewright.com/

    There's also a book about the Ayn Rand movement you might want to
    take a look at: _The Ayn Rand Cult_ by Jeff Walker
    http://www.amazon.com/The-Rand-Cult-Jeff-Walker/dp/0812693906/
    (Don't be dismayed by the three-star review average; books
    like this get pile-on pans by contemporary Randroids.)

    Speaking of science fiction, I had heard before that Keith Raniere's
    "large-group awareness training" outfit (don't call it a cult
    or they'll sue!), NXIVM, had been inspired (Raniere claimed)
    by Hari Seldon's "psychohistory" in Isaac Asimov's
    _Foundation_ trilogy
    ( http://www.rickross.com/reference/esp/esp32.html ).

    But I didn't know until I saw it in _Going Clear_ (which I'm
    currently reading) that the Aum Shinrikyo cult had also
    derived some of its ideas from the _Foundation_ books:

    "In March 1995, adherents of a Japanese movement called _Aum
    Shinrikyo_ ("Supreme Truth") attacked five subway trains in
    Tokyo with sarin gas. Twelve commuters died; thousands more
    might have if the gas had been more highly refined. It
    was later discovered that this was just one of at least
    fourteen attacks the group staged in order to set off a
    chain of events intended to result in an apocalyptic world
    war. The leader of the group, Shoko Asahara, a blind yoga
    instructor, combined the tenets of Buddhism with notions
    drawn from Isaac Asimov's _Foundation_ trilogy, which depicts
    a secretive group of scientists who are preparing to take
    over the world. Many of Asahara's followers were indeed
    scientists and engineers from top Japanese universities
    who were enchanted by this scheme. They purchased military
    hardware in the former Soviet Union and sought to acquire
    nuclear warheads. When that failed, they bought a sheep farm
    in Western Australia that happened to be atop a rich
    vein of uranium. They cultivated chemical and biological
    weapons, such as anthrax, Ebola virus, cyanide, and VX gas.
    They had used such agents in previous attacks, but failed
    to create the kind of mass slaughter they hoped would bring
    on civil war and nuclear Armageddon. . . A spokesperson
    for the Church of Scientology in New Zealand explained that
    the source of Aum Shinrikyo's crimes was the practice of
    psychiatry in Japan."

    -- _Going Clear_, pp. 240-241

    See also "The Cult at the End of the World"
    Wired Magazine, July 4, 1996
    By David E. Kaplan and Andrew Marshall
    http://icsahome.com/infoserv_respond/by_group.asp?ID=50045

  13. the Aum Shinrikyo cult had also
    derived some of its ideas from the _Foundation_ books


    For heaven's sake don't let Paul Krugman hear about that!

  14. Science fiction has fingerprints in a lot of cults.

    I just took a peek at Less Wrong and I see major parallels with the Scientology objective to apply Dianetics to become "clear". What these guys are doing is outsourcing their thinking to a charismatic authority figure, which is the oldest and biggest logical fallacy in the book.

  15. > Yudkowsky's own rigid utilitarianism (now promulgated at
    > LessWrong) absolutely dominates discussions about AI in
    > on-line >Hist circles. Ben Goertzel himself doesn't buy it,
    > but he's been very very careful indeed to pussyfoot his
    > gentle demurrals so as not to inflame Yudkowsky and his
    > acolytes (and not always successfully, either).

    Speaking of Ben Goertzel:

    http://lesswrong.com/lw/aw7/muehlhausergoertzel_dialogue_part_1/
    -----------------------
    Muehlhauser-Goertzel Dialogue, Part 1
    16 March 2012 05:12PM

    . . .

    [T]here’s no formal mathematical reason to think that
    “technical rationality” is a good approach in real-world situations;
    and “technical rationality” has no practical track record to
    speak of. And ordinary, semi-formal rationality itself seems to
    have some serious limitations of power and scope. . .

    [A]t this stage -- certainly, anyone who has supreme confidence that
    technical rationality is going to help humanity achieve its
    goals better, is being rather IRRATIONAL ;-) ….

    In this vein, I’ve followed the emergence of the Less Wrong community
    with some amusement and interest. One ironic thing I’ve noticed
    about this community of people intensely concerned with improving their
    personal rationality is: by and large, these people are already
    hyper-developed in the area of rationality, but underdeveloped
    in other ways! Think about it -- who is the prototypical Less Wrong
    meetup participant? It’s a person who’s very rational already,
    relative to nearly all other humans -- but relatively lacking in
    other skills like intuitively and empathically understanding other
    people. But instead of focusing on improving their empathy and
    social intuition (things they really aren’t good at, relative to
    most humans), this person is focusing on fine-tuning their rationality
    more and more, via reprogramming their brains to more naturally
    use “technical rationality” tools! This seems a bit imbalanced. . .

    To me it’s all about balance. . . Don’t let your thoughts be clouded by
    your emotions; but don’t be a feeling-less automaton, don’t make
    judgments that are narrowly rational but fundamentally unwise.
    As Ben Franklin said, “Moderation in all things, including moderation.”

    . . .

  16. About the hypothetical uber-intelligence that wants to tile the
    cosmos with molecular Mickey Mouses. . . [Y]ou don’t have any rigorous
    argument to back up the idea that a system like you posit is possible
    in the real-world, either! And SIAI has staff who, unlike me,
    are paid full-time to write and philosophize … and they haven’t
    come up with a rigorous argument in favor of the possibility of
    such a system, either. Although they have talked about it a lot,
    though usually in the context of paperclips rather than Mickey Mouses. . .

    About my blog post on “The Singularity Institute’s Scary Idea” --
    yes, that still reflects my basic opinion. After I wrote that blog post,
    Michael Anissimov -- a long-time SIAI staffer and zealot whom I
    like and respect greatly -- told me he was going to write up and
    show me a systematic, rigorous argument as to why “an AGI not built
    based on a rigorous theory of Friendliness is almost certain to
    kill all humans” (the proposition I called “SIAI’s Scary Idea”).
    But he hasn’t followed through on that yet -- and neither has
    Eliezer or anyone associated with SIAI. . .

    But I find it rather ironic when people make a great noise about
    their dedication to rationality, but then also make huge grand
    important statements about the future of humanity, with great
    confidence and oomph, that are not really backed up by any rational
    argumentation. This ironic behavior on the part of Eliezer,
    Michael Anissimov and other SIAI principals doesn’t really bother
    me, as I like and respect them and they are friendly to me, and
    we’ve simply “agreed to disagree” on these matters for the time
    being. But the reason I wrote that blog post is because my own
    blog posts about AGI were being trolled by SIAI zealots
    (not the principals, I hasten to note) leaving nasty comments to the
    effect of “SIAI has proved that if OpenCog achieves human level AGI,
    it will kill all humans.” Not only has SIAI not proved any such
    thing, they have not even made a clear rational argument! . . .

    I recall when. . . Anna Salamon guest lectured in the class on
    Singularity Studies that my father and I were teaching at
    Rutgers University in 2010. Anna made the statement, to the students,
    that. . . “If a superhuman AGI is created without being carefully
    based on an explicit Friendliness theory, it is ALMOST SURE
    to destroy humanity.” (i.e., what I now call SIAI’s Scary Idea)

    I then asked her. . . if she could give any argument to back up the idea.

    She gave the familiar SIAI argument that, if one picks a mind at random
    from “mind space”, the odds that it will be Friendly to humans
    are effectively zero. . .

    I had pretty much the same exact argument with SIAI advocates
    Tom McCabe and Michael Anissimov on different occasions; and also,
    years before, with Eliezer Yudkowsky and Michael Vassar -- and
    before that, with (former SIAI Executive Director) Tyler Emerson.
    Over all these years, the SIAI community maintains the Scary Idea
    in its collective mind, and also maintains a great devotion
    to the idea of rationality, but yet fails to produce anything
    resembling a rational argument for the Scary Idea -- instead
    repetitiously trotting out irrelevant statements about random minds!! . . .
    [And] backing off when challenged into a platitudinous position
    equivalent to “there’s a non-zero risk … better safe than sorry...”,
    is not my idea of an intellectually honest way to do things.

    Why does this particular point get on my nerves? Because I don’t
    like SIAI advocates telling people that I, personally, am on a
    R&D course where if I succeed I am almost certain to destroy
    humanity!!! That frustrates me. . .
    -----------------------


    ;->

  17. There's an even larger set of professionals working on vaccinations who are being told by an even larger group of folks that they're destroying humanity for profit. Some of them are also in enlightenment cults styled after Scientology. Life is too short to spend much of it being annoyed by how these people think of you. This kind of nuttery is like the residual noise left behind by the big bang: it's constant, and it's not going away on any timescale you can imagine.

  18. > This kind of nuttery is like the residual noise left behind by
    > the big bang, it's constant and it's not going away on any
    > timescale you can imagine.

    No, I'm afraid it's not going to go away. There are always (a few)
    people being born who are ready to step into the role of a guru
    who is going to Show Us The Way, and (rather more) people
    being born who are ready to become a follower of somebody
    who looks like he (usually a "he", though there are exceptions --
    Mary Baker Eddy, Helena Blavatsky, Aimee Semple McPherson)
    can (for a price) provide their lives with meaning (usually
    a meaning of no less than Cosmic Significance).

    That doesn't mean the phenomenon isn't worth public exposure.
    Far from it! Though those doing the exposing can often
    themselves pay a terrible price. It boggles the mind that
    something like the following could happen in the United States,
    in the 1970s:

    (From _Going Clear_, p. 117)

    "Paulette Cooper was studying comparative religion for a summer
    at Harvard in the late 1960s when she became interested in
    Scientology, which was gaining attention. 'A friend came to
    me and said he had joined Scientology and discovered he was
    Jesus Christ,' she recalled. She decided to go undercover to
    see what the church was about. 'I didn't like what I saw,'
    she said. The Scientologists she encountered seemed to be in
    a kind of trance. When she looked into the claims that the
    church was making, she found many of them false or impossible
    to substantiate. 'I lost my parents to Auschwitz,' Cooper said,
    explaining her motivation in deciding to write about
    Scientology at a time when there had been very little published
    and those who criticized the church came under concentrated
    legal and personal attacks. . . Cooper published her first
    article in _Queen_, a British magazine, in 1970. 'I got
    death threats,' she said. The church filed suit against her.
    She refused to be silent. 'I thought if, in the nineteen-thirties
    people had been more outspoken, maybe my parents would
    have lived.' The following year, Cooper published a book,
    _The Scandal of Scientology_. . .

    After[wards], Cooper's life turned into a nightmare. She was
    followed; her phone was tapped; she was sued nineteen times.
    Her name and telephone number were written on the stalls of
    public men's rooms. One day, when Cooper was out, her
    cousin, who was staying in her New York apartment, opened
    the door for a delivery from a florist. The deliveryman took
    a gun from the bouquet, put it to her temple, and pulled
    the trigger. When the gun didn't fire, he attempted to
    strangle her. Cooper's cousin screamed and the assailant
    fled. Cooper then moved to an apartment building with a
    doorman, but soon after that her three hundred neighbors
    received letters saying that she was a prostitute with venereal
    disease who molested children. . . Cooper was charged
    with mailing bomb threats to the Church of Scientology. . .
    In May, 1973, Cooper was indicted by the U.S. Attorney's
    office for mailing the threats and then lying about it
    before the grand jury. . ."

  19. p. 140

    "Very early one morning in July 1977, the FBI, having been
    tipped off about [another Scientology operation], carried
    out raids on Scientology offices in Los Angeles and
    Washington, DC, carting off nearly fifty thousand documents.
    One of the files was titled 'Operation Freakout.'
    It concerned the treatment of Paulette Cooper, the journalist
    who had published an expose of Scientology. . . six years
    earlier.

    After having been indicted on perjury and making bomb threats
    against Scientology, Cooper had gone into a deep depression.
    She stopped eating. At one point, she weighed just
    eighty-three pounds. She considered suicide. Finally,
    she persuaded a doctor to give her sodium pentothal, or
    'truth serum,' and question her under the anaesthesia.
    The government was sufficiently impressed that the prosecutor
    dropped the case against her, but her reputation was
    ruined, she was broke, and her health was uncertain."

    And more recently:

    http://www.nytimes.com/2013/01/03/books/scientology-fascinates-the-author-lawrence-wright.html
    -----------------
    [Lawrence Wright's] new book, “Going Clear: Scientology, Hollywood, & the
    Prison of Belief” (Knopf) is about the famously litigious Church of Scientology,
    and he said he has received innumerable threatening letters from lawyers
    representing the church or some of the celebrities who belong to it.
    (Transworld, Mr. Wright’s British publisher, recently canceled its plans
    to publish “Going Clear,” though a spokeswoman insisted that the decision
    was not made in response to threats from the church.)
    -----------------

  20. That doesn't mean the phenomenon isn't worth public exposure.

    That's for sure. It is important to expose even the wackier Robot Cultists to the extent that

    [1] they are saying things that certain elite-incumbents like to hear, however ridiculous on the merits -- e.g., skim-scam tech celebrity CEOs looking to be cast as the protagonists of history, petrochemical CEOs looking for profitable geo-engineering rationales rather than regulatory interventions that impact their bottom lines, corporate-militarists on the lookout for existential-threat techno-terror frames that justify big-budget boondoggles -- the example of the belligerent neocon militarists and macroeconomically illiterate market ideologues should be ever before us in recalling this;

    [2] they are saying things that in their extremity actually expose the underlying assumptions, aspirations, and pathologies of more mainstream and prevalent scientism, evo-psycho/evo-devo reductionism, eugenic "optimal" health norms, techno-fetishism, techno-triumphalism, unsustainable consumption, digi-utopianism, exploitative fraudulent global developmentalism in neoliberal discourses and practices;

    [3] they are doing real damage to real people in real time in organizational and media contexts by mobilizing guru-wannabe, pseudo-expertise, True Believer dynamics at whatever scale.

  21. These suckers aren't born; they're victims of circumstance. Almost everyone is susceptible at some point. The two biggest predictors of joining a cult are the death of someone really close to you or a recent divorce. It's one of the reasons why Scientologists comb over disaster areas trying to recruit people, like they did in Haiti.

    I was in a sales cult when I was 18, shortly after my dad died. It was easy to get sucked in but I came to my senses rather suddenly after two weeks. I rang a friend from a payphone and he came and picked me up and got me out of there.

  22. Oh, and I agree they should be subjected to ridicule, but it's best not to let them rile you too much.

  23. > These suckers aren't born they're victims of circumstance.
    > Almost everyone is susceptible at some point. The two
    > biggest predictors of joining a cult are death of someone
    > really close to you or a recent divorce.

    From _Dream Catcher: A Memoir_ by Margaret A. Salinger
    (daughter of author J. D. Salinger)
    (Washington Square Press, 2000)
    http://www.amazon.com/gp/product/0671042823/

    The existential state of the typical person who, upon
    encountering a cult, is likely to become a follower
    reads like a description of most of my father's
    characters, and indeed, of my father himself.
    Many studies of cult phenomena have found that the
    appeal of the cult depends "largely on the weakness
    and vulnerability that all of us feel during key
    stress periods in life. At the time of recruitment,
    the person is often mildly depressed, in transition,
    and feeling somewhat alienated." [Robert W. Dellinger,
    _Cults and Kids_] One study, in particular, of
    those who become involved in cults, speaks directly
    to the vulnerability of my father and his characters
    who "just got out": "Leaving any restricted
    community can pose problems -- leaving the Army for
    civilian life is hard, too . . . many suffered from
    depression . . . loneliness, anomie [Margaret Thaler
    Singer, "Coming out of the Cults," _Psychology Today_,
    January 1979], or what can be referred to as
    "future void." They're standing at the edge, as Holden
    said, of "some crazy cliff," looking for a catcher. . .
    Many of those who join cults find "close relationships
    with like-minded others" [A study conducted by the
    Jewish Community Relations Committee of Philadelphia
    asked former cult members to list their reasons
    for joining. The committee found that, in order
    of relative importance, the number one reason was
    loneliness and the need for friendship. "More than
    any other factor, the desire for uncomplicated
    warmth and acceptance . . . leads people into
    cults."] . . .


    I copied more from _Dream Catcher_ into Dale's blog
    in the comment thread at
    http://amormundi.blogspot.com/2008/04/my-defamatory-utterances-against.html

  24. http://amormundi.blogspot.com/2008/04/my-defamatory-utterances-against.html

    > Obviously I wouldn't expect Michael [Anissimov] to be pleased
    > by these observations, but it is interesting that -- quite
    > true to form for a cultist -- he immediately identifies the
    > critique as nothing but defamation, he claims I am calling
    > particular people names when I am clearly pointing out reasons
    > among many why people join cults of which I think transhumanism
    > is one (I call it a Robot Cult, after all), he calls it
    > hate speech, ad hominem, libelous and so on, quasi-legal
    > insinuations Giulio Prisco also takes up in his pile-on post.
    > I must say, transhumanist muckety-mucks do e-mail me weaselly
    > little insinuations about suing me all the time, by the way. . .
    >
    > Michael goes on to claim that reading my posts for him is like
    > a person of color reading the fulminations of a white supremacist
    > or a Jew reading an antisemitic screed. It is in moments like
    > this when you get a glimpse into the fully crazy place transhumanist
    > sub(cult)ural "warriors" have found their way to in their
    > substitution of an identity movement organized by investment in
    > an idiosyncratic construal of "technology" and fantasy of "the future"
    > for actually serious deliberation about technodevelopmental topics.
    >
    > That is to say, in this very response Michael clearly exemplifies
    > the True Believer Groupthink irrationality I attributed to
    > transhumanism and which he is taking such exception to in the
    > first place. . .

    From a blog post by a contributor to LessWrong:

    http://kruel.co/2012/07/31/is-criticism-of-siailw-a-result-of-hostility/
    ---------------------
    Is criticism of the Singularity Institute a result of hostility?
    2012-07-31

    First of all I want to say something about the recent use of
    the word “cult” with respect to the Singularity Institute and
    LessWrong. I don’t think that they are a cult. . . [However, t]here
    are very good reasons to analyze them critically and start playing
    hardball. . .

    People have to realize that to critically examine the output
    of that community is very important due to the nature and scale
    of what they are trying [or at least **claiming** to be trying!]
    to achieve.

    Even people with comparatively modest goals like trying to
    become the president of the United States of America should
    face and expect a constant and critical analysis of everything
    they are doing.

    Which is why I am kind of surprised how often people protest
    any kind of fierce criticism that community is facing or find fault
    with the alleged “hostility” of some of its critiques. Excuse me?
    They are asking for money to implement a mechanism that will change
    the nature of the whole universe. . .

    Last but not least one should always be wary of a group of people
    with strong beliefs about the possibility of doom as the result
    of the actions of another group of people (in this case the
    culprits being AI and computer scientists). Even more so if it
    is a group who believes that the fate of an intergalactic civilization
    depends on their actions.

    Those beliefs are strong incentives. The history of humanity is rich
    with examples where it took much less than that to cause people
    to take incredibl[y] stupid actions.
    ---------------------

    Indeed.

  25. Cult is a funny word as it gets used in different ways, and the nature of cults themselves morphs over time. Less Than Wrong isn't a cult in the classic sense, but few cults are. And there are lots of different kinds of cults. If you want to get really specific it can be described as an Enlightenment cult, which is way down on the malignancy scale. The internet has fostered these as it makes it easier to maintain a diffuse network (traditional cults relied on concentrating people and isolating them physically from the outside world). Enlightenment cults on the internet form self-reinforcing intellectual ghettos and basically turn people into assholes with an inflated sense of their own importance. The presence of a charismatic leader who makes grand promises about intellectual superiority and increased longevity, and who is aggrandised in return, is a key element. The cult exists to further the leader's own ego needs to the detriment of his followers, who are often disciplined and mistreated if they stray from the dictates of the leader (or sometimes the leader just gets off on cruelty).

    I think the shoe fits in this case.

  26. > > the Aum Shinrikyo cult had also derived some of its
    > > ideas from [Isaac Asimov's] _Foundation_ books
    >
    > For heaven's sake don't let Paul Krugman hear about that!

    For Heaven's Gate, did you say? ;->

    Oh, and Newt Gingrich, too:
    http://hnn.us/articles/newt-gingrich-galactic-historian
    (via David Brin at
    https://plus.google.com/116665417191671711571/posts/ehkcKcdbZns )

    "Two thousand years ago Cicero observed that to be ignorant of
    history was to remain always a child. To which we might add a
    Gingrich corollary: to confuse science fiction with reality
    is to remain always a child."

  27. Anonymous, 4:37 AM

    White people are better than you :)
