Monday, October 27, 2014

Robocultic Kack Fight

It would seem that some Robot Cultists are mad at Elon Musk, too, for Summoning the Demon.

Although Musk is diverting plenty of money and attention to things robocultic, "Jesuscopter Crudslinger" declares Musk to be tantamount to the Taliban in the transhumanoid tabloid h+ magazine because Musk fails to love the Robot God with his whole heart. "[M]ind uploading and cyborgization seem almost inevitable, once you ponder them in a rational and open-minded way," insists rational and open-minded Jesuscopter Crudslinger. (It would appear that Mr. Crudslinger is none other than Ben Goertzel, about whom I've written before in Amor Mundi fan fave piece Nauru Needs Futurologists! and also in the self-explanatorily titled Robot Cultist Declares Need for Holiday Counting Chickens Before They Are Hatched.) "And the odds of AI systems vastly exceeding human beings in general intelligence and overall capability, seem very close to 100%." That "seem" is a nice touch, I must say. Very moderate! Very Serious.

'slinger goes on to confide, "Elon" -- with whom Jesuscopter Crudslinger is on a first name basis, naturally -- "I’m sorry if you find AGI, mind uploading and cyborgization demonic. But they’re going to happen anyway, no matter what you, MIRI, the Taliban or the Amish think about it. And no, humanity won’t be able to 'control' it, any more than we have been able to control computers [or] the Internet." Crudslinger says he is sorry, but I doubt it. He doesn't sound sorry. This is, I'm afraid, one of many doubts I am having. I find I'm reminded a little bit of...

[embedded clip: "God Warrior"]

These things are going to happen! That is to say, not only is it not implausible for you to expect Robot Gods to End History and for you to scan your "info-soul" from your cryo-hamburgerized brain to "upload" it as a cyber-angel that will live forever in Holodeck Heaven, why, all this is obvious! inevitable! unstoppable! To control this irresistible tide of techno-transcendence is so laughable that we must put the very word in scare-quotes, why, control isn't even a real word when you ponder it in a rational and open-minded way!

I come from cyberspace home of Mind... the changes in accelerating change are accelerating... the disrupters are disrupting the looming wall of finitude... no death! no taxes! no girl cooties in the clubhouse... the futurological faithful are achieving escape velocity... the stale, pale, males of the Robot Cult hail the Predator Gods of techno-capital.... they are buying The Future one gizmo at a time... the toypile will reach to infinity and beyond...

-- h/t Jim Fehlinger 

8 comments:

  1. > [I]t would appear that Mr. Crudslinger is none other than Ben Goertzel. . .

    Or maybe Paris Hilton!
    http://hplusmagazine.com/2014/10/27/paris-hilton-existential-risks-artificial-intelligence/

    ;->

  2. > It would appear that Mr. Crudslinger is none other than Ben Goertzel,
    > about whom I've written before in Amor Mundi fan fave. . .
    > Robot Cultist Declares Need for Holiday Counting Chickens Before They Are Hatched
    ( http://amormundi.blogspot.com/2012/02/robot-cultist-declares-need-for-holiday.html )

    https://www.singularityweblog.com/do-we-need-to-have-a-future-day/
    ------------------
    Do We Need to Have a “Future Day”?
    by Nikki Olson
    September 28, 2011

    . . .

    “In thinking about how to get people interested in and excited
    about Transhumanist ideas explicitly, one idea I thought about
    was to create a holiday for the future. . ." . . .

    The remarks above were made by Ben Goertzel during the question
    and answer period of last week’s H+ Leadership Summit. . .

    Author and polymath Howard Bloom, who positively influenced
    the musical careers of Michael Jackson, Prince, John Cougar Mellencamp,
    Kiss, Queen, Bette Midler, Billy Joel, Diana Ross, Simon & Garfunkel,
    and many others, responded enthusiastically to Goertzel’s suggestion,
    calling the idea ‘fabulous’. . .
    ====

    Celebrities for the Future!

    Celebrities are Absolutely Fabulous!

    http://www.imdb.com/title/tt0504665/quotes
    ------------------
    Eddie: Everybody's there, everybody! Big names, you know.
    Chanel, Dior, Lagerfeld, Givenchy, Gaultier, darling.
    Names, names, names. Every rich bitch from New York is in
    there. Hockwenden, Ruttenstein, Vandebilt, Rothschild,
    Hookenfookenberger, Dachshund, Rottweiler, sweetie.

    Patsy: A row of skeletons with Jackie O hairdos.

    Eddie: Harper's, Tatler, English "Vogue", American "Vogue",
    French "Vogue", bloody Aby-bloody-ssinian bloody "Vogue",
    darling. Jeff Banks and Selina Scott couldn't even get a ticket,
    darling.
    ====

  3. > God Warrior

    I guess the moral here must be: don't be dork-sided by those AI gorgyles!

    ;->

  4. > It would seem that some Robot Cultists are mad at Elon Musk. . .

    The backlash continues:

    http://ieet.org/index.php/IEET/more/notaro20141029
    -------------
    The onset of transhumanism. . . may rally many people against
    technological innovations. . .
    ====

    Back in 2004, one Michael Wilson had materialized as an insider
    in SIAI [the "Singularity Institute for Artificial Intelligence",
    now called MIRI, the "Machine Intelligence Research Institute"]
    circles. And during the same era he was posting
    rather frequently on the S[hock]L[evel]4 mailing list
    [an Eliezer Yudkowsky-owned forum]. At one point,
    he made a post in which he castigated himself (and
    this didn't seem tongue-in-cheek to me in the context, though
    in most contexts such claims would clearly be so) for
    having "almost destroyed the world last Christmas" as a
    result of his own attempts to "code an AI", but now that he
    had seen the light (as a result of SIAI's propaganda) he
    would certainly be more cautious in the future. (Of course, no
    one on the list seemed to find his remarks particularly
    outrageous -- he was more-or-less right in tune
    with the Zeitgeist there). He also wrote:

    "To my knowledge Eliezer Yudkowsky is the only person that has tackled
    these issues head on and actually made progress in producing engineering
    solutions (I've done some very limited original work on low-level
    Friendliness structure). Note that Friendliness is a class of advanced
    cognitive engineering; not science, not philosophy. We still don't know
    that these problems are actually solvable, but recent progress has been
    encouraging and we literally have nothing to lose by trying.
    I sincerely hope that we can solve these problems, stop Ben Goertzel
    and his army of evil clones (I mean emergence-advocating AI researchers :) and
    engineer the apotheosis. The universe doesn't care about hope though, so I will
    spend the rest of my life doing everything I can to make Friendly AI a
    reality. Once you /see/, once you have even an inkling of understanding
    the issues involved, you realise that one way or another these are the
    Final Days of the human era and if you want yourself or anything else you
    care about to survive you'd better get off your ass and start helping.
    The only escapes from the inexorable logic of the Singularity are death,
    insanity and transcendence."
    ("Phase Changes in the Evolution of Complexity"
    http://www.sl4.org/archive//0404/8401.html
    http://sl4.org/wiki/Starglider )

    The smiley in the above did not reassure me.

  5. I wrote an email to somebody shortly thereafter containing:

    -------------
    The "Singularitarian" circus may just be getting started!

    But seriously -- if we extrapolate Wilson's sort of hysteria
    to the worst imaginable cases (something the
    Yudkowskian Singularitarians seem fond of doing)
    then we might expect that:

    1. The Yudkowskian Singularitarian Party will actually
    morph into a bastion of anti-technology. The approaches
    to AI that -- IMH(non-expert)O and in other folks'
    not-so-H(rather-more-expert)O -- are likeliest to succeed
    (evolutionary, selectionist, emergent) are frantically
    demonized as too dangerous to pursue. The most
    **plausible** approaches to AI are to be regulated
    the way plutonium and anthrax are regulated today, or
    at least shouted down among politically-correct
    Singularitarians. IOW, the Yudkowskian Party arrogates
    to itself a role as a sort of proto-Turing Police out
    of William Gibson. Move over, Bill Joy! It's very
    Vingean too, for that matter -- sounds like the first book
    in the "Realtime" trilogy (_The Peace War_).

    2. The **approved** approach to AI -- a Yudkowsky-sanctioned
    "guaranteed Friendly", "socially responsible" framework
    (that seems to be based, in so far as it's coherent at all,
    on a Good-Old-Fashioned mechanistic AI faith in
    "goals" -- as if we were programming an expert system
    in OPS5), which some (more sophisticated?) folks have already
    given up on as a dead end and waste of time, is to suck up all
    of the money and brainpower that the SL4 "attractor" can
    pull in -- for the sake of the human race's safe
    negotiation of the Singularity.

    3. Inevitably, there will be heretics and schisms in the
    Church of the Singularity. The Pope of Friendliness will
    not yield his throne willingly, and the emergence of someone
    (Michael Wilson?) bright enough and crazy enough
    to become a plausible successor will **undoubtedly**
    result in quarrels over the technical fine points of
    Friendliness that will escalate into religious wars.

    4. In the **absolute worst case** scenario I can imagine,
    a genuine lunatic FAI-ite will take up the Unabomber's
    tactics, sending packages like the one David Gelernter
    got in the mail to folks deemed "dangerous" according
    to (lack of) adherence to the principles and politics of FAI
    (whatever they happen to be according to the reigning
    Pope of the moment).
    ====

    Now here's a genu-wine existential risk -- the propensity of
    folks to fall for self-styled Messiahs:
    http://justnotsaid.blogspot.com/2014/10/sociopath-alert-john-roger-hinkins.html

    Watch out for those used-car salesfolks! ;->

  6. http://futurisms.thenewatlantis.com/2014/10/our-new-book-on-transhumanism-eclipse.html
    -------------------
    Wednesday, October 29, 2014
    Our new book on transhumanism: Eclipse of Man

    Since we launched The New Atlantis, questions about human enhancement,
    artificial intelligence, and the future of humanity have been a core
    part of our work. And no one has written more intelligently and
    perceptively about the moral and political aspects of these questions
    than Charles T. Rubin. . . one of our colleagues here on Futurisms.

    So we are delighted to have just published Charlie's new book about
    transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress. . .
    ====

    http://www.thenewatlantis.com/publications/eclipse-of-man
    -------------------
    Human Extinction and the Meaning of Progress
    Charles T. Rubin

    Tomorrow has never looked better. Breakthroughs in fields like
    genetic engineering and nanotechnology promise to give us
    unprecedented power to redesign our bodies and our world. Futurists
    and activists tell us that we are drawing ever closer to a day
    when we will be as smart as computers. . .
    ====

    Uh....?!

    However, eClips sounds like a great name for a robot
    hair salon. ;->

  7. There's a lot to like at the Futurisms site -- but it bugs me that they too often seem to accept futurology as actually predictive of a future they would abhor, rather than as a symptom of a reactionary take on the present; they accept too many of the same terms the futurologists do. Since they indulge a bit of their own reactionary politics on questions of choice and on harm-reduction policy models more generally, I think there is, weirdly, a bit of an alignment in assumptions about the futurological, belied by their differences in assessing outcomes once those terms are conceded. Again, I think there is a lot of useful and incisive critique at the Futurisms site, and I get a lot out of reading it. But when it comes right down to it, their critical vantage, valuable as it is, doesn't seem quite the same as mine.

    eClips sounds like it would probably be very enhancing for dynamic hair management. Somebody contact Natasha Vita-More!

  8. It would seem that some Robot Cultists are mad at Elon Musk, too, ... ifighting.blogspot.com
