Comments on amor mundi: "Robocultic Kack Fight" -- Dale Carrico (8 comments)

It would seem that some Robot Cultists are mad with Elon Musk, too, ... http://ifighting.blogspot.com

-- Norbert, 2014-12-30 23:03

There's a lot to like at the Futurisms site -- but it bugs me that they seem too often to accept futurology as actually predictive of a future they would abhor, rather than as a symptom of a reactionary take on the present; they accept too many of the same terms the futurologists do. Since they indulge a bit of their own reactionary politics on questions of choice and on harm-reduction policy models more generally, I think there is, weirdly, a bit of an alignment in their assumptions about the futurological, belied by their differences in assessing outcomes once those terms are conceded. Again, I think there is a lot of useful and incisive critique at the Futurisms site. I get a lot out of reading it. But when it comes right down to it, their critical vantage is valuable but really doesn't seem quite the same as mine.

eClips sounds like it would probably be very enhancing for dynamic hair management.
Somebody contact Natasha Vita-More!

-- Dale Carrico, 2014-10-30 13:45

http://futurisms.thenewatlantis.com/2014/10/our-new-book-on-transhumanism-eclipse.html
-------------------
Wednesday, October 29, 2014
Our new book on transhumanism: Eclipse of Man

Since we launched The New Atlantis, questions about human enhancement, artificial intelligence, and the future of humanity have been a core part of our work. And no one has written more intelligently and perceptively about the moral and political aspects of these questions than Charles T. Rubin. . . one of our colleagues here on Futurisms.

So we are delighted to have just published Charlie's new book about transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress. . .
====

http://www.thenewatlantis.com/publications/eclipse-of-man
-------------------
Human Extinction and the Meaning of Progress
Charles T. Rubin

Tomorrow has never looked better. Breakthroughs in fields like genetic engineering and nanotechnology promise to give us unprecedented power to redesign our bodies and our world. Futurists and activists tell us that we are drawing ever closer to a day when we will be as smart as computers. . .
====

Uh....?!

However, eClips sounds like a great name for a robot hair salon.
;->

-- jimf, 2014-10-30 06:57

I wrote an email to somebody shortly thereafter containing:

-------------
The "Singularitarian" circus may just be getting started!

But seriously -- if we extrapolate Wilson's sort of hysteria to the worst imaginable cases (something the Yudkowskian Singularitarians seem fond of doing), then we might expect that:

1. The Yudkowskian Singularitarian Party will actually morph into a bastion of anti-technology. The approaches to AI that -- IMHnon-expertO, and in other folks' not-soHrather-more-expertO -- are likeliest to succeed (evolutionary, selectionist, emergent) are frantically demonized as too dangerous to pursue. The most **plausible** approaches to AI are to be regulated the way plutonium and anthrax are regulated today, or at least shouted down among politically-correct Singularitarians. IOW, the Yudkowskian Party arrogates to itself a role as a sort of proto-Turing Police out of William Gibson. Move over, Bill Joy! It's very Vingean too, for that matter -- sounds like the first book in the "Realtime" trilogy (_The Peace War_).

2. The **approved** approach to AI -- a Yudkowsky-sanctioned "guaranteed Friendly", "socially responsible" framework (that seems to be based, in so far as it's coherent at all, on a Good-Old-Fashioned mechanistic AI faith in "goals" -- as if we were programming an expert system in OPS5), which some (more sophisticated?)
folks have already given up on as a dead end and waste of time, is to suck up all of the money and brainpower that the SL4 "attractor" can pull in -- for the sake of the human race's safe negotiation of the Singularity.

3. Inevitably, there will be heretics and schisms in the Church of the Singularity. The Pope of Friendliness will not yield his throne willingly, and the emergence of someone (Michael Wilson?) bright enough and crazy enough to become a plausible successor will **undoubtedly** result in quarrels over the technical fine points of Friendliness that will escalate into religious wars.

4. In the **absolute worst case** scenario I can imagine, a genuine lunatic FAI-ite will take up the Unabomber's tactics, sending packages like the one David Gelernter got in the mail to folks deemed "dangerous" according to (lack of) adherence to the principles and politics of FAI (whatever they happen to be according to the reigning Pope of the moment).
====

Now here's a genu-wine existential risk -- the propensity of folks to fall for self-styled Messiahs:
http://justnotsaid.blogspot.com/2014/10/sociopath-alert-john-roger-hinkins.html

Watch out for those used-car salesfolks! ;->

-- jimf, 2014-10-29 14:11

> It would seem that some Robot Cultists are mad with Elon Musk. . .

The backlash continues:

http://ieet.org/index.php/IEET/more/notaro20141029
-------------
The onset of transhumanism. . . may rally many people against technological innovations. .
.
====

Back in 2004, one Michael Wilson had materialized as an insider in SIAI [the "Singularity Institute for Artificial Intelligence", now called MIRI, the "Machine Intelligence Research Institute"] circles. And during the same era he was posting rather frequently on the S[hock]L[evel]4 mailing list [an Eliezer Yudkowsky-owned forum]. At one point, he made a post in which he castigated himself (and this didn't seem tongue-in-cheek to me in the context, though in most contexts such claims would clearly be so) for having "almost destroyed the world last Christmas" as a result of his own attempts to "code an AI", but now that he had seen the light (as a result of SIAI's propaganda) he would certainly be more cautious in the future. (Of course, no one on the list seemed to find his remarks particularly outrageous -- he was more-or-less in tune with the Zeitgeist there.) He also wrote:

"To my knowledge Eliezer Yudkowsky is the only person that has tackled these issues head on and actually made progress in producing engineering solutions (I've done some very limited original work on low-level Friendliness structure). Note that Friendliness is a class of advanced cognitive engineering; not science, not philosophy. We still don't know that these problems are actually solvable, but recent progress has been encouraging and we literally have nothing to lose by trying. I sincerely hope that we can solve these problems, stop Ben Goertzel and his army of evil clones (I mean emergence-advocating AI researchers :) and engineer the apotheosis. The universe doesn't care about hope though, so I will spend the rest of my life doing everything I can to make Friendly AI a reality.
Once you /see/, once you have even an inkling of understanding the issues involved, you realise that one way or another these are the Final Days of the human era and if you want yourself or anything else you care about to survive you'd better get off your ass and start helping. The only escapes from the inexorable logic of the Singularity are death, insanity and transcendence."
("Phase Changes in the Evolution of Complexity"
http://www.sl4.org/archive//0404/8401.html
http://sl4.org/wiki/Starglider )

The smiley in the above did not reassure me.

-- jimf, 2014-10-29 14:07

> God Warrior
I guess the moral here must be: don't be dork-sided by those AI gorgyles!

;->

-- jimf, 2014-10-29 04:30

> It would appear that Mr. Crudslinger is none other than Ben Goertzel,
> about whom I've written before in Amor Mundi fan fave. . .
> Robot Cultist Declares Need for Holiday Counting Chickens Before They Are Hatched
( http://amormundi.blogspot.com/2012/02/robot-cultist-declares-need-for-holiday.html )

https://www.singularityweblog.com/do-we-need-to-have-a-future-day/
------------------
Do We Need to Have a "Future Day"?
by Nikki Olson
September 28, 2011

. . .

"In thinking about how to get people interested in and excited about Transhumanist ideas explicitly, one idea I thought about was to create a holiday for the future. . ." . . .

The remarks above were made by Ben Goertzel during the question and answer period of last week's H+ Leadership Summit. . .

Author and polymath Howard Bloom, who positively influenced the musical careers of Michael Jackson, Prince, John Cougar Mellencamp, Kiss, Queen, Bette Midler, Billy Joel, Diana Ross, Simon & Garfunkel, and many others, responded enthusiastically to Goertzel's suggestion, calling the idea 'fabulous'. . .
====

Celebrities for the Future!

Celebrities are Absolutely Fabulous!

http://www.imdb.com/title/tt0504665/quotes
------------------
Eddie: Everybody's there, everybody! Big names, you know. Chanel, Dior, Lagerfeld, Givenchy, Gaultier, darling. Names, names, names. Every rich bitch from New York is in there.
Hockwenden, Ruttenstein, Vandebilt, Rothschild, Hookenfookenberger, Dachshund, Rottweiler, sweetie.

Patsy: A row of skeletons with Jackie O hairdos.

Eddie: Harper's, Tatler, English "Vogue", American "Vogue", French "Vogue", bloody Aby-bloody-ssinian bloody "Vogue", darling. Jeff Banks and Selina Scott couldn't even get a ticket, darling.
====

-- jimf, 2014-10-28 09:19

> [I]t would appear that Mr. Crudslinger is none other than Ben Goertzel. . .

Or maybe Paris Hilton!
http://hplusmagazine.com/2014/10/27/paris-hilton-existential-risks-artificial-intelligence/

;->

-- jimf, 2014-10-28 05:25