tag:blogger.com,1999:blog-5956838.post1006249270191086557..comments2023-11-22T01:14:54.298-08:00Comments on amor mundi: Robocultic Q & A With a Tech JournalistDale Carricohttp://www.blogger.com/profile/02811055279887722298noreply@blogger.comBlogger18125tag:blogger.com,1999:blog-5956838.post-85493547707497726602015-12-29T19:28:20.339-08:002015-12-29T19:28:20.339-08:00> [P]erhaps a modern Shakespeare would write th...> [P]erhaps a modern Shakespeare would write that<br />> something is rotten in the state of transhumanism. . .<br /><br />A friend of mine, off from work between the holidays<br />and with the rest of his family out of the house for<br />the duration, invited me over this past weekend to<br />binge-watch movies on his big HDTV. We started with<br />the recent SyFy _Childhood's End_ and ultimately<br />graduated to "serious" dramas (_The Master_, _The<br />Curious Case of Benjamin Button_, _Doubt_), but in<br />between he showed me three of his favorite Marvel<br />superhero movie adaptations -- the first installment<br />each of _Thor_, _Captain America_, and _The Avengers_.<br /><br />I hadn't seen any of these before -- I don't keep up<br />with comic-book movie adaptations. I enjoyed them well<br />enough, and my friend is a more-or-less sophisticated consumer<br />of these things (he's pushing 60; he's no 12-year-old).<br />But in the context of my past almost-20-years' exposure to the<br />on-line transhumanists, I now find this sort of entertainment<br />disturbing on several levels. 
The feeding of adolescent-male<br />narcissistic power fantasies (however perfumed with ostensible "altruistic"<br />motivations in the diegesis -- the interior story line),<br />the militarism, and the atmosphere of American exceptionalism<br />are certainly bothersome, but what I find most irritating<br />these days is my certain knowledge that **some** people, of<br />whatever age (physical or mental), absorb these fantasies as though<br />they constituted a real paradigm for "the future". All these<br />thoughts were hovering in the back of my mind even as I was still<br />appreciating the movies at a 12-year-old's level. Afterwards,<br />I mentioned these reservations to my friend, and he<br />acknowledged them rather perfunctorily, but I'm afraid he<br />doesn't "bellyfeel" them as much as I do at this stage<br />of my life. ;-><br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-63570246234089755142015-12-29T13:46:07.202-08:002015-12-29T13:46:07.202-08:00May I recommend another open letter?
It seems to ...May I recommend another <a href="http://amormundi.blogspot.com/2012/08/an-open-letter-to-robot-cultists.html" rel="nofollow">open letter</a>? <br />It seems to me that transhumanism is about fear and loathing of the aging, vulnerable, error-prone body and brain and functions primarily to rationalize plutocracy (gizmo fandom is freedom, technocratic elites know best). Anybody who thinks "transhumanism" came up with the idea that self-improvement is nice or thinks "transhumanists" have provided any original contributions to self-improvement as actual practice will find themselves improved by having their head examined.Dale Carricohttps://www.blogger.com/profile/02811055279887722298noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-80176845972390464922015-12-29T13:31:41.088-08:002015-12-29T13:31:41.088-08:00(via http://hplusmagazine.com/2015/12/29/29450/ )
...(via http://hplusmagazine.com/2015/12/29/29450/ )<br /><br />http://hiimpact.blogspot.com/2015/12/an-open-letter-to-transhumanist-movement.html<br />-------------<br />Hi-Impact<br />Musings and stuff from an armchair futurologist, sci-fi addict<br />and furry<br /><br />An Open Letter to the Transhumanist Movement<br />Tuesday, December 1, 2015<br /><br />Maybe it’s the casual ones that think themselves better than<br />everyone else just because they think they see the shape of<br />technology ahead of the curve. Or maybe it’s the Silicon Valley<br />tech types who unironically think that lower economic classes<br />aren’t deserving of the same rights given to them. Regardless<br />of exactly who it is, perhaps a modern Shakespeare would<br />write that something is rotten in the state of transhumanism. . .<br /><br />No doubt I’ve seen quite a few disturbing things from other<br />transhumanists; but I won’t go into a laundry list of them<br />because the main theme boils down to this: “I’m better than<br />everyone else, so screw everyone else”. . .<br /><br />I suspect a few factors that play into the current narrative<br />of “got mine, to heck with you” in transhumanism, but I’m not<br />going to finger point, not now. . . [Oh, what a disappointment ;-> ]<br /><br />[T]he core ideals of transhumanism and the extraneous baggage<br />it has acquired are fundamentally at odds with one another.<br /><br />Transhumanism was supposed to be about improving ourselves, to<br />become less like apes and more like angels if you will. But<br />now transhumanism has been transformed (dare I say hijacked?)<br />by people who revel in being the ape. People who, despite<br />their ideas on technology and existence, still practice the<br />power struggles and smug sense of self-superiority that’s been<br />with us since time immemorial. In other words: to them it’s<br />less like becoming better and more like being the same person<br />inside a computer. . 
.<br />====jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-24621012064907690142015-12-24T14:37:15.919-08:002015-12-24T14:37:15.919-08:00James Hughes has forgotten more about democratic p...James Hughes has forgotten more about democratic politics in his panic at the prospect of personal death than many self-identified lefties will ever know. In respect to the specific arguments to which you refer here, Hughes cooks the books fairly transparently in these surveys to get his desired results. He allows all sorts of kooky futurological neologisms like "upwinger" and "dynamist" into his political IDs and then treats them as "beyond left and right" even when they are demonstrably neoliberal, market fundamentalist, and corporatist-right -- thus making all sorts of actually reactionary factions vanish from being accounted as such. He also disregards all sorts of right-wing eugenic and Bell Curve white supremacist politics in making his calculations. (Given his own weakness for eugenic arguments this isn't exactly surprising.) I mean, sure Haldane and Sanger were eugenicist, but that doesn't mean the eugenic dimensions of their viewpoints were legibly left even if their avowed politics were overall, nor certainly does it mean that one can STILL be legibly left while holding such views given all that we now understand about them. 
Given that Hughes is presumably providing a sophisticated analysis of transhumanist political entailments in this very piece, it is interesting that he doesn't really go into questions of transhumanist subcultures as essentially gizmo-fashion-fandoms embedded in consumer lifestyle politics beholden to exploitative and unsustainable practices, he doesn't go into the susceptibility of techno-determinist or techno-autonomist understandings of history to engender anti-democratic acquiescence to elites and circumvention of democracy by technocrats, he doesn't question his own willingness to make common cause AS a transhumanist with right-wing transhumanists in what he fancies is a generalized "pro-technology" politics as if all technology is the same when that is obviously a mystification (most useful to incumbent elites, hence, again a reactionary politics), pretending the politics of technoscientific change inheres in "tech specs" rather than in the political struggles to ensure the costs, risks, and benefits of change are equitably distributed to all the stakeholders to change by their lights (denial of which, yet again, is reactionary). As a key figure in the original formulation and popularization of that term "technoprogressive" I am keen to point out that its use is hardly evidence that one is in the presence of a person who is technoscientifically-literate or legibly progressive -- over many years I've repeatedly learned that the hard way! These days, it's not even bad-faith "democratic transhumanists" who are making widest recourse to the technoprogressive term, but techbro venture capitalists rationalizing skim-and-scam operations, often declaring their facile frauds as effective altruism in the bargain -- primping for camera time with patently ridiculous talk of robocalypse and bitcoin rapture. 
Dale Carricohttps://www.blogger.com/profile/02811055279887722298noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-12822228853551667882015-12-23T18:43:34.091-08:002015-12-23T18:43:34.091-08:00(from THE POLITICS OF TRANSHUMANISM AND THE TECHNO...(from THE POLITICS OF TRANSHUMANISM AND THE TECHNO-MILLENNIAL<br />IMAGINATION, 1626–2030 by James J. Hughes, cont'd)<br /><br />In 2009 the libertarians and Singularitarians launched a campaign<br />to take over the World Transhumanist Association Board of Directors,<br />pushing out the Left in favor of allies like Milton Friedman’s<br />grandson and Seasteader leader Patri Friedman. Since then the<br />libertarians and Singularitarians, backed by Thiel’s philanthropy,<br />have secured extensive hegemony in the transhumanist community.<br />As the global capitalist system spiraled into the crisis in<br />which it remains, partly created by the speculation of hedge<br />fund managers like Thiel, the left-leaning majority of transhumanists<br />around the world have increasingly seen the contradiction between<br />the millennialist escapism of the Singularitarians and practical<br />concerns of ensuring that technological innovation is safe and<br />its benefits universally enjoyed. While the alliance of Left<br />and libertarian transhumanists held together until 2008 in the<br />belief that the new biopolitical alignments were as important<br />as the older alignments around political economy, the global<br />economic crisis has given new life to the technoprogressive<br />tendency, those who want to organize for a more egalitarian<br />world and transhumanist technologies, a project with a long<br />Enlightenment pedigree and distinctly millenarian possibilities. <br /><br />In surveys I conducted in 2003, 2005, and 2007 of the global<br />membership of the World Transhumanist Association, left-wing<br />transhumanists outnumbered conservative and libertarian<br />transhumanists 2-to-1 (Humanity+ 2008). 
By 2007 16 percent<br />of respondents specifically self-identified as “technoprogressive.” jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-35491916231290475882015-12-23T18:41:27.550-08:002015-12-23T18:41:27.550-08:00Also from RationalWiki, an account of alleged poli...Also from RationalWiki, an account of alleged political in-fighting<br />among the >Hists.<br /><br />Transhumanism wasn't always as furiously right-wing as it is<br />now [it wasn't? You could've fooled me!]. A similar colonisation<br />happened in 2008-2009, when the libertarians moved in and took<br />over from the more socialist types. From THE POLITICS OF TRANSHUMANISM<br />AND THE TECHNO-MILLENNIAL IMAGINATION, 1626–2030 by James J. Hughes<br />(a PDF I have here):<br /><br />The elective affinity between libertarian politics and Singularity<br />can be partly explained by the idea of technological inevitability.<br />Collective agency is not required to ensure the Singularity, and<br />human governments are too slow and stupid to avert the catastrophic<br />possibilities of superintelligence, if there are any. Only small<br />groups of computer scientists working to create the first<br />superintelligence with core “friendliness code” could have any<br />effect on deciding between catastrophe and millennium.<br /><br />This latter project, building a friendly AI, is the focus of<br />the largest Singularitarian organization, the Singularity Institute<br />for Artificial Intelligence (SIAI), headed by the autodidact<br />philosopher Eliezer Yudkowsky. In “Millennial Tendencies in Responses<br />to Apocalyptic Threats” (Hughes 2008), I parse Yudkowsky and the<br />SIAI as the “messianic” version of Singularitarianism, arguing<br />that their semi-monastic endeavor to build a literal deus ex machina<br />to protect humanity from the Terminator is a form of magical<br />thinking. 
The principal backer of the SIAI is the conservative<br />Christian transhumanist billionaire Peter Thiel. Like the<br />Extropians Thiel is an anarcho-capitalist envisioning a<br />stateless future and funder of the Seasteading Foundation,<br />which works to create independent floating city-states in<br />international waters. He also is the principal funder of<br />the Methuselah Foundation, which works on anti-aging research.<br />In 2011 and 2012 Thiel was the principal financier of the<br />SuperPAC backing libertarian Republican Ron Paul, and he<br />supports other conservative foundations and political<br />projects on the right.<br /><br />(continued)jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-39664123821524676142015-12-23T18:36:12.999-08:002015-12-23T18:36:12.999-08:00A bit of >Hist dirty laundry.
"EA" s...A bit of >Hist dirty laundry.<br /><br />"EA" stands for "Effective Altruism", and in transhumanist<br />circles in recent years, it's been co-opted<br />to mean donating money to Eliezer Yudkowsky's "Machine Intelligence<br />Research Institute" in order to prevent unFriendly superintelligence<br />from taking over the world. It must've seemed like a brilliant<br />fund-raising strategy when somebody (Luke Muehlhauser?) first came up with<br />it, but it blew up in their faces when Holden Karnofsky of<br />GiveWell gave SIAI/MIRI a "thumbs down".<br />[ http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ ]<br /><br />And (3 years later) IEET has piled on.<br /><br />(via<br />http://rationalwiki.org/wiki/Talk:LessWrong )<br /><br />http://ieet.org/index.php/IEET/print/10669<br />---------------<br />Effective Altruism has Five Serious Flaws - Avoid It -<br />Be a DIY Philanthropist Instead<br />Hank Pellissier<br />July 13, 2015<br /><br />In an earlier essay[*] I recommended the Effective Altruism (EA)<br />movement, the humanitarian crusade spearheaded by philosopher<br />Peter Singer.<br /><br />Today, I retract my support. . .<br /><br />FLAW #4: EA’s Weird, Wrong Alliance with MIRI<br />(Machine Intelligence Research Institute)<br /><br />MIRI is a Berkeley-based research team that was<br />previously-titled SIAI (Singularity Institute for<br />Artificial Intelligence). MIRI has a history of<br />arrogance and aggressiveness, justified in their minds,<br />I suppose, by their opinion that the future of the world<br />depends on their ability to help create Friendly AI.<br />MIRI has the financial support of Peter Thiel, who is<br />worth $2.2 billion on Forbes The Midas List. MIRI isn’t<br />curing disease or helping the poor; its budget pays<br />the salaries of its aloof, we’re-more-rational-than-you<br />researchers. 
I’m dismayed that MIRI has infiltrated EA.<br /><br />Two of the recommended introductory essays on the<br />Effective Altruism organization site are written by MIRI<br />members. Posted second, right under Singer’s preface<br />article, is a math-wonky article by SIAI/MIRI founder<br />Eliezer Yudkowsky. Luke Muehlhauser, MIRI’s recent<br />Executive Director (who left last month to join<br />GiveWell), wrote a let’s-set-the-agenda article further<br />down the list, titled “Four Focus areas of effective altruism.”<br />He places MIRI in the third focus area.<br /><br />MIRI/SIAI tried to “take over” the transhumanist group<br />HumanityPlus 3.5 years ago, when four SIAI members ran<br />for H+’s Board. SIAI ran a sordid, pushy, insulting campaign,<br />bribing voters, accusing opponents of “racism”, deriding<br />Board members as “freaky… bat-shit crazy [with] broken<br />reasoning abilities.” MIRI failed in their attempt to<br />colonize H+, but they’ve successfully wormed their way<br />into the heart of EA.<br /><br />A colleague of mine (who asked me not to disclose their<br />identity) attended the 2014 EA Summit in San Francisco<br />and afterwards was of the impression that: “MIRI and CFAR<br />(Center for Applied Rationality) are essentially the “owners”<br />of EA. EA as a movement has already sold itself in deals<br />to devils.” This is surely an exaggeration in international<br />EA, but in the SF Bay Area.. 
MIRI’s presence within EA<br />is uncomfortably strong.<br />====<br /><br />[*] Transhumanism: there are [at least] ten different<br />philosophical categories; which one(s) are you?<br />By Hank Pellissier<br />Jul 8, 2015<br />http://ieet.org/index.php/IEET/more/pellissier20150708jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-41659357186244517112015-12-22T10:28:38.276-08:002015-12-22T10:28:38.276-08:00> Shaw’s quote sounds as true today as it did in...> Shaw’s quote sounds as true today as it did in the past:<br />><br />> “The reasonable man adapts himself to the world: the unreasonable<br />> one persists in trying to adapt the world to himself. Therefore<br />> all progress depends on the unreasonable man.”<br />><br />> I have never heard anyone argue against this observation.<br /><br />Oh, I have.<br /><br />"Unluckily, it is difficult for a certain type of mind to grasp<br />the concept of insolubility. Thousands...keep pegging away at<br />perpetual motion. The number of persons so afflicted is far<br />greater than the records of the Patent Office show, for beyond the<br />circle of frankly insane enterprise there lie circles of more and<br />more plausible enterprise, until finally we come to a circle which<br />embraces the great majority of human beings.... The fact is that<br />some of the things that men and women have desired most ardently<br />for thousands of years are not nearer realization than they were<br />in the time of Rameses, and that there is not the slightest reason<br />for believing that they will lose their coyness on any near<br />to-morrow. 
Plans for hurrying them on have been tried since the<br />beginning; plans for forcing them overnight are in copious and<br />antagonistic operation to-day; and yet they continue to hold off<br />and elude us, and the chances are that they will keep on holding<br />off and eluding us until the angels get tired of the show, and the<br />whole earth is set off like a gigantic bomb, or drowned, like a<br />sick cat, between two buckets."<br /><br />-- H. L. Mencken, "The Cult of Hope"<br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-10920582497188230622015-12-22T10:26:10.285-08:002015-12-22T10:26:10.285-08:00http://hplusmagazine.com/2015/12/22/the-virtuous-c...http://hplusmagazine.com/2015/12/22/the-virtuous-circle-of-fantasy/<br />--------------<br />The Virtuous Circle of Fantasy<br />December 22, 2015<br />Dan Lemire<br />[. . .has a B.Sc. and a M.Sc. in Mathematics from the<br />University of Toronto, and a Ph.D. in Engineering<br />Mathematics from the Ecole Polytechnique and the<br />Université de Montréal. He is a computer science<br />professor at the Université du Québec. . .]<br /><br />It has long been observed that progress depends on the outliers<br />among us. Shaw’s quote sounds as true today as it did in the past:<br /><br />“The reasonable man adapts himself to the world: the unreasonable<br />one persists in trying to adapt the world to himself. Therefore<br />all progress depends on the unreasonable man.”<br /><br />I have never heard anyone argue against this observation.<br /><br />Think about a world where starvation and misery is around the<br />corner. You are likely to put a lot of pressure on your kids<br />so that they will conform. Now, think about life in a wealthy<br />continent like North America in 2015. I know that my kids are<br />not going to grow up and starve no matter what they do. So I<br />am going to be tolerant about their career choices. 
And that’s<br />a good thing. Had Zuckerberg been my son and had I been poor,<br />I might have been troubled to see him dropping out of Harvard<br />to build a “facebook” site. Dropping out of Harvard to build<br />Facebook was pure fantasy. No parent afraid that his son could<br />starve would have tolerated it.<br /><br />This blog is also fantasy. Instead of doing “serious research”,<br />I write down whatever comes through my mind and post it online.<br />My blog counts for nothing as far as getting me academic currency.<br />I have been warned repeatedly that, should I seek employment,<br />having a blog where I freely shared controversial views could<br />be held against me… To make matters worse, you, my readers,<br />are “wasting time” reading me instead of the Financial Times<br />or an Engineering textbook.<br /><br />The more fantasy we allow, the more progress we enable, and<br />that in turn enables more fantasy.<br /><br />There are people who don’t like fantasy one bit, like the radical<br />Islamists. I don’t think that they fear or hate the West so much<br />as they are afraid of the increasing numbers of people who decide<br />to be unreasonable. Unreasonable people are like dynamite,<br />they can destroy your world view. They are disturbing.<br /><br />There is one straightforward consequence of this analysis:<br /><br />**Fantasy is growing exponentially.**<br />====<br /><br />Calling all Elves. Get your sorry asses back to Middle-earth --<br />it's time to forge shiny new Rings of Power!<br /><br />;->jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-24234844105241126942015-12-22T07:14:36.165-08:002015-12-22T07:14:36.165-08:00> . . .making fun of nerdy nerds or indulging i...> . . .making fun of nerdy nerds or indulging in<br />> disasterbatory hyperbole. . 
.<br /><br />http://www.scottaaronson.com/blog/?p=2307<br />-----------------<br />Shtetl-Optimized<br />The Blog of Scott Aaronson<br />If you take just one piece of information from this blog:<br />Quantum computers would not solve hard search problems<br />instantaneously by simply trying all the possible solutions<br />at once.<br /><br />The End of Suffering?<br />June 1st, 2015<br /><br />A computer science undergrad who reads this blog recently<br />emailed me about an anxiety he’s been feeling connected to<br />the Singularity -- **not** that it will destroy all human life,<br />but rather that it will make life suffering-free and therefore<br />no longer worth living (more _Brave New World_ than<br />_Terminator_, one might say). . .<br /><br />It’s fun to think about these questions from time to time, to<br />use them to hone our moral intuitions -- and I even agree with<br />Scott Alexander that it’s worthwhile to have a small number of<br />smart people think about them full-time for a living. But I<br />should tell you that, as I wrote in my post The Singularity Is Far,<br />I don’t expect a Singularity in my lifetime or my grandchildren’s<br />lifetimes. Yes, technically, if there’s ever going to be a<br />Singularity, then we’re 10 years closer to it now than we were<br />10 years ago, but it could still be one hell of a long way away!<br />And yes, I expect that technology will continue to change in my<br />lifetime in amazing ways—not as much as it changed in my<br />grandparents’ lifetimes, probably, but still by a lot -- but how<br />to put this? I’m willing to bet any amount of money that when<br />I die, people’s shit will still stink.<br />===<br /><br />Hmm. As for that CS undergrad, I'd probably suggest he<br />read a couple of the late Iain M. 
Banks' "Culture" novels.<br /><br />;-><br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-87451142879779454492015-12-21T14:19:44.569-08:002015-12-21T14:19:44.569-08:00“Hype is dangerous to AI. Hype killed AI four time...“Hype is dangerous to AI. Hype killed AI four times in the last five decades."<br /><br />Notice that this utterance is self-contradictory. Things that actually die have to be killed only once. Of course, there has never been an "AI" to kill. Far from being "dangerous" to AI-discourse, hype is all there is to AI-discourse. (Problems in computer science, user-friendliness, network security, of course, need have nothing to do with AI-discourse.)Dale Carricohttps://www.blogger.com/profile/02811055279887722298noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-45448229149039804572015-12-21T13:13:09.556-08:002015-12-21T13:13:09.556-08:00> . . .past its. . . heyday. . .
http://mother...> . . .past its. . . heyday. . .<br /><br />http://motherboard.vice.com/read/we-need-to-talk-about-how-we-talk-about-artificial-intelligence<br />--------------<br />Elon Musk Calling Artificial Intelligence a 'Demon'<br />Could Actually Hurt Research<br />by Jordan Pearson<br />October 29, 2014<br /><br />Elon Musk drags the future into the present. He’s disrupted<br />space with his scrappy rocket startup SpaceX and played<br />a key role in making electric vehicles cool with Tesla Motors.<br />Because of this, when Musk talks about the future, people<br />listen. That’s what makes his latest comments on artificial<br />intelligence so concerning.<br /><br />Musk has a growing track record of using trumped-up rhetoric<br />to illustrate where he thinks artificial intelligence research<br />is heading. Most recently, he described current artificial<br />intelligence research as “summoning the demon,” and called<br />the malicious HAL 9000 of 2001: A Space Odyssey fame a “puppy dog”<br />compared to the AIs of the future. Previously, he’s explained<br />his involvement in AI firm DeepMind as being driven by his<br />desire to keep an eye on a possible Terminator situation developing.<br />This kind of talk does more harm than good, especially when<br />it comes from someone as widely idolised as Musk.<br /><br />Ultimately, Musk’s comments are hype; and hype, even when negative,<br />is toxic when it comes to research. As Gary Marcus noted in a<br />particularly sharp New Yorker essay last year, cycles of intense<br />public interest, rampant speculation, and the subsequent<br />abandonment of research priorities have plagued artificial<br />intelligence research for decades. 
The phenomenon is known as<br />an “AI winter”—recurring periods when funding for AI research<br />has dried up after researchers couldn’t deliver on the promises<br />that the media, and researchers themselves, made.<br /><br />As described in Daniel Crevier’s 1993 book outlining the history<br />of AI research, perhaps the most infamous example of an AI winter<br />occurred during the 1970s, when DARPA de-funded many of its<br />projects aimed at developing intelligent machines after many<br />of its initiatives failed to produce the results they expected.<br /><br />Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+<br />post back in 2013: “Hype is dangerous to AI. Hype killed AI four<br />times in the last five decades. AI Hype must be stopped.” What<br />would happen to the field if we can’t actually build a fully functional<br />self-driving car within five years, as Musk has promised? Forget<br />the Terminator. We have to be measured in how we talk about AI.<br />====<br /><br />Coupled with Moore's Law running out of steam, things might be<br />getting pretty chilly again for AI in the next 10 or 15 years.<br /><br />Bummer! I like new toys as much as the next guy. Where's that<br />shiny new 3D quantum memristor neuromorphic thingy?<br /><br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-6949789229373564522015-12-21T12:25:12.681-08:002015-12-21T12:25:12.681-08:00> I actually think the robocultic sub(cult)ure ...> I actually think the robocultic sub(cult)ure is past its cultural<br />> heyday, but its dwindling number[s]. . . [have] been more than compensated<br />> lately by the inordinate amount of high-profile "tech" billionaires<br />> who now espouse aspects of the worldview in ways that [will]. . .<br />> make money slosh around in ways it might not otherwise do. 
<br /><br />Sloshing around (via https://plus.google.com/+AlexanderKruel/posts ):<br /><br />http://futureoflife.org/2015/12/17/the-ai-wars-the-battle-of-the-human-minds-to-keep-artificial-intelligence-safe/<br />----------------<br />At the start of 2015, few AI researchers were worried<br />about AI safety, but that all changed quickly. Throughout<br />the year, Nick Bostrom’s book, Superintelligence: Paths,<br />Dangers, Strategies, grew increasingly popular. The Future<br />of Life Institute held its AI safety conference in Puerto Rico.<br />Two open letters regarding artificial intelligence and<br />autonomous weapons were released. Countless articles<br />came out, quoting AI concerns from the likes of Elon Musk,<br />Stephen Hawking, Bill Gates, Steve Wozniak, and other<br />luminaries of science and technology. Musk donated $10 million<br />in funding to AI safety research through FLI. Fifteen million<br />dollars was granted to the creation of the Leverhulme Centre<br />for the Future of Intelligence. And most recently, the<br />nonprofit AI research company, OpenAI, was launched to<br />the tune of $1 billion, which will allow some of the top<br />minds in the AI field to address safety-related problems<br />as they come up.<br />====<br /><br /><br />(via http://hplusmagazine.com/2015/12/21/29415/<br />Rise of the Robots: Disruptive Technologies,<br />Artificial Intelligence & Exponential Growth)<br /><br />https://www.youtube.com/watch?v=J9G7ziqvJPM<br />----------------<br />Ivar Moesman on Exponential Growth of Technology: Disruptions,<br />Implications, 3D printing & Bitcoin<br /><br />Ivar Moesman (@ivarivano) discusses exponential growth of technology,<br />how it disrupts existing industries and some learnings and implications.<br />Second part is an introduction to 3D printing and the third part<br />is about Bitcoin. First part is inspired by Ray Kurzweil and<br />Peter Diamandis & Steven Kotler‘s book BOLD. 
For the Bitcoin<br />part with special thanks and admiration to Andreas M. Antonopoulos,<br />the bitcoin core developers, Roger Ver, Trace Mayer, Eric Voorhees,<br />Charlie Shrem, Gavin Andresen, the Bitcoin knowledge podcast,<br />Let’s talk bitcoin, Epicenter Bitcoin.<br /><br />Dr. Ben Goertzel (@bengoertzel) is widely recognized as the father<br />of Artificial General Intelligence. In this talk he discusses:<br />AI, artificial intelligence, artificial general intelligence,<br />deep learning, life extension, longevity, robotics, humanoid,<br />transhumanism.<br /><br />Professor Doctor De Garis discusses species dominance, artilects,<br />cosmists, terrans, cyborgists, artilect war, gigadeath.<br />====<br /><br />Ben Goertzel is the "father of Artificial General Intelligence"?<br />Well, at least he admits he didn't coin the **phrase**:<br /><br />http://wp.goertzel.org/who-coined-the-term-agi/<br />----------------<br />August 28, 2011<br /><br />In the last few years I’ve been asked increasingly often if<br />I invented the term “AGI” – the answer is “not quite!”<br /><br />I am indeed the one responsible for spreading the term around<br />the world. . . But I didn’t actually coin the phrase. . .<br /><br />In 2002 or so, Cassio Pennachin and I were editing a book on<br />approaches to powerful AI, with broad capabilities at the human<br />level and beyond, and we were struggling for a title. Shane Legg. . .<br />came up with Artificial General Intelligence. . .<br /><br />A few years later, someone brought to my attention that. . .<br />Mark Gubrud. . . had used the term in a 1997 article on the<br />future of technology and associated risks. . 
.<br />====<br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-17159343995232383292015-12-20T18:55:36.105-08:002015-12-20T18:55:36.105-08:00> I actually think the robocultic sub(cult)ure ...> I actually think the robocultic sub(cult)ure is past its cultural<br />> heyday. . .<br /><br />Yeah, it kind of hit its peak for me before The Future (for my<br />generation, that was the year 2000) actually arrived.<br /><br />Nevertheless, some folks are still partying like it's 1999.<br /><br />via http://ieet.org/index.php/IEET/more/pellissier20151221 :<br /><br />http://motherboard.vice.com/read/the-turing-church-preaches-the-religion-of-the-future<br />-----------------<br />The Turing Church Preaches the Religion of the Future<br />by Andrew Paul<br />December 16, 2015<br /><br />[T]he Italian theoretical physicist and computer scientist [talks]<br />about his latest, and, to some, most quixotic endeavor: the Turing Church,<br />a transhumanist group that he hopes will curate the crowdsourcing<br />of a techno-rapture. In many ways, Prisco and his supporters want<br />to provide a literal faith in the future.<br /><br />It’s one of the newest in a multitude of quasi-religious movements,<br />all vying for a place in the rapidly changing futurist landscape.<br />Prisco is carving out a digital space for what he hopes will store<br />the building blocks for the construction of humanity’s direction. . .<br />====<br /><br /><br />Don’t Worry, Intelligent Life Will Reverse the Slow Death of the Universe<br />-----------------<br />By Giulio Prisco<br />Turing Church<br />Posted: Aug 13, 2015<br /><br />A scientific paper announcing that the universe is slowly<br />dying is making waves on the Internet. But don’t worry, intelligent<br />life will be able to do something about that. . 
.<br />====<br /><br />But will intelligent life still be watching _Annie Hall_?<br /><br />-----------------<br />Alvy's mother: He's been depressed. All of a sudden, he can't do anything.<br /><br />Doctor: Why are you depressed, Alvy?<br /><br />Alvy's mother: Tell Dr. Flicker. (To the doctor) It's something he read.<br /><br />Doctor: Something he read, huh?<br /><br />Alvy: The universe is expanding...Well, the universe is everything,<br />and if it's expanding, some day it will break apart and that will be<br />the end of everything.<br /><br />Alvy's mother: What is that your business? (To the doctor) He stopped<br />doing his homework.<br /><br />Alvy: What's the point?<br /><br />Alvy's mother: What has the universe got to do with it? You're here in Brooklyn.<br />Brooklyn is not expanding.<br /><br />Doctor: It won't be expanding for billions of years yet, Alvy. And we've<br />got to try to enjoy ourselves while we're here, huh, huh? Ha, ha, ha.<br />====<br /><br />https://www.youtube.com/watch?v=5U1-OmAICpU<br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-24274290096507880422015-12-20T14:51:40.948-08:002015-12-20T14:51:40.948-08:00There is often a discursive/subcultural co-depende...There is often a discursive/subcultural co-dependence between profit/attention-seeking con artists and earnestly paranoid, usefully idiotic conspiracists, surely? 
That isn't confined to robocultism -- reactionary politics and New Age subcultures both offer photogenic examples.Dale Carricohttps://www.blogger.com/profile/02811055279887722298noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-72636444138451398442015-12-20T13:39:16.346-08:002015-12-20T13:39:16.346-08:00There's been some chitchat on Tumblr in the pa...There's been some chitchat on Tumblr in the past couple of<br />days sparked by the recent OpenAI announcement and Siskind's<br />reaction to it.<br /><br />http://su3su2u1.tumblr.com/post/135473918278/unfortunately-i-think-we-live-in-a-different<br />-------------<br />> Unfortunately, I think we live in a different world. . .<br />><br />> -- @slatestarscratchpad<br /><br />Yay, we get to have this discussion again!<br /><br />I call dibs on calling bullshit before anyone else!<br />====<br /><br />http://su3su2u1.tumblr.com/post/135584613793/in-slatestars-open-ai-piece-scott-says-many<br />-------------<br />> Anonymous asked:<br />> <br />> In slatestar's open AI piece Scott says "many thinkers in this<br />> field including Nick Bostrom and Eliezer Yudkowsky worry..."<br />> and more generally refers to his piece on AI risk to suggest<br />> a consensus (with the 'Bostromian' view) in principle on the<br />> dangers of AI (if not actually in line with 'risk research').<br />> I don't wish to dismiss this as "researchers' incentives for<br />> funding lead to people chasing hype [even if from dubious sources]".<br />> Thoughts on a reasonable response to the claim?<br /><br />Well, first, Bostrom and Yudkowsky aren’t really technical<br />researchers. As a slight metaphor, they are more like philosophy<br />of science than actual science. They aren’t really publishing<br />technical CS work. So how is “the field” being defined? Describing<br />them as in the technical AI field is enormously misleading. 
<br /><br />Now, there are a few actual machine learning/AI researchers who<br />do say that maybe this is something worth worrying about, but<br />it’s a minority. Also, the majority of the people who say that<br />it’s worth worrying about generally aren’t putting their money<br />where their mouth is -- their research plans are the same they’ve<br />always been. I think this puts a bound on how seriously they<br />really take the problem.<br />====<br /><br />And cf. Ben Goertzel on "The Singularity Institute's Scary Idea"<br />http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html<br />(via<br />http://amormundi.blogspot.com/2013/03/realtime-robocult-id-and-ick.html<br />http://amormundi.blogspot.com/2013/01/a-robot-god-apostles-creed-for-less.html<br />http://amormundi.blogspot.com/2013/10/wired-discussion-of-techno-immortalist.html )<br /><br />and<br /><br />The Fallacy of Dumb Superintelligence<br />By Richard Loosemore<br />http://ieet.org/index.php/IEET/more/loosemore20121128<br />jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.com
Bill Joy presumably really<br />did get the fantods back in 2000 after Ray Kurzweil chatted him up<br />in a bar about the coming technological Singularity. And apparently<br />people on Less Wrong really did get nightmares and anxiety attacks<br />about Roko's Basilisk.<br /><br />I gather that Scott Siskind (aka Scott Alexander of the "Slate Star<br />Codex" blog and the "Slate Star Scratchpad" Tumblr) -- and this guy<br />is a **psychiatrist** for crying out loud (or at least a<br />psychiatrist-in-training) -- **really really** takes this stuff seriously<br />(and that's why he's so invested in Yudkowsky/MIRI/LessWrong).<br /><br />http://slatestarcodex.com/2015/12/17/should-ai-be-open/<br />-------------<br />December 17, 2015<br />by Scott Alexander<br /><br />. . .<br /><br />The decision to make AI findings open source is a tradeoff<br />between risks and benefits. The risk is letting the most<br />careless person in the world determine the speed of AI<br />research – because everyone will always have the option<br />to exploit the full power of existing AI designs, and<br />the most careless person in the world will always be the<br />first one to take it. The benefit is that in a world<br />where intelligence progresses very slowly and AIs are<br />easily controlled, nobody will be able to use their<br />sole possession of the only existing AI to garner too<br />much power.<br /><br />Unfortunately, I think we live in a different world – one<br />where AIs progress from infrahuman to superhuman intelligence<br />very quickly, very dangerously, and in a way very difficult<br />to control unless you’ve prepared beforehand. . 
.<br />====jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-72982194522147934952015-12-20T07:41:37.178-08:002015-12-20T07:41:37.178-08:00> I have no doubt she could easily write a piec...> I have no doubt she could easily write a piece about<br />> futurology quite as excoriating as the sort I do, . . .<br />> I suspect she had drafts that managed the trick --<br />> but the published result was a puff-piece. . .<br /><br />Presumably you can blame the editorial policies, or decisions,<br />of that "high-profile tech publication".<br /><br />Balance, fairness, avoidance of undue controversy -- that's<br />what editors do, isn't it? Of course, by making it a puff<br />piece, or a "human interest narrative... of a handful of<br />zany robocultic personalities" the underlying message targeted<br />to the "sophisticated" reader can be "this article,<br />and the movement it describes, is pure entertainment --<br />you might as well be reading _People_ or _Us_ here"<br />while the enthusiast can think "Wow, look at all the<br />mainstream coverage we're getting!"<br /><br />Plausible deniability all around.<br /><br />And that subtly snarky, knowing, distancing attitude --<br />isn't that a major element of what they used to call "Timestyle"<br />(after the magazine that invented it)?jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.com