Comments on amor mundi: Foolishness (Dale Carrico)

jimf | 2017-04-12 07:29

> > . . . Dale Carrico . . . Acrid Oracle . . .
>
> Now see, that's the kind of thing a contemporary "AI"
> **can** do. Permute all the letters, and then consult
> a dictionary to see which substrings are real
> words.

As described by Jonathan Swift, almost 300 years ago:

http://andromeda.rutgers.edu/~hbf/compulit.htm
------------
COMPUTERS IN FICTION
by H. Bruce Franklin
[This essay originally appeared in Encyclopedia of Computer Science
(Nature Publishing Group, 2000)]

. . .

To formulate a coherent history of computers in fiction,
the best place to begin may be Jonathan Swift's Gulliver's Travels,
published in 1726. Swift presents an inventor who has constructed
a gigantic machine designed to allow "the most ignorant Person"
to "write Books in Philosophy, Poetry, Politicks, Law, Mathematicks and Theology."
This "Engine" contains myriad "Bits" crammed with all the words of a language,
"all linked together by slender Wires" that can be turned by cranks,
thus generating all possible linguistic combinations. Squads of
scribes produce hard copy by recording any sequence of words that
seems to make sense. . .
====


Whatever the source of the human obsession with artificial life and
artificial mind -- whether created by means of clockwork automata, stitching
together parts of corpses and zapping them to life with lightning, or
reciting magic spells to animate clay or marble effigies (Golems or Galateas) --
it really is rather amazing to consider just how old the dream (or the nightmare) is.
Thousands of years old. All bound up with the endlessly fascinating
(and terrifying) border between life and death, the fear of death
(and especially of things that were once alive but are now dead,
or things that look like they might be alive but are really dead),
and ghosts and vampires and all the other furniture of horror literature
and bad dreams.

All well antedating the digital computer. The latest technology just
seems (if you don't think too hard about it) to put the old
fantasies on a new-fangled, "scientific" footing. And to give
overly susceptible folks a new reason to scare themselves into
insomnia.
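[Ed.: the "permute all the letters, and then consult a dictionary" trick quoted above really is a few lines of script. A minimal sketch; the function name `letter_words` is invented here, and the tiny inline word set is a stand-in for a real dictionary file:]

```python
from collections import Counter

def letter_words(phrase, dictionary):
    """Dictionary words spellable from the phrase's letters (an anagram finder)."""
    pool = Counter(c for c in phrase.lower() if c.isalpha())
    # A word qualifies if its letter multiset fits inside the phrase's pool;
    # Counter subtraction keeps only positive counts, so empty result == it fits.
    return sorted(w for w in dictionary if not Counter(w) - pool)

# Tiny stand-in dictionary; a real run would load e.g. /usr/share/dict/words.
words = {"oracle", "acrid", "dale", "carol", "zebra"}
print(letter_words("Dale Carrico", words))  # -> ['acrid', 'carol', 'dale', 'oracle']
```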
;->

jimf | 2017-04-06 14:40

I notice that one of the commenters in the thread at
http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/
is one "Dagon".

I wonder if this is the same "Dagon" who was an occasional commenter
here back in '09 (and who got a special mention in
https://amormundi.blogspot.com/2009/05/well-isnt-that-special.html ).

Likely enough, I suppose -- the "Dagon" in the OpenAI thread
on LW has been posting there for at least a decade (posts from back
in '07 and the recent comment link to the same LW user overview).

https://amormundi.blogspot.com/2009/05/well-isnt-that-special.html
------------
[Dagon wrote, in an excerpt from a comment on Giulio Prisco's
blog] It is frustrating to know that I whereas feel as secure
in my h+ist convictions as I can possibly be, and it will take
decades to have him eat his shoe. It would be very amusing to
have a singularity in 2012, if only to read the comments Dale
makes about it. . .
====

Cf.

https://amormundi.blogspot.com/2009/04/its-more-than-fun-to-ridicule.html
------------
"Four Years Later"
Date: Fri Apr 19 2002
http://www.sl4.org/archive/0204/3384.html

The date is April 19, 2006 and the world is on the verge of something
wonderful. The big news of the last twelve months is the phenomenal success
of Ben Goertzel's Novamente program. It has become a super tool for solving
complex problems. . . "[M]iracle" cures for one major disease after
another are being produced on almost a daily basis. . .
[T]he success of the Novamente system has made
Ben Goertzel rich and famous making frequent appearances on the talk show
circuit as well as visits to the White House. One surprise is the fact that
the System was unable to offer any useful advi[c]e to the legal team that
narrowly fended off the recent hostile take over attempt by IBM. The
Novamente phenomen[on] has triggered an explosion of public interest and
research in AI. Consequently, the non-profit organization The Singularity
Institute for Artificial Intelligence has been buried under an avalanche of
donations. In their posh new building in Atlanta we find Eliezer working
with the seedai system of his own design. . .
====

Any day now. Start tenderizing those shoes. :-/

jimf | 2017-04-06 07:38

> It’s very easy to justify murder of one individual, or the threat
> of it even if you are not sure you’d carry it through, if it is
> offset by some imagined saving of the world.

I wrote to one of these folks, back in 2003
(via https://amormundi.blogspot.com/2009/05/advice-to-shaken-robot-cultist.html ):

> . . . I think it's important for you to understand its implications
> (though I have little hope that you will).
>
> If the Singularity is the fulcrum determining humanity's
> future, and **you** are the fulcrum of the Singularity,
> the point at which dy/dx -> infinity, the very inflection
> point itself, then **ALL** morality goes out the window.
>
> You might as well be dividing by zero.
>
> You could justify **anything** on that basis. . .
>
> The more hysterical things seem, the more desperate,
> the more apocalyptic, the more the discourse **and**
> moral valences get distorted (a singularity indeed!)
> by the weight of importance bearing down on one human
> pair of shoulders. Which happens to belong to you (what
> a coincidence).
>
> Don't go there. . . Back slowly away from the precipice.
> Before it's too late.

To which my interlocutor replied:

> > You could justify **anything** on that basis
>
> No, *you* could justify anything on that basis. I am much more careful
> with my justifications. . .
>
> Ethics doesn't change as the stakes go to infinity.

So people have gotten death threats. No surprise there, I guess.

At least, as far as I know, nobody has yet **died** as a result
of this nonsense (by their own or somebody else's hand). Which is
more, I guess, than can be said for Scientology (or Mormonism).

jimf | 2017-04-06 07:24

> https://reddragdiva.tumblr.com/post/159236265808/let-none-say-phyg
> ---------------
> let none say phyg [that's the rot13 encoding of "cult"]

Cf.

https://amormundi.blogspot.com/2014/10/robocultic-kack-fight.html
---------------
Back in 2004, one Michael Wilson had materialized as an insider
in SIAI. . . circles. . . At one point, he made a post
[on the S(hock)L(evel)4 mailing list (an Eliezer Yudkowsky-owned forum)]
in which he castigated himself. . .
for having "almost destroyed
the world last Christmas" as a result of his own attempts to "code an AI",
but now that he had seen the light (as a result of SIAI's propaganda) he
would certainly be more cautious in the future. (Of course, no
one on the list seemed to find his remarks particularly
outrageous. . .) . . . I sincerely hope that we can solve these problems
[of AI "Friendliness"], stop Ben Goertzel and his army of evil clones
(I mean emergence-advocating AI researchers :) and engineer the apotheosis. . .

(http://www.sl4.org/archive//0404/8401.html
http://sl4.org/wiki/Starglider )

The smiley in the above did not reassure me.
====

https://amormundi.blogspot.com/2009/04/lets-talk-about-cultishness.html
---------------
In the **absolute worst case** scenario I can imagine,
a genuine lunatic F[riendly]AI-ite will take up the Unabomber's
tactics, sending packages like the one David Gelernter
got in the mail.
====

https://amormundi.blogspot.com/2013/01/a-robot-god-apostles-creed-for-less.html
---------------
[Ben Goertzel wrote on LessWrong]: After I wrote that blog post
["The Singularity Institute's Scary Idea"
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html ],
Michael Anissimov -- a long-time SIAI staffer and zealot whom I
like and respect greatly -- told me he was going to write up and
show me a systematic, rigorous argument as to why “an AGI not built
based on a rigorous theory of Friendliness is almost certain to
kill all humans” (the proposition I called “SIAI’s Scary Idea”).
But he hasn’t followed through on that yet -- and neither has
Eliezer or anyone associated with SIAI. . .
====

jimf | 2017-04-06 07:22

https://reddragdiva.tumblr.com/post/159236265808/let-none-say-phyg
---------------
let none say phyg [that's the rot13 encoding of "cult"]

DustinWehr
03 April 2017
[ http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqjq ]

> A guy I know, who works in one of the top M[achine]L[earning] groups,
> is literally less worried about superintelligence than he is about
> getting murdered by rationalists. That’s an extreme POV. Most researchers
> in ML simply think that people who worry about superintelligence are
> uneducated cranks addled by sci fi.
>
> I hope everyone is aware of that perception problem.

username2
05 April 2017
[ http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqmr ]

> Are you describing me? It fits to a T except my dayjob isn’t ML.
> I post using this shared anonymous account here because in the past
> when I used my real name I received death threats online from
> L[ess]W[rong] users. In a meetup I had someone tell me to my face
> that if my AGI project crossed a certain level of capability,
> they would personally hunt me down and kill me. They were quite serious.
>
> I was once open-minded enough to consider AI x-risk seriously.
> I was unconvinced, but ready to be convinced.
> But you know what?
> Any ideology that leads to making death threats against peaceful,
> non-violent open source programmers is not something I want to let
> past my mental hygiene filters.
>
> If you, the person reading this, seriously care about AI x-risk,
> then please do think deeply about what causes this, and ask yo[u]rself
> what can be done to put a stop to this behavior. Even if you haven’t
> done so yourself, it is something about the rationalist community which
> causes this behavior to be expressed.
>
> . . .
>
> I would be remiss without lay[ing] out my own hypothesis. I believe
> much of this comes directly from ruthless utilitarianism and the
> “shut up and multiply” mentality. It’s very easy to justify murder
> of one individual, or the threat of it even if you are not sure you’d
> carry it through, if it is offset by some imagined saving of the world.
> The problem here is that nobody is omniscient, and AI x-riskers are
> willing to be swayed by utility calculations that in reality have
> so much uncertainty that they should never be taken seriously. . .
====

jimf | 2017-04-05 14:41

To paraphrase a Great Man: "Nobody knew the world
could be so complicated."

To Curb Global Warming, Science Fiction May Become Fact
Eduardo Porter
ECONOMIC SCENE
APRIL 4, 2017
--------------
Remember “Snowpiercer”? . . .

[A]n attempt to engineer the climate and stop global warming
goes horribly wrong. The planet freezes.
Only the passengers
on a train endlessly circumnavigating the globe survive.
Those in first class eat sushi and quaff wine [like Tilda Swinton
http://cdn.moviestillsdb.com/sm/660b9e1c73b116ac128044479780be50/snowpiercer.jpg ].
People in steerage eat cockroach protein bars.

Scientists must start looking into this. Seriously. . .

Let’s get real. The odds that these processes could be slowed,
let alone stopped, by deploying more solar panels and wind turbines
seemed unrealistic even before President Trump’s election.
It is even less likely now that Mr. Trump has gone to work
undermining President Barack Obama’s strategy to reduce
greenhouse gas emissions.

That is where engineering the climate comes in. . .

[T]he research agenda must include an open, international debate
about the governance structures necessary to deploy a technology that,
at a stroke, would affect every society and natural system in the
world. In other words, geoengineering needs to be addressed not
as science fiction, but as a potential part of the future just a
few decades down the road.

“Today it is still a taboo, but it is a taboo that is crumbling,” . . .

Arguments against geoengineering are in some ways akin to those
made against genetically modified organisms and so-called Frankenfood. . .

[H]ow could the world agree on the deployment of a technology
that will have different impacts on different countries? How could
the world balance the global benefit of a cooling atmosphere
against a huge disruption of the monsoon on the Indian subcontinent?
Who would make the call? Would the United States agree to this
kind of thing if it brought drought to the Midwest?
Would Russia
let it happen if it froze over its northern ports?

Geoengineering would be cheap enough that even a middle-income
country could deploy it unilaterally. . .

“The biggest challenge posed by geoengineering is unlikely to be
technical, but rather involve the way we govern the use of this
unprecedented technology.” . . .

People should keep in mind the warning by Alan Robock, a
Rutgers University climatologist, who argued that the worst case
from the deployment of geoengineering technologies might
be nuclear war. . .
====


Geeee oh, oh geee oh.

https://www.youtube.com/watch?v=SjHNwi0YotA

Old worms of yesterday. . . unbellyfeel. . . THE WORMHOLE!!!
http://2.bp.blogspot.com/-xyhAQJtQ8GU/VZa5TuUqIZI/AAAAAAABwYY/g2_Yut5kzZA/s1600/singularity-institute.jpg

All I want is to be in his movie. . .
https://www.youtube.com/watch?v=xqztBM1_Vp0

;->

jimf | 2017-04-05 12:33
> on the old Extropians' mailing list. . .
>
> https://singularityhub.com/2017/04/05/old-mice-made-young-again-with-new-anti-aging-drug/
> ---------
> Old Mice Made Young Again With New Anti-Aging Drug

Geez, remember Doug Skrecky and his fruit flies?

Apparently somebody does:

https://www.youtube.com/watch?v=oVu0UaJE-s0
---------
Stem Cell life extension formulas. Doug Skrecky
fruit fly, longevity, anti aging, life extension
Scott Rauvers
Apr 17, 2016
====

jimf | 2017-04-05 12:17

> I'm reminded of some discussions I weighed in on 16 ( :-0 )
> years ago on the old Extropians' mailing list. (It's
> 2017 -- do you know where your Singularity is?!) . . .
>
> I wonder how old Ms. Fan was in 2001.

Oldthinkers unbellyfeel. . .

https://singularityhub.com/2017/04/05/old-mice-made-young-again-with-new-anti-aging-drug/
---------
Old Mice Made Young Again With New Anti-Aging Drug
by Shelly Fan
Apr 05, 2017

. . .

[A] collaborative effort between the Erasmus University in the
Netherlands and the Buck Institute for Research on Aging in California
may have a solution. Published in the prestigious journal Cell,
the team developed a chemical torpedo that, after injecting into mice,
zooms to senescent cells and puts them out of their misery, while
leaving healthy cells alone. . .
====


I guess this isn't the same thing as got the Young Turks excited
a few days ago:

https://www.youtube.com/watch?v=v7aib21s2N8
---------
Harvard Scientists REVERSE Aging In Mice. People Next...
The Young Turks
Mar 26, 2017

Dr.
David Sinclair, from Harvard Medical School, and his colleagues
reveal their new findings in the latest issue of Science. They focused
on an intriguing compound with anti-aging properties called
NAD+, short for nicotinamide adenine dinucleotide. . .
====

No mention by the Turks of the hoopla a decade ago about resveratrol
and SIRT1 activators.
https://en.wikipedia.org/wiki/Sirtris_Pharmaceuticals

Me, I'm betting on the Peter Thiel (and Eldritch Palmer)
page-out-of-Count Dracula approach ;-> .
( https://amormundi.blogspot.com/2016/08/william-burroughs-on-peter-thiel.html )

Hey, does Ray Kurzweil get blood changes these days, or is
he still just gobbling supplements (including NAD+ ?) and getting his biomarkers
measured by Dr. Terry Grossman? Inquiring minds. . . Well, come to
think, I'm not sure I **do** want to know. :-0

jimf | 2017-04-05 11:57
From your Twitter feed:

https://twitter.com/tnajournal/status/849419490869882880
------------
DNA isn't mere code -- it's dynamic. Scientists describe it with words
like "orchestration," "choreography," "dance"

http://www.thenewatlantis.com/publications/evolution-and-the-purposes-of-life
====

Computer programmers unbellyfeel the molecular dance that is life.

And **nervous systems** -- all of 'em, not just the
Human Brain (insert b'rakah, genuflect) -- pile levels of
**inter**cellular dynamism on top of the **intra**cellular
DNA'n'metabolism disco.

I'm reminded of some discussions I weighed in on 16 ( :-0 )
years ago on the old Extropians' mailing list. (It's
2017 -- do you know where your Singularity is?!)


http://extropians.weidai.com/extropians.2Q01/3898.html
------------
Re: Keeping AI at bay (was: How to help create a singularity)
May 06 2001

Eugene.Leitl@lrz.uni-muenchen.de wrote:

> [C]urrent early precursors of reconfigurable hardware (FPGAs)
> seem to generate extremely compact, nonobvious solutions even
> using current primitive evolutionary algorithms.

But at some point the evolution stops
(when the FPGA is deemed to have solved the problem), the chip is plugged
into the system and switched on, and becomes just another piece of
static hardware. Same with neural networks -- there's a training set
corresponding to the problem domain, the network is trained on it,
and then it's plugged into the OCR program (or whatever), shrink-wrapped,
and sold.

Still too static, folks, to be a basis for AI. When are we going to have
hardware with the sort of continual plasticity and dynamism that nerve tissue has?
(I know it's going to be hard.
And, in the meantime, evolved FPGAs
might have their uses, if people can trust them to be reliable). . .

---

[ http://extropians.weidai.com/extropians.2Q01/3906.html ]

James Rogers wrote:

> Give me just one example of something you can do in high-plasticity
> evolvable hardware that can't be done in software.

Give **me** an example of just one out of the trillions of instances
of high-plasticity evolvable hardware running around on this
planet that's been successfully replicated in software!
====


http://extropians.weidai.com/extropians.2Q01/2311.html
------------
Re: Contextualizing seed-AI proposals
Apr 14 2001

> Intelligence ("problem-solving", "stream of consciousness")
> is built from thoughts. Thoughts are built from structures
> of concepts ("categories", "symbols"). Concepts are built from
> sensory modalities. Sensory modalities are built from the
> actual code.

Too static, I fear. Also, too dangerously perched on
the edge of what you have already dismissed as the "suggestively-
named Lisp token" fallacy.

Fee, fie, foe, fum.
Cogito, ergo sum. . .
====


> [W]hen the FPGA is deemed to have solved the problem, the chip is plugged
> into the system and switched on, and becomes just another piece of
> static hardware. . .

Yeah, this is like what happens to Deep Learning (TM) neural networks,
after they're trained:

https://singularityhub.com/2017/03/29/google-chases-general-intelligence-with-new-ai-that-has-a-memory/
------------
Google Chases General Intelligence With New AI That Has a Memory
Shelly Fan
Mar 29, 2017

[A]rtificial neural networks like Google’s DeepMind learn to master
a singular task and call it quits.
To learn a new task, it has to reset,
wiping out previous memories and starting again from scratch.

This phenomenon, quite aptly dubbed “catastrophic forgetting,”
condemns our AIs to be one-trick ponies. . .

---

Shelly Xuelai Fan is a neuroscientist at the University of California,
San Francisco, where she studies ways to make old brains young again.
In addition to research, she's also an avid science writer with an
insatiable obsession with biotech, AI and all things neuro. . .
====


I wonder how old Ms. Fan was in 2001.

jimf | 2017-04-04 21:36

> http://unremediatedgender.space/2017/Jan/from-what-ive-tasted-of-desire/
> ----------------
> Why "gender identity" and trans activism could literally destroy the world
>
> . . .
>
> [H]umans are a mess of conflicting desires inherited from our evolutionary
> and sociocultural history; we don't have a utility function written down
> anywhere that we can just put in the AI.


http://www.chakoteya.net/StarTrek/37.htm
------------
The Changeling
Original Airdate: 29 Sep, 1967

KIRK: . . . Lieutenant. Lieutenant, are you all right?

(Uhura just gazes blankly ahead.)

KIRK: Sickbay. What did you do to her?

NOMAD: That unit is defective. Its thinking is chaotic. Absorbing it unsettled me.

SPOCK: That unit is a woman.

NOMAD: A mass of conflicting impulses.
====


;->

jimf | 2017-04-03 11:39
> Imagine the future Bro-bot God.

Or, alternatively, we could get an AI Overlord acculturated
as that bane of all libertechbrotarians, the Social Justice Warrior.

In fact, Google is working on that one as we speak:

https://www.nytimes.com/2017/04/03/technology/google-training-ad-placement-computers-to-be-offended.html
---------------
Google Training Ad Placement Computers to Be Offended
By DAISUKE WAKABAYASHI
APRIL 3, 2017

MOUNTAIN VIEW, Calif. — Over the years, Google trained computer systems
to keep copyrighted content and pornography off its YouTube service.
But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart
appear next to racist, anti-Semitic or terrorist videos, its engineers
realized their computer models had a blind spot: They did not understand
context.

Now teaching computers to understand what humans can readily grasp
may be the key to calming fears among big-spending advertisers that
their ads have been appearing alongside videos from extremist groups
and other offensive messages.

Google engineers, product managers and policy wonks are trying to
train computers to grasp the nuances of what makes certain videos
objectionable. . .
====


_South Park_ gave us Mecha-Streisand. Here's a nightmare meme for the
libertechbrotarians exponentially worse than Roko's Basilisk:

MECHA-P.Z. MYERS!!!!

https://inignorance.files.wordpress.com/2013/01/pz.jpg

AIeeeeee!

;->

jimf | 2017-04-03 11:30
Loc. cit.
-------------
tariqk:

> I swear to god, if I hear another pasty wight boi wring their
> hands together about The Coming SuperIntelligence™…
>
> As if we already don’t have perfectly stupid sub-intelligent algorithms
> ruining lives, causing destruction. But those algorithms are owned
> by wight people, so that’s apparently okay.
>
> It’s like wight people — or, really, wight bois — are secretly terrified
> that their malevolent rule will be supplanted by beings that are just
> as cruel as them. . .
====


> https://www.youtube.com/watch?v=gLKmKqrNUKY
> ---------------
> Joe Rogan and Lawrence Krauss on artificial intelligence
>
> Krauss: AI researchers [say] -- and I find
> this statement almost vacuous, but I'm amazed that they use it all
> the time -- . . . program machines with "human values". . .
> [A] very smart guy. . . said to me, "well, they just have to watch us." And I
> said, "What do you mean -- they watch Donald Trump and they know what
> human values are?" I mean -- come on!

Or our AI pupils could watch these guys:

https://www.nytimes.com/2017/04/01/opinion/sunday/jerks-and-the-start-ups-they-ruin.html
-------------
Jerks and the Start-Ups They Ruin
By DAN LYONS
APRIL 1, 2017

. . .

[T]he real problem with tech bros is not just that they’re
boorish jerks. It’s that they’re boorish jerks who don’t know
how to run companies.

Look at Uber, the ride-hailing start-up. . . The company’s woes
spring entirely from its toxic bro culture, created by its
chief executive, Travis Kalanick.

What is bro culture? Basically, a world that favors young men
at the expense of everyone else.
A “bro co.” has a “bro” C.E.O.,
or C.E.-Bro, usually a young man who has little work experience
but is good-looking, cocky and slightly amoral — a hustler. . .

Bro cos. become corporate frat houses, where employees are chosen
like pledges, based on “culture fit.” Women get hired, but they
rarely get promoted and sometimes complain of being harassed.
Minorities and older workers are excluded.

Bro culture also values speedy growth over sustainable profits,
and encourages cutting corners, ignoring regulations and doing
whatever it takes to win.

Sometimes it works. But often the whole thing just flames out. . .
====


Imagine the future Bro-bot God. Gets the whole human race drunk,
and then sends drone cameras scurrying about taking pictures up women's
skirts.

jimf | 2017-04-03 11:29
Boku de Roko

https://reddragdiva.tumblr.com/ (David Gerard)
-------------
the other roko’s basilisk

> there’s a novella called roko’s basilisk which someone wrote
> and put up on kindle. . .

just finished it. . . it’s a quick psychological horror short.
basically it takes the concepts behind roko’s basilisk and puts
them into story form. “roko” plays both yudkowsky and roko and
explains the killing meme to his not-as-brilliant friend.
in this world “friendly ai” is a term used in real ai research
(rather than something that gets real ai researchers punching walls
harder than chemists do at “nanobots”). “roko” has solved
Coherent Extrapolated Volition or something close enough for
a scifi handwave. . .
====


Ehh. . . I'm reassimilating _Neuromancer_ in audiobook form.
And I think I'll listen to the BBC radio play after that.

I used to be able to buy single wrapped pieces of Ting Ting Jahe
candied ginger at a deli down the street from where I worked.
Nowadays I can order a bag of it on Amazon if I want.
Trying to keep the sugar consumption under control, though. ;->

jimf | 2017-04-02 20:45

> . . . artificially imbecilent . . .
https://...> . . . artificially imbecilent . . .<br /><br />https://reddragdiva.tumblr.com/tagged/the-crackpot-offer-indeed<br />------------<br />btw, the quality MIRI sneer culture fodder is now at<br />https://www.reddit.com/r/ControlProblem/<br /><br />in which we see rationalists™ expound upon the AI safety implications<br />of how those vile transgenders will PAPERCLIP US ALL!!!!<br />(and oh god the discussion)<br /><br />and the rationalists were doing so well with transgender issues up<br />to now. turns out they’re fake goths<br />====<br /><br /><br />"the rationalists were doing so well with transgender issues up<br />to now"? I guess that means Michael Anissimov never counted<br />as a rationalist™.<br /><br />There was a Twitter war a few years ago, tagged "#Trannygate",<br />between our old pal Michael and NRx fellow-traveller<br />Bryce Laliberte over the latter's daring to consort with<br />transgender Google programmer Justine Tunney<br />(http://www.thedailybeast.com/articles/2014/08/01/occupying-the-throne-justine-tunney-neoreactionaries-and-the-new-1-percent.html<br />and cf. stuff I quoted in comment thread of<br />https://amormundi.blogspot.com/2014/09/robot-cultist-martine-rothblatt-is-in.html ).<br /><br /><br />But what could the T in LGBT possibly have to do with artificial intelligence?<br /><br />Oh.<br /><br />(via<br />https://www.reddit.com/r/ControlProblem/?count=50&after=t3_60h0e4 )<br />http://unremediatedgender.space/2017/Jan/from-what-ive-tasted-of-desire/<br />----------------<br />Why "gender identity" and trans activism could literally destroy the world<br /><br />. . .<br /><br />[H]umans are a mess of conflicting desires inherited from our evolutionary<br />and sociocultural history; we don't have a utility function written down<br />anywhere that we can just put in the AI. 
So if the systems that ultimately<br />run the world end up with a utility function that's not in the incredibly<br />specific class of those we would have wanted if we knew how to translate<br />everything humans want or would-want into a utility function, then the<br />machines disassemble us for spare atoms and tile the universe with<br />something else. . .<br /><br />the bad epistemic hygiene habits of the trans community that are<br />required to maintain the socially-acceptable alibi that transitioning is<br />about expressing some innate "gender identity", are necessarily spread<br />to the computer science community, as an intransigent minority of trans<br />activist-types successfully enforce social norms mandating that everyone<br />must pretend not to notice that trans women are eccentric men. With<br />social reality placing such tight constraints on perception of actual<br />reality, our chances of developing the advanced epistemology needed to<br />rise to the occasion of solving the alignment problem seem slim at best. . .<br />====<br /><br /><br />Uh **huh**.<br /><br />-- jimf (https://www.blogger.com/profile/04975754342950063440)<br /><br />[2017-04-02T17:57:29.170-07:00]<br /><br />> Talking about AI all these years has rendered me<br />> artificially imbecilent at last...<br /><br />Don't fret. Our wits will be refurbished as soon<br />as we get our own AIs to talk **to**!<br /><br />> > http://jeffwise.net/2017/03/15/when-machines-go-rogue/<br />> ><br />> > The Outline: When Machines Go Rogue<br />> ><br />> > . . . the jet hit the frozen ground with the velocity<br />> > of a .45 caliber bullet. . . 
<br />><br />> Of course, this real-life autopilot malfunction, as<br />> tragic as its consequences were, still lacks the main<br />> maguffin of an "AI thriller"<br /><br />https://mathbabe.org/2016/07/11/when-is-ai-appropriate/<br />--------------<br />When is AI appropriate?<br />July 11, 2016<br />Cathy O'Neil<br /><br />I was invited last week to an event co-sponsored by the<br />White House, Microsoft, and NYU called AI Now: The social<br />and economic implications of artificial intelligence technologies<br />in the near term.<br /><br />Before I talk about some of the ideas that came up, I want to<br />mention that the definition of “AI” was never discussed. After<br />a while I took it to mean anything that was technological that<br />had an embedded flow chart inside it. So, anything vaguely<br />computerized that made decisions. Even a microwave that automatically<br />detected whether your food was sufficiently hot – and kept<br />heating if it wasn’t – would qualify as AI under these rules. . .<br />====<br /><br /><br /><br />A killer microwave. No, I don't think that would cut the<br />mustard as an AI thriller maguffin either. 
It might be suitable<br />for a supernatural thriller -- like that demon-possessed<br />floor lamp in Amityville 4 - The Evil Escapes (with Patty Duke<br />and Jane Wyatt, no less) ;-><br /><br />https://www.youtube.com/watch?v=HjIcSgXZ6wI<br /><br />(Hey, was that a microwave that got Jane Wyatt's parrot?<br />No, I guess it was a toaster oven.)<br /><br />-- jimf (https://www.blogger.com/profile/04975754342950063440)<br /><br />[2017-04-02T15:39:03.919-07:00]<br /><br />Talking about AI all these years has rendered me artificially imbecilent at last...<br /><br />-- Dale Carrico (https://www.blogger.com/profile/02811055279887722298)<br /><br />[2017-04-01T16:07:46.847-07:00]<br />
> . . . Acrid Oracle . . .<br /><br />Now see, that's the kind of thing a contemporary "AI"<br />**can** do. Permute all the letters, and then consult<br />a dictionary to see which substrings are real<br />words.<br /><br />;-><br /><br />-- jimf (https://www.blogger.com/profile/04975754342950063440)
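(For the curious: the letter-shuffling trick jimf describes can be sanity-checked in a few lines of Python. This is just an illustrative sketch of my own, not anything from the comment thread; rather than literally enumerating all orderings of the eleven letters, it compares letter multisets, which is the usual shortcut in anagram checkers.)

```python
from collections import Counter

def letter_counts(s: str) -> Counter:
    """Multiset of letters, ignoring case, spaces, and punctuation."""
    return Counter(c for c in s.lower() if c.isalpha())

def is_anagram(a: str, b: str) -> bool:
    """True when the two phrases use exactly the same letters."""
    return letter_counts(a) == letter_counts(b)

# "Acrid Oracle" really is a rearrangement of "Dale Carrico".
print(is_anagram("Dale Carrico", "Acrid Oracle"))   # True
print(is_anagram("Dale Carrico", "Acrid Oracles"))  # False
```

A naive permutation-then-dictionary search over eleven letters would mean checking tens of millions of orderings, so practical anagram finders prune by exactly this kind of letter-count signature.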