Saturday, April 01, 2017

Foolishness

Acrid Oracle is an anagram of Dale Carrico.

17 comments:

  1. > . . . Acrid Oracle . . .

    Now see, that's the kind of thing a contemporary "AI"
    **can** do. Permute all the letters, and then consult
    a dictionary to see which substrings are real
    words.

    ;->
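
    Something like this, say -- a toy sketch in Python (the two-entry
    lexicon is hypothetical, standing in for a real dictionary file):

        from itertools import permutations

        # Hypothetical mini-lexicon; a real run would load a full word list.
        LEXICON = {"acrid", "oracle"}

        def two_word_anagrams(name, lexicon=LEXICON):
            # Brute force, as described above: permute the letters, then
            # keep every two-way split whose halves are both dictionary
            # words. Factorial blowup makes this crawl past ~10 letters.
            letters = [c for c in name.lower() if c.isalpha()]
            hits = set()
            for perm in permutations(letters):
                s = "".join(perm)
                for i in range(1, len(s)):
                    if s[:i] in lexicon and s[i:] in lexicon:
                        hits.add((s[:i], s[i:]))
            return hits

        print(two_word_anagrams("Dale Carrico"))
        # {('acrid', 'oracle'), ('oracle', 'acrid')}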

  2. Talking about AI all these years has rendered me artificially imbecilent at last...

  3. > Talking about AI all these years has rendered me
    > artificially imbecilent at last...

    Don't fret. Our wits will be refurbished as soon
    as we get our own AIs to talk **to**!

    > > http://jeffwise.net/2017/03/15/when-machines-go-rogue/
    > >
    > > The Outline: When Machines Go Rogue
    > >
    > > . . . the jet hit the frozen ground with the velocity
    > > of a .45 caliber bullet. . .
    >
    > Of course, this real-life autopilot malfunction, as
    > tragic as its consequences were, still lacks the main
    > maguffin of an "AI thriller"

    https://mathbabe.org/2016/07/11/when-is-ai-appropriate/
    --------------
    When is AI appropriate?
    July 11, 2016
    Cathy O'Neil

    I was invited last week to an event co-sponsored by the
    White House, Microsoft, and NYU called AI Now: The social
    and economic implications of artificial intelligence technologies
    in the near term.

    Before I talk about some of the ideas that came up, I want to
    mention that the definition of “AI” was never discussed. After
    a while I took it to mean anything that was technological that
    had an embedded flow chart inside it. So, anything vaguely
    computerized that made decisions. Even a microwave that automatically
    detected whether your food was sufficiently hot – and kept
    heating if it wasn’t – would qualify as AI under these rules. . .
    ====



    A killer microwave. No, I don't think that would cut the
    mustard as an AI thriller maguffin either. It might be suitable
    for a supernatural thriller -- like that demon-possessed
    floor lamp in Amityville 4 - The Evil Escapes (with Patty Duke
    and Jane Wyatt, no less) ;->

    https://www.youtube.com/watch?v=HjIcSgXZ6wI

    (Hey, was that a microwave that got Jane Wyatt's parrot?
    No, I guess it was a toaster oven.)

  4. > . . . artificially imbecilent . . .

    https://reddragdiva.tumblr.com/tagged/the-crackpot-offer-indeed
    ------------
    btw, the quality MIRI sneer culture fodder is now at
    https://www.reddit.com/r/ControlProblem/

    in which we see rationalists™ expound upon the AI safety implications
    of how those vile transgenders will PAPERCLIP US ALL!!!!
    (and oh god the discussion)

    and the rationalists were doing so well with transgender issues up
    to now. turns out they’re fake goths
    ====


    "the rationalists were doing so well with transgender issues up
    to now"? I guess that means Michael Anissimov never counted
    as a rationalist™.

    There was a Twitter war a few years ago, tagged "#Trannygate",
    between our old pal Michael and NRx fellow-traveller
    Bryce Laliberte over the latter's daring to consort with
    transgender Google programmer Justine Tunney
    (http://www.thedailybeast.com/articles/2014/08/01/occupying-the-throne-justine-tunney-neoreactionaries-and-the-new-1-percent.html
    and cf. stuff I quoted in comment thread of
    https://amormundi.blogspot.com/2014/09/robot-cultist-martine-rothblatt-is-in.html ).


    But what could the T in LGBT possibly have to do with artificial intelligence?

    Oh.

    (via
    https://www.reddit.com/r/ControlProblem/?count=50&after=t3_60h0e4 )
    http://unremediatedgender.space/2017/Jan/from-what-ive-tasted-of-desire/
    ----------------
    Why "gender identity" and trans activism could literally destroy the world

    . . .

    [H]umans are a mess of conflicting desires inherited from our evolutionary
    and sociocultural history; we don't have a utility function written down
    anywhere that we can just put in the AI. So if the systems that ultimately
    run the world end up with a utility function that's not in the incredibly
    specific class of those we would have wanted if we knew how to translate
    everything humans want or would-want into a utility function, then the
    machines disassemble us for spare atoms and tile the universe with
    something else. . .

    the bad epistemic hygiene habits of the trans community that are
    required to maintain the socially-acceptable alibi that transitioning is
    about expressing some innate "gender identity", are necessarily spread
    to the computer science community, as an intransigent minority of trans
    activist-types successfully enforce social norms mandating that everyone
    must pretend not to notice that trans women are eccentric men. With
    social reality placing such tight constraints on perception of actual
    reality, our chances of developing the advanced epistemology needed to
    rise to the occasion of solving the alignment problem seem slim at best. . .
    ====


    Uh **huh**.

  5. Boku de Roko

    https://reddragdiva.tumblr.com/ (David Gerard)
    -------------
    the other roko’s basilisk

    > there’s a novella called roko’s basilisk which someone wrote
    > and put up on kindle. . .

    just finished it. . . it’s a quick psychological horror short.
    basically it takes the concepts behind roko’s basilisk and puts
    them into story form. “roko” plays both yudkowsky and roko and
    explains the killing meme to his not-as-brilliant friend.
    in this world “friendly ai” is a term used in real ai research
    (rather than something that gets real ai researchers punching walls
    harder than chemists do at “nanobots”). “roko” has solved
    Coherent Extrapolated Volition or something close enough for
    a scifi handwave. . .
    ====


    Ehh. . . I'm reassimilating _Neuromancer_ in audiobook form.
    And I think I'll listen to the BBC radio play after that.

    I used to be able to buy single wrapped pieces of Ting Ting Jahe
    candied ginger at a deli down the street from where I worked.
    Nowadays I can order a bag of it on Amazon if I want.
    Trying to keep the sugar consumption under control, though. ;->

  6. Loc. cit.
    -------------
    tariqk:

    > I swear to god, if I hear another pasty wight boi wring their
    > hands together about The Coming SuperIntelligence™…
    >
    > As if we already don’t have perfectly stupid sub-intelligent algorithms
    > ruining lives, causing destruction. But those algorithms are owned
    > by wight people, so that’s apparently okay.
    >
    > It’s like wight people — or, really, wight bois — are secretly terrified
    > that their malevolent rule will be supplanted by beings that are just
    > as cruel as them. . .
    ====


    > https://www.youtube.com/watch?v=gLKmKqrNUKY
    > ---------------
    > Joe Rogan and Lawrence Krauss on artificial intelligence
    >
    > Krauss: AI researchers [say] -- and I find
    > this statement almost vacuous, but I'm amazed that they use it all
    > the time -- . . . program machines with "human values". . .
    > [A] very smart guy. . . said to me, "well, they just have to watch us." And I
    > said, "What do you mean -- they watch Donald Trump and they know what
    > human values are?" I mean -- come on!

    Or our AI pupils could watch these guys:

    https://www.nytimes.com/2017/04/01/opinion/sunday/jerks-and-the-start-ups-they-ruin.html
    -------------
    Jerks and the Start-Ups They Ruin
    By DAN LYONS
    APRIL 1, 2017

    . . .

    [T]he real problem with tech bros is not just that they’re
    boorish jerks. It’s that they’re boorish jerks who don’t know
    how to run companies.

    Look at Uber, the ride-hailing start-up. . . The company’s woes
    spring entirely from its toxic bro culture, created by its
    chief executive, Travis Kalanick.

    What is bro culture? Basically, a world that favors young men
    at the expense of everyone else. A “bro co.” has a “bro” C.E.O.,
    or C.E.-Bro, usually a young man who has little work experience
    but is good-looking, cocky and slightly amoral — a hustler. . .

    Bro cos. become corporate frat houses, where employees are chosen
    like pledges, based on “culture fit.” Women get hired, but they
    rarely get promoted and sometimes complain of being harassed.
    Minorities and older workers are excluded.

    Bro culture also values speedy growth over sustainable profits,
    and encourages cutting corners, ignoring regulations and doing
    whatever it takes to win.

    Sometimes it works. But often the whole thing just flames out. . .
    ====


    Imagine the future Bro-bot God. Gets the whole human race drunk,
    and then sends drone cameras scurrying about taking pictures up women's
    skirts.

  7. > Imagine the future Bro-bot God.

    Or, alternatively, we could get an AI Overlord acculturated
    as that bane of all libertechbrotarians, the Social Justice Warrior.

    In fact, Google is working on that one as we speak:

    https://www.nytimes.com/2017/04/03/technology/google-training-ad-placement-computers-to-be-offended.html
    ---------------
    Google Training Ad Placement Computers to Be Offended
    By DAISUKE WAKABAYASHI
    APRIL 3, 2017

    MOUNTAIN VIEW, Calif. — Over the years, Google trained computer systems
    to keep copyrighted content and pornography off its YouTube service.
    But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart
    appear next to racist, anti-Semitic or terrorist videos, its engineers
    realized their computer models had a blind spot: They did not understand
    context.

    Now teaching computers to understand what humans can readily grasp
    may be the key to calming fears among big-spending advertisers that
    their ads have been appearing alongside videos from extremist groups
    and other offensive messages.

    Google engineers, product managers and policy wonks are trying to
    train computers to grasp the nuances of what makes certain videos
    objectionable. . .
    ====


    _South Park_ gave us Mecha-Streisand. Here's a nightmare meme for the
    libertechbrotarians exponentially worse than Roko's Basilisk:

    MECHA-P.Z. MYERS!!!!

    https://inignorance.files.wordpress.com/2013/01/pz.jpg

    AIeeeeee!

    ;->

  8. > http://unremediatedgender.space/2017/Jan/from-what-ive-tasted-of-desire/
    > ----------------
    > Why "gender identity" and trans activism could literally destroy the world
    >
    > . . .
    >
    > [H]umans are a mess of conflicting desires inherited from our evolutionary
    > and sociocultural history; we don't have a utility function written down
    > anywhere that we can just put in the AI.


    http://www.chakoteya.net/StarTrek/37.htm
    ------------
    The Changeling
    Original Airdate: 29 Sep, 1967

    KIRK: . . . Lieutenant. Lieutenant, are you all right?

    (Uhura just gazes blankly ahead.)

    KIRK: Sickbay. What did you do to her?

    NOMAD: That unit is defective. Its thinking is chaotic. Absorbing it unsettled me.

    SPOCK: That unit is a woman.

    NOMAD: A mass of conflicting impulses.
    ====


    ;->

  9. From your Twitter feed:

    https://twitter.com/tnajournal/status/849419490869882880
    ------------
    DNA isn't mere code -- it's dynamic. Scientists describe it with words
    like "orchestration," "choreography," "dance"

    http://www.thenewatlantis.com/publications/evolution-and-the-purposes-of-life
    ====

    Computer programmers unbellyfeel the molecular dance that is life.

    And **nervous systems** -- all of 'em, not just the
    Human Brain (insert b'rakah, genuflect) -- pile levels of
    **inter**cellular dynamism on top of the **intra**cellular
    DNA'n'metabolism disco.

    I'm reminded of some discussions I weighed in on 16 ( :-0 )
    years ago on the old Extropians' mailing list. (It's
    2017 -- do you know where your Singularity is?!)


    http://extropians.weidai.com/extropians.2Q01/3898.html
    ------------
    Re: Keeping AI at bay (was: How to help create a singularity)
    May 06 2001

    Eugene.Leitl@lrz.uni-muenchen.de wrote:

    > [C]urrent early precursors of reconfigurable hardware (FPGAs)
    > seem to generate extremely compact, nonobvious solutions even
    > using current primitive evolutionary algorithms.

    But at some point the evolution stops
    (when the FPGA is deemed to have solved the problem), the chip is plugged
    into the system and switched on, and becomes just another piece of
    static hardware. Same with neural networks -- there's a training set
    corresponding to the problem domain, the network is trained on it,
    and then it's plugged into the OCR program (or whatever), shrink-wrapped,
    and sold.

    Still too static, folks, to be a basis for AI. When are we going to have
    hardware with the sort of continual plasticity and dynamism that nerve tissue has?
    (I know it's going to be hard. And, in the meantime, evolved FPGAs
    might have their uses, if people can trust them to be reliable). . .

    ---

    [ http://extropians.weidai.com/extropians.2Q01/3906.html ]

    James Rogers wrote:

    > Give me just one example of something you can do in high-plasticity
    > evolvable hardware that can't be done in software.

    Give **me** an example of just one out of the trillions of instances
    of high-plasticity evolvable hardware running around on this
    planet that's been successfully replicated in software!
    ====
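
    To put that complaint in code: the lifecycle being objected to looks
    roughly like this (a minimal Python sketch; the one-weight "network"
    and the toy training data are hypothetical, of course):

        def train(data, lr=0.1, epochs=100):
            # The only phase with any plasticity: the weights change
            # here, and only here.
            w, b = 0.0, 0.0
            for _ in range(epochs):
                for x, y in data:
                    pred = 1.0 if w * x + b > 0 else 0.0
                    err = y - pred
                    w += lr * err * x
                    b += lr * err
            return w, b  # frozen from here on

        def deploy(w, b):
            # The shipped classifier: a static function, no further learning.
            return lambda x: 1.0 if w * x + b > 0 else 0.0

        clf = deploy(*train([(-1.0, 0.0), (2.0, 1.0)]))
        print(clf(3.0))  # 1.0 -- and it will answer 1.0 forever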


    http://extropians.weidai.com/extropians.2Q01/2311.html
    ------------
    Re: Contextualizing seed-AI proposals
    Apr 14 2001

    > Intelligence ("problem-solving", "stream of consciousness")
    > is built from thoughts. Thoughts are built from structures
    > of concepts ("categories", "symbols"). Concepts are built from
    > sensory modalities. Sensory modalities are built from the
    > actual code.

    Too static, I fear. Also, too dangerously perched on
    the edge of what you have already dismissed as the "suggestively-
    named Lisp token" fallacy.

    Fee, fie, foe, fum.
    Cogito, ergo sum. . .
    ====


    > [W]hen the FPGA is deemed to have solved the problem, the chip is plugged
    > into the system and switched on, and becomes just another piece of
    > static hardware. . .

    Yeah, this is like what happens to Deep Learning (TM) neural networks,
    after they're trained:

    https://singularityhub.com/2017/03/29/google-chases-general-intelligence-with-new-ai-that-has-a-memory/
    ------------
    Google Chases General Intelligence With New AI That Has a Memory
    Shelly Fan
    Mar 29, 2017

    [A]rtificial neural networks like Google’s DeepMind learn to master
    a singular task and call it quits. To learn a new task, it has to reset,
    wiping out previous memories and starting again from scratch.

    This phenomenon, quite aptly dubbed “catastrophic forgetting,”
    condemns our AIs to be one-trick ponies. . .

    ---

    Shelly Xuelai Fan is a neuroscientist at the University of California,
    San Francisco, where she studies ways to make old brains young again.
    In addition to research, she's also an avid science writer with an
    insatiable obsession with biotech, AI and all things neuro. . .
    ====


    I wonder how old Ms. Fan was in 2001.

  10. > I'm reminded of some discussions I weighed in on 16 ( :-0 )
    > years ago on the old Extropians' mailing list. (It's
    > 2017 -- do you know where your Singularity is?!) . . .
    >
    > I wonder how old Ms. Fan was in 2001.

    Oldthinkers unbellyfeel. . .

    https://singularityhub.com/2017/04/05/old-mice-made-young-again-with-new-anti-aging-drug/
    ---------
    Old Mice Made Young Again With New Anti-Aging Drug
    by Shelly Fan
    Apr 05, 2017

    . . .

    [A] collaborative effort between the Erasmus University in the
    Netherlands and the Buck Institute for Research on Aging in California
    may have a solution. Published in the prestigious journal Cell,
    the team developed a chemical torpedo that, after injecting into mice,
    zooms to senescent cells and puts them out of their misery, while
    leaving healthy cells alone. . .
    ====


    I guess this isn't the same thing as got the Young Turks excited
    a few days ago:

    https://www.youtube.com/watch?v=v7aib21s2N8
    ---------
    Harvard Scientists REVERSE Aging In Mice. People Next...
    The Young Turks
    Mar 26, 2017


    Dr. David Sinclair, from Harvard Medical School, and his colleagues
    reveal their new findings in the latest issue of Science. They focused
    on an intriguing compound with anti-aging properties called
    NAD+, short for nicotinamide adenine dinucleotide. . .
    ====

    No mention by the Turks of the hoopla a decade ago about resveratrol
    and SIRT1 activators.
    https://en.wikipedia.org/wiki/Sirtris_Pharmaceuticals

    Me, I'm betting on the Peter Thiel (and Eldritch Palmer)
    page-out-of-Count Dracula approach ;-> .
    ( https://amormundi.blogspot.com/2016/08/william-burroughs-on-peter-thiel.html )

    Hey, does Ray Kurzweil get blood changes these days, or is
    he still just gobbling supplements (including NAD+ ?) and getting his biomarkers
    measured by Dr. Terry Grossman? Inquiring minds. . . Well, come to
    think, I'm not sure I **do** want to know. :-0

  11. > on the old Extropians' mailing list. . .
    >
    > https://singularityhub.com/2017/04/05/old-mice-made-young-again-with-new-anti-aging-drug/
    > ---------
    > Old Mice Made Young Again With New Anti-Aging Drug

    Geez, remember Doug Skrecky and his fruit flies?

    Apparently somebody does:

    https://www.youtube.com/watch?v=oVu0UaJE-s0
    ---------
    Stem Cell life extension formulas. Doug Skrecky
    fruit fly, longevity, anti aging, life extension
    Scott Rauvers
    Apr 17, 2016
    ====

  12. To paraphrase a Great Man: "Nobody knew the world
    could be so complicated."


    To Curb Global Warming, Science Fiction May Become Fact
    Eduardo Porter
    ECONOMIC SCENE
    APRIL 4, 2017
    --------------
    Remember “Snowpiercer”? . . .

    [A]n attempt to engineer the climate and stop global warming
    goes horribly wrong. The planet freezes. Only the passengers
    on a train endlessly circumnavigating the globe survive.
    Those in first class eat sushi and quaff wine [like Tilda Swinton
    http://cdn.moviestillsdb.com/sm/660b9e1c73b116ac128044479780be50/snowpiercer.jpg ].
    People in steerage eat cockroach protein bars.

    Scientists must start looking into this. Seriously. . .

    Let’s get real. The odds that these processes could be slowed,
    let alone stopped, by deploying more solar panels and wind turbines
    seemed unrealistic even before President Trump’s election.
    It is even less likely now that Mr. Trump has gone to work
    undermining President Barack Obama’s strategy to reduce
    greenhouse gas emissions.

    That is where engineering the climate comes in. . .

    [T]he research agenda must include an open, international debate
    about the governance structures necessary to deploy a technology that,
    at a stroke, would affect every society and natural system in the
    world. In other words, geoengineering needs to be addressed not
    as science fiction, but as a potential part of the future just a
    few decades down the road.

    “Today it is still a taboo, but it is a taboo that is crumbling,” . . .

    Arguments against geoengineering are in some ways akin to those
    made against genetically modified organisms and so-called Frankenfood. . .

    [H]ow could the world agree on the deployment of a technology
    that will have different impacts on different countries? How could
    the world balance the global benefit of a cooling atmosphere
    against a huge disruption of the monsoon on the Indian subcontinent?
    Who would make the call? Would the United States agree to this
    kind of thing if it brought drought to the Midwest? Would Russia
    let it happen if it froze over its northern ports?

    Geoengineering would be cheap enough that even a middle-income
    country could deploy it unilaterally. . .

    “The biggest challenge posed by geoengineering is unlikely to be
    technical, but rather involve the way we govern the use of this
    unprecedented technology.” . . .

    People should keep in mind the warning by Alan Robock, a
    Rutgers University climatologist, who argued that the worst case
    from the deployment of geoengineering technologies might
    be nuclear war. . .
    ====


    Geeee oh, oh geee oh.

    https://www.youtube.com/watch?v=SjHNwi0YotA

    Old worms of yesterday. . . unbellyfeel. . . THE WORMHOLE!!!
    http://2.bp.blogspot.com/-xyhAQJtQ8GU/VZa5TuUqIZI/AAAAAAABwYY/g2_Yut5kzZA/s1600/singularity-institute.jpg

    All I want is to be in his movie. . .
    https://www.youtube.com/watch?v=xqztBM1_Vp0

    ;->

  13. https://reddragdiva.tumblr.com/post/159236265808/let-none-say-phyg
    ---------------
    let none say phyg [that's the rot13 encoding of "cult"]

    DustinWehr
    03 April 2017
    [ http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqjq ]

    > A guy I know, who works in one of the top M[achine]L[earning] groups,
    > is literally less worried about superintelligence than he is about
    > getting murdered by rationalists. That’s an extreme POV. Most researchers
    > in ML simply think that people who worry about superintelligence are
    > uneducated cranks addled by sci fi.
    >
    > I hope everyone is aware of that perception problem.

    username2
    05 April 2017
    [ http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqmr ]

    > Are you describing me? It fits to a T except my dayjob isn’t ML.
    > I post using this shared anonymous account here because in the past
    > when I used my real name I received death threats online from
    > L[ess]W[rong] users. In a meetup I had someone tell me to my face
    > that if my AGI project crossed a certain level of capability,
    > they would personally hunt me down and kill me. They were quite serious.
    >
    > I was once open-minded enough to consider AI x-risk seriously.
    > I was unconvinced, but ready to be convinced. But you know what?
    > Any ideology that leads to making death threats against peaceful,
    > non-violent open source programmers is not something I want to let
    > past my mental hygiene filters.
    >
    > If you, the person reading this, seriously care about AI x-risk,
    > then please do think deeply about what causes this, and ask yourself
    > what can be done to put a stop to this behavior. Even if you haven’t
    > done so yourself, it is something about the rationalist community which
    > causes this behavior to be expressed.
    >
    > . . .
    >
    > I would be remiss without laying out my own hypothesis. I believe
    > much of this comes directly from ruthless utilitarianism and the
    > “shut up and multiply” mentality. It’s very easy to justify murder
    > of one individual, or the threat of it even if you are not sure you’d
    > carry it through, if it is offset by some imagined saving of the world.
    > The problem here is that nobody is omniscient, and AI x-riskers are
    > willing to be swayed by utility calculations that in reality have
    > so much uncertainty that they should never be taken seriously. . .
    ====
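
    (And yes, "phyg" really is "cult" under rot13, which rotates each
    letter 13 places round the alphabet -- so encoding twice is the
    identity. A quick check in Python:)

        import codecs

        # 13 + 13 = 26, so the same call both encodes and decodes.
        assert codecs.encode("cult", "rot13") == "phyg"
        assert codecs.encode("phyg", "rot13") == "cult"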

  14. > https://reddragdiva.tumblr.com/post/159236265808/let-none-say-phyg
    > ---------------
    > let none say phyg [that's the rot13 encoding of "cult"]

    Cf.

    https://amormundi.blogspot.com/2014/10/robocultic-kack-fight.html
    ---------------
    Back in 2004, one Michael Wilson had materialized as an insider
    in SIAI. . . circles. . . At one point, he made a post
    [on the S(hock)L(evel)4 mailing list (an Eliezer Yudkowsky-owned forum)]
    in which he castigated himself. . . for having "almost destroyed
    the world last Christmas" as a result of his own attempts to "code an AI",
    but now that he had seen the light (as a result of SIAI's propaganda) he
    would certainly be more cautious in the future. (Of course, no
    one on the list seemed to find his remarks particularly
    outrageous. . .) . . . I sincerely hope that we can solve these problems
    [of AI "Friendliness"], stop Ben Goertzel and his army of evil clones
    (I mean emergence-advocating AI researchers :) and engineer the apotheosis. . .

    (http://www.sl4.org/archive//0404/8401.html
    http://sl4.org/wiki/Starglider )

    The smiley in the above did not reassure me.
    ====

    https://amormundi.blogspot.com/2009/04/lets-talk-about-cultishness.html
    ---------------
    In the **absolute worst case** scenario I can imagine,
    a genuine lunatic F[riendly]AI-ite will take up the Unabomber's
    tactics, sending packages like the one David Gelernter
    got in the mail.
    ====

    https://amormundi.blogspot.com/2013/01/a-robot-god-apostles-creed-for-less.html
    ---------------
    [Ben Goertzel wrote on LessWrong]: After I wrote that blog post
    ["The Singularity Institute's Scary Idea"
    http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html ],
    Michael Anissimov -- a long-time SIAI staffer and zealot whom I
    like and respect greatly -- told me he was going to write up and
    show me a systematic, rigorous argument as to why “an AGI not built
    based on a rigorous theory of Friendliness is almost certain to
    kill all humans” (the proposition I called “SIAI’s Scary Idea”).
    But he hasn’t followed through on that yet -- and neither has
    Eliezer or anyone associated with SIAI. . .
    ====

  15. > It’s very easy to justify murder of one individual, or the threat
    > of it even if you are not sure you’d carry it through, if it is
    > offset by some imagined saving of the world.

    I wrote to one of these folks, back in 2003
    (via https://amormundi.blogspot.com/2009/05/advice-to-shaken-robot-cultist.html ):

    > . . .I think it's important for you to understand its implications
    > (though I have little hope that you will).
    >
    > If the Singularity is the fulcrum determining humanity's
    > future, and **you** are the fulcrum of the Singularity,
    > the point at which dy/dx -> infinity, the very inflection
    > point itself, then **ALL** morality goes out the window.
    >
    > You might as well be dividing by zero.
    >
    > You could justify **anything** on that basis. . .
    >
    > The more hysterical things seem, the more desperate,
    > the more apocalyptic, the more the discourse **and**
    > moral valences get distorted (a singularity indeed!)
    > by the weight of importance bearing down on one human
    > pair of shoulders. Which happens to belong to you (what
    > a coincidence).
    >
    > Don't go there. . . Back slowly away from the precipice.
    > Before it's too late.

    To which my interlocutor replied:

    > > You could justify **anything** on that basis
    >
    > No, *you* could justify anything on that basis. I am much more careful
    > with my justifications. . .
    >
    > Ethics doesn't change as the stakes go to infinity.


    So people have gotten death threats. No surprise there, I guess.

    At least, as far as I know, nobody has yet **died** as a result
    of this nonsense (by their own or somebody else's hand). Which is
    more, I guess, than can be said for Scientology (or Mormonism).

  16. I notice that one of the commenters in the thread at
    http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/
    is one "Dagon".

    I wonder if this is the same "Dagon" who was an occasional commenter
    here back in '09 (and who got a special mention in
    https://amormundi.blogspot.com/2009/05/well-isnt-that-special.html ).

    Likely enough, I suppose -- the "Dagon" in the OpenAI thread
    on LW has been posting there for at least a decade (posts from back
    in '07 and the recent comment link to the same LW user overview).

    https://amormundi.blogspot.com/2009/05/well-isnt-that-special.html
    ------------
    [Dagon wrote, in an excerpt from a comment on Giulio Prisco's
    blog] It is frustrating to know that whereas I feel as secure
    in my h+ist convictions as I can possibly be, it will take
    decades to have him eat his shoe. It would be very amusing to
    have a singularity in 2012, if only to read the comments Dale
    makes about it. . .
    ====

    Cf.

    https://amormundi.blogspot.com/2009/04/its-more-than-fun-to-ridicule.html
    ------------
    "Four Years Later"
    Date: Fri Apr 19 2002
    http://www.sl4.org/archive/0204/3384.html

    The date is April 19, 2006 and the world is on the verge of something
    wonderful. The big news of the last twelve months is the phenomenal success
    of Ben Goertzel's Novamente program. It has become a super tool for solving
    complex problems. . . "[M]iracle" cures for one major disease after
    another are being produced on almost a daily basis. . .
    [T]he success of the Novamente system has made
    Ben Goertzel rich and famous, making frequent appearances on the talk show
    circuit as well as visits to the White House. One surprise is the fact that
    the System was unable to offer any useful advice to the legal team that
    narrowly fended off the recent hostile takeover attempt by IBM. The
    Novamente phenomen[on] has triggered an explosion of public interest and
    research in AI. Consequently, the non-profit organization The Singularity
    Institute for Artificial Intelligence has been buried under an avalanche of
    donations. In their posh new building in Atlanta we find Eliezer working
    with the seedai system of his own design. . .
    ====

    Any day now. Start tenderizing those shoes. :-/

  17. > > . . .Dale Carrico . . . Acrid Oracle . . .
    >
    > Now see, that's the kind of thing a contemporary "AI"
    > **can** do. Permute all the letters, and then consult
    > a dictionary to see which substrings are real
    > words.

    As described by Jonathan Swift, almost 300 years ago:

    http://andromeda.rutgers.edu/~hbf/compulit.htm
    ------------
    COMPUTERS IN FICTION
    by H. Bruce Franklin
    [This essay originally appeared in Encyclopedia of Computer Science
    (Nature Publishing Group, 2000)]

    . . .

    To formulate a coherent history of computers in fiction,
    the best place to begin may be Jonathan Swift's Gulliver's Travels,
    published in 1726. Swift presents an inventor who has constructed
    a gigantic machine designed to allow "the most ignorant Person"
    to "write Books in Philosophy, Poetry, Politicks, Law, Mathematicks and Theology."
    This "Engine" contains myriad "Bits" crammed with all the words of a language,
    "all linked together by slender Wires" that can be turned by cranks,
    thus generating all possible linguistic combinations. Squads of
    scribes produce hard copy by recording any sequence of words that
    seems to make sense. . .
    ====


    Whatever the source of the human obsession with artificial life and
    artificial mind -- whether created by means of clockwork automata, stitching
    together parts of corpses and zapping them to life with lightning, or
    reciting magic spells to animate clay or marble effigies (Golems or Galateas) --
    it really is rather amazing to consider just how old the dream (or the nightmare) is.
    Thousands of years old. All bound up with the endlessly fascinating
    (and terrifying) border between life and death, the fear of death
    (and especially of things that were once alive but are now dead,
    or things that look like they might be alive but are really dead),
    and ghosts and vampires and all the other furniture of horror literature
    and bad dreams.

    All well antedating the digital computer. The latest technology just
    seems (if you don't think too hard about it) to put the old
    fantasies on a new-fangled, "scientific" footing. And to give
    overly susceptible folks a new reason to scare themselves into
    insomnia. ;->
