Sunday, August 21, 2016

Fraudsters Aren't Fabulous

Tech billionaires like Thiel, Musk, and Branson hawking immortality, robot gods and Martian escape hatches aren't glamorous Bond villains, people; they're tacky techno-televangelists.

8 comments:

  1. > Tech billionaires like. . . Musk. . . hawking. . . Martian escape hatches

    Via
    http://amormundi.blogspot.com/2010/05/robot-cultists-have-won.html
    ------------
    I was browsing in SF author Charlie Stross's blog the other day,
    and I came across his rather saturnine. . .
    analysis from three years ago of the prospects for interstellar
    travel and of colonization within our own solar system.

    The article generated over 800 replies, mostly of shrieking protest
    of the kind familiar from the responses of >Hists to Dale's blog.

    Here's a thumbnail of the article:

    http://www.antipope.org/charlie/blog-static/2007/06/the-high-frontier-redux.html
    +++++++++++
    The High Frontier, Redux

    . . .I write SF for a living. Possibly because of this, folks seem to think
    I ought to be an enthusiastic proponent of space exploration and space
    colonization. . .

    The long and the short of what I'm trying to get across is quite simply that,
    in the absence of technology indistinguishable from magic — magic tech that,
    furthermore, does things that from today's perspective appear to play fast
    and loose with the laws of physics — interstellar travel for human beings
    is near-as-dammit a non-starter. . .

    What about our own solar system?

    After contemplating the vastness of interstellar space, our own solar
    system looks almost comfortingly accessible at first. . .

    But when we start examining the prospects for interplanetary colonization
    things turn gloomy again. . .

    Colonize the Gobi desert, colonise the North Atlantic in winter — then get
    back to me about the rest of the solar system!
    ####

    and here's a characteristic response by Stross:

    +++++++++++
    Charlie Stross | June 17, 2007 17:30

    117:

    Matt @105:

    > I was quite disappointed with your latest rant, it seems you must
    > have had a very bad week and perhaps a brain tumor. How else to
    > imagine why a science fiction author would so publicly, stridently
    > and logically tear to shreds the hopes of anyone in space travel
    > that you yourself have helped to kindle? And with such... zest?

    ... Because I dislike willful ignorance and I hate being told
    comforting lies.

    In a nutshell -- and my third [non-introductory] paragraph should
    have been a honking great flashing neon Times Square-sized sign --
    the space settler enthusiasts have basically swallowed a cartload
    of ideologically weighted propaganda, cunningly combined with emotive
    appeals to abstract (and thus unfalsifiable) ideals. Your use of
    the phrase "the high frontier" is itself a telling one -- and you
    use the term "frontier" repeatedly. Then you start going on about
    indoctrinating impressionable young minds to "absorb vast perspectives
    and faith in humanity and science" as if you think I've got some
    quasi-mystical **duty** to teach Ideologically Correct
    Gerard K. O'Neill Thought, and by implication, any kid who **doesn't**
    buy what is effectively a collectivist pie-in-the-sky daydream is
    deficient, unimaginative, and foolish, and any SF writer who
    refuses to pander to this political creed is evil and wrong.

    I don't like being told what thoughts I'm allowed to think. I like to
    **question assumptions**. And this is just the result of my interrogating
    some of the assumptions underlying space opera, using the toolkit of
    Hard Science Fiction -- i.e., trust the numbers. You can take it as a
    default likely outcome. . .

    Michael @110: the sad thing is, I think a whole lot of them really
    believe it. As in, they **believe**. It's not rationally grounded
    optimism with an underpinning of facts, it's religion in disguise.
    ====
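
    For what it's worth, "trust the numbers" is easy to do at the
    back-of-the-envelope level. Here's a small Python sketch of the
    kinetic-energy problem alone; the ship mass (2,000 tonnes) and cruise
    speed (10% of lightspeed) are round numbers of my own choosing for
    illustration, not Stross's figures:

        # Back-of-the-envelope energetics for a crewed interstellar trip.
        # Assumptions (mine, illustrative only): a 2,000-tonne ship
        # cruising at 0.1 c to Proxima Centauri, about 4.24 light-years out.
        import math

        C = 299_792_458.0            # speed of light, m/s
        MEGATON_TNT = 4.184e15       # joules per megaton of TNT

        ship_mass_kg = 2.0e6         # 2,000 tonnes (assumed)
        cruise_speed = 0.1 * C
        distance_ly = 4.24

        gamma = 1.0 / math.sqrt(1.0 - (cruise_speed / C) ** 2)
        kinetic_energy = (gamma - 1.0) * ship_mass_kg * C ** 2

        print(f"one-way trip time: {distance_ly / 0.1:.0f} years "
              f"(ignoring acceleration)")
        print(f"kinetic energy at cruise: {kinetic_energy:.2e} J "
              f"(~{kinetic_energy / MEGATON_TNT:,.0f} megatons of TNT)")

    That energy has to be put into the ship somehow, and shed again at the
    other end, before anyone even asks about shielding, spares, or what the
    crew eats for forty-odd years. Hence "near-as-dammit a non-starter".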

  2. http://jalopnik.com/airbus-flying-car-concept-makes-the-same-mistake-as-eve-1785587681
    ---------------------
    Airbus' Flying Car Concept Makes The Same Mistake
    As Every Other Flying Car
    Raphael Orlove

    Airbus’ new driverless airborne taxi/gigantic drone concept looks great!
    It’s so cool to see a major air company work on what’s basically a
    flying car. Oh, wait, does this thing pass the two-year test?

    http://paleofuture.gizmodo.com/flying-cars-are-just-two-years-from-reality-_-_-1603669281
    +++++++++
    Flying Cars Are Just Two Years From Reality ¯\_(ツ)_/¯
    Matt Novak
    7/11/14

    Another day, another story about how flying cars are just two years away.
    Funny how they're always just two years away. . .
    ####

    . . .

    The two-year test, if you’re not familiar, is that every single maker
    of a flying car claims that their work is just two years away. This
    is a point of humor to those who follow the flying car quasi-industry,
    as literally every single attempted project of the past decade has
    either never made it off the ground or crashed if it did.

    As it turns out, producing a working, reliable, full-sized, FAA-approved
    flying vehicle on the scale and usability of an automobile is nigh-on
    impossible. They’re either too much like planes that are bad at driving,
    too much like cars that are bad at flying, or in the case of these new
    big boy drones, they don’t have the battery power to get anywhere.
    This leaves out the major issue of how difficult it is to manage all
    of these flying vehicles in the air over our cities without them hitting
    each other and crash landing onto our heads.

    http://gizmodo.com/why-flying-cars-are-difficult-and-dumb-1687179423
    +++++++++
    Why Flying Cars Are Difficult And Dumb
    Chris Mills
    2/21/15

    By this stage, it's fairly clear that flying cars aren't going to
    happen any time soon, despite what the media might want to say. And
    there's a simple reason for that — the whole concept of flying cars
    is pretty stupid in the first place.

    Vsauce uses this video
    [ https://www.youtube.com/watch?v=AYp8nCGzpiA
    Where's Our Future Technology? -- Thought Glass #10 ]
    to explain why a number of futuristic technologies — flying cars,
    teleportation, and space colonies — aren't quite here yet. It's slightly
    depressing to hear the long list of problems standing between us
    and Beam Me Up Scotty, but I'm sure science will come good in the end.
    ####

    Everything from Terrafugia to Moller to now Airbus has been saying that
    their work is just around the corner
    [ http://gizmodo.com/according-to-airbus-a-flying-car-reality-is-just-aroun-1785545526
    According to Airbus, A Flying Car Reality Is Just Around The Corner
    Carli Velocci ],
    always close enough to make the headlines, always far enough
    away so that nobody holds them too accountable when the project
    gets caught up in endless delays.

    Airbus’ work doesn’t look any different.

    ---

    GrannyShifter
    8/22/16

    > The two-year test, if you’re not familiar. . .

    If I had a nickel for every time one of my non-tech friends tried to
    tell me about some radical new technology that's just two years away
    from being available, I'd be rich. I tell them all the same thing.

    Some examples:

    Battery technology that offers high capacity and super fast charging

    Affordable, long range electric cars

    Safe, practical, super cheap cars

    Power sources that will replace gas in cars

    Autonomous cars

    Flexible/wearable screens

    Hollywood-style holographic interfaces

    Jet packs

    BTTF-style hoverboards

    Implantable tech

    VR Motorcycle Helmets. (I’m working on this one myself.
    Should be ready in about...2 years).
    ====
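
    The "battery power" point in the Jalopnik piece above is also easy to
    put a rough number on. Here's a quick momentum-theory sketch in Python;
    the vehicle mass, rotor area, battery size and efficiency are all
    assumptions of mine for illustration, not figures from the Airbus
    concept:

        # Ideal hover power from actuator-disk (momentum) theory:
        #   P_ideal = T**1.5 / sqrt(2 * rho * A),  with thrust T = m * g.
        # All figures below are illustrative assumptions, not Airbus specs.
        import math

        mass_kg = 1000.0        # vehicle + passengers + battery (assumed)
        g = 9.81                # m/s^2
        rho = 1.225             # sea-level air density, kg/m^3
        disk_area_m2 = 10.0     # total rotor disk area (assumed)
        efficiency = 0.6        # lumped rotor/motor/electronics losses (assumed)

        battery_kg = 300.0      # assumed battery mass
        wh_per_kg = 250.0       # roughly today's automotive-grade cells

        thrust = mass_kg * g
        p_hover_w = (thrust ** 1.5 / math.sqrt(2 * rho * disk_area_m2)) / efficiency
        endurance_min = (battery_kg * wh_per_kg) / p_hover_w * 60

        print(f"hover power needed: {p_hover_w / 1000:.0f} kW")
        print(f"hover endurance:    {endurance_min:.0f} minutes (no reserve)")

    Forward flight is cheaper than hovering, and wings help, but the order
    of magnitude is the point: a car-sized battery buys you minutes aloft,
    not an all-day taxi shift.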


    Hey, there's one prediction from the _Popular Science_ rags of my
    youth that has come absolutely, spectacularly true -- flat-screen TVs.

    If human technological civilization crashes and burns, let
    this stand as an epitaph we can all be proud of --

    THEY HAD **REALLY GREAT TVs**

  3. So this seems like a pretty reasonable article (from your Twitter
    feed):

    https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
    ----------------
    Should we be afraid of AI?

    Machines seem to be getting smarter and smarter and much
    better at human jobs, yet true AI is utterly implausible. Why?

    Luciano Floridi
    9 May, 2016

    [E]vil, ultra-intelligent machines. . . [are] an old fear.
    It dates to the 1960s, when Irving John Good, a British
    mathematician who worked as a cryptologist at Bletchley Park
    with Alan Turing, made the following observation:

    > Let an ultraintelligent machine be defined as a machine
    > that can far surpass all the intellectual activities of any
    > man however clever. Since the design of machines is one of these
    > intellectual activities, an ultraintelligent machine could
    > design even better machines; there would then unquestionably
    > be an ‘intelligence explosion’, and the intelligence of man
    > would be left far behind. Thus the first ultra-intelligent
    > machine is the last invention that man need ever make, provided
    > that the machine is docile enough to tell us how to keep it
    > under control. It is curious that this point is made so seldom
    > outside of science fiction. It is sometimes worthwhile to take
    > science fiction seriously.

    . . .

    [T]he amazing developments in our digital technologies have led
    many people to believe that Good’s ‘intelligence explosion’ is
    a serious risk, and the end of our species might be near,
    if we’re not careful. This is Stephen Hawking in 2014:

    > The development of full artificial intelligence could spell
    > the end of the human race.

    Last year, Bill Gates was of the same view:

    > I am in the camp that is concerned about superintelligence.
    > First the machines will do a lot of jobs for us and not be
    > superintelligent. That should be positive if we manage it well.
    > A few decades after that, though, the intelligence is strong enough
    > to be a concern. I agree with Elon Musk and some others on this,
    > and don’t understand why some people are not concerned.

    And what had Musk, Tesla’s CEO, said?

    > We should be very careful about artificial intelligence. If I
    > were to guess what our biggest existential threat is, it’s probably
    > that. . . Increasingly, scientists think there should be some
    > regulatory oversight maybe at the national and international level,
    > just to make sure that we don’t do something very foolish.
    > With artificial intelligence, we are summoning the demon. In all
    > those stories where there’s the guy with the pentagram and the
    > holy water, it’s like, yeah, he’s sure he can control the demon.
    > Didn’t work out.

    . . .

    [In t]he current debate about AI. . . the dichotomy is between those
    who believe in true AI and those who do not. Yes, the real thing,
    not Siri in your iPhone, Roomba in your living room, or Nest in
    your kitchen. . . Think instead of the false Maria in _Metropolis_ (1927);
    Hal 9000 in _2001: A Space Odyssey_ (1968), on which Good was one of the
    consultants; C3PO in _Star Wars_ (1977); Rachael in _Blade Runner_ (1982);
    Data in _Star Trek: The Next Generation_ (1987); Agent Smith in _The Matrix_ (1999)
    or the disembodied Samantha in _Her_ (2013). [Wot, no Ava in _Ex Machina_ (2015)?]. . .
    Believers in true AI and in Good’s ‘intelligence explosion’ belong to the
    Church of Singularitarians. . . For lack of a better term, I shall refer
    to the disbelievers as members of the Church of AItheists. Let’s have a
    look at both faiths and see why both are mistaken. . .
    ====

  4. > Believers in true AI and in Good’s ‘intelligence explosion’ belong to the
    > Church of Singularitarians. . . For lack of a better term, I shall refer
    > to the disbelievers as members of the Church of AItheists. Let’s have a
    > look at both faiths and see why both are mistaken. . .

    Floridi continues:

    Op. cit.
    --------------------
    Deeply irritated by those who worship the wrong digital gods,
    and by their unfulfilled Singularitarian prophecies, disbelievers –
    AItheists – make it their mission to prove once and for all that
    any kind of faith in true AI is totally wrong. AI is just
    computers, computers are just Turing Machines, Turing Machines
    are merely syntactic engines, and syntactic engines cannot think,
    cannot know, cannot be conscious. End of story. . .
    ====

    But then he **also** says:

    Op. cit.
    --------------------
    Plenty of machines can do amazing things, including playing checkers,
    chess and Go and the quiz show Jeopardy better than us. And yet
    they are all versions of a Turing Machine, an abstract model that
    sets the limits of what can be done by a computer through its mathematical
    logic.

    Quantum computers are constrained by the same limits, the limits
    of what can be computed (so-called computable functions). No conscious,
    intelligent entity is going to emerge from a Turing Machine. . .
    ====

    The above sounds a lot like AItheism to me, or at least GOFAItheism.

    On the other hand, I myself am willing to concede that, if you
    had an immensely powerful computer (for some value of "immensely" --
    certainly orders and orders of magnitude beyond anything
    available today or even **foreseen** today, for that matter), you
    might be able to couple a digital computer **simulating**
    a non-Turing "machine" like the brain with some source of
    stochasticity (maybe even real quantum-uncertainty-derived
    noise) and get something that behaves "intelligently" the way
    biological organisms, including humans, behave "intelligently".
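
    Just to make that hand-wave slightly more concrete, here's a toy sketch
    (mine, purely illustrative, and about as far from a brain as you can
    get): a deterministic numerical simulation of a single "neuron-like"
    element with an external noise source injected into it. The random
    number generator stands in for whatever stochastic source you prefer
    to imagine, thermal or quantum or otherwise:

        # A leaky integrate-and-fire neuron driven just below threshold,
        # so that only the injected noise makes it spike at all.
        import numpy as np

        rng = np.random.default_rng(0)   # stand-in for a "real" noise source

        dt, tau = 0.1, 20.0              # time step and membrane constant, ms
        v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0   # mV
        drive = 14.0                     # constant input; keeps v just under threshold
        noise_std = 3.0                  # noise amplitude, mV per sqrt(ms)

        v = v_rest
        spike_times = []
        for step in range(10_000):       # one second of simulated time
            noise = noise_std * np.sqrt(dt) * rng.standard_normal()
            v += dt * (-(v - v_rest) + drive) / tau + noise
            if v >= v_thresh:
                spike_times.append(step * dt)
                v = v_reset

        print(f"{len(spike_times)} noise-driven spikes in one simulated second")

    Nothing in there is "thinking", of course; the point is only that
    "deterministic simulation plus a stochastic source" is trivially easy
    to build. Everything hard is hidden in the phrase "simulating a
    non-Turing 'machine' like the brain".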

    We can both agree on this, though:

    Op. cit.
    --------------------
    True AI is not logically impossible, but it is utterly implausible.
    We have no idea how we might begin to engineer it, not least because
    we have very little understanding of how our own brains and
    intelligence work. This means that we should not lose sleep over
    the possible appearance of some ultraintelligence.
    ====

    (Presumably the illustration is meant to suggest that an AI threatening
    Manhattan is as implausible as a giant eggplant threatening
    Manhattan. ;-> )


    "Luciano Floridi

    is professor of philosophy and ethics of information at the
    University of Oxford, and a Distinguished Research Fellow
    at the Uehiro Centre for Practical Ethics. . ."


    I wonder if he's on speaking terms with Nick Bostrom. ;->
    (The latter does not seem to be mentioned in the article.)

  5. http://shituationist.tumblr.com/post/129355512136/your-constant-frustration-with-peoples
    ---------------
    Sep 18th, 2015

    johnbrownsbodyy asked:

    > your constant frustration with people's overestimation of AI
    > is extremely amusing

    It’s one of those things where it’s a major part of our everyday
    lives – anyone who’s ever used Google Translate has used an
    artificial intelligence program, the police use artificial intelligence
    to ‘predict’ where crime is going to occur – but the specter of
    “strong AI” prevents us from even noticing, or examining it critically.
    This is basically the topic of the article I’m writing for Mask Mag.

    Most of the people in the strong AI camp aren’t academics, and aren’t
    involved in actual AI research. They’re “philosophers” like Yudkowsky
    or actually accomplished engineers like Kurzweil who are just
    (to paraphrase Jaron Lanier) really scared of death. Otherwise, they’re
    like Marvin Minsky, whose research in cognitive science was pioneering
    at first, but whose insistence against using neuroscience to understand
    consciousness (being more radical in this regard than the humanist
    Raymond Tallis) has probably held back cognitive science by a few
    decades. They see AI the same way alchemists saw their practice, as a
    way to potentially cheat death.

    The philosophical assumption behind the research program, that the
    human mind can be reduced to an algorithm, has no basis in reality.
    The argument usually given in response to pointing out that we've
    never found an algorithm for general AI, and probably never will, is
    the creationist's response to an atheist pointing out that we've
    never found any evidence for God. "Well, that doesn't mean we won't!"
    Strong AI is a degenerating research program, and the delusional ravings
    of Yudkowsky, the failed predictions of Kurzweil and the reactionary
    attitudes toward developments in cognitive science by Marvin Minsky are
    all indicative of this. AI has left strong AI behind and is now focused
    on doing what AI has always been good at: one particular thing at a
    time. Image recognition programs don't need to be conscious, machine
    translators don't need image recognition, etc. The general tendency in
    AI today is to augment human beings' ability (not in the Deus Ex sense
    so much as in the World Wide Web sense) to acquire and disseminate
    knowledge. This itself deserves critical analysis, because of its actual
    applications by disciplinary institutions like schools and the police,
    but it is a different thing entirely from some conscious malevolent AI
    causing nuclear warfare because that's what Yudkowsky would do if he
    were totally rational, or some LessWrong bullshit.
    ====
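
    For what it's worth, the "one particular thing at a time" point is easy
    to see in code. A few lines of scikit-learn (my example, nothing to do
    with the poster) get you a perfectly serviceable digit recogniser that
    knows nothing whatsoever outside of 8x8 grayscale digits:

        # Narrow AI in miniature: a model that does exactly one thing.
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        digits = load_digits()           # 1,797 tiny images of handwritten digits
        x_train, x_test, y_train, y_test = train_test_split(
            digits.data, digits.target, test_size=0.25, random_state=0)

        model = LogisticRegression(max_iter=2000)   # plain multinomial regression
        model.fit(x_train, y_train)

        print(f"held-out accuracy: {model.score(x_test, y_test):.1%}")
        # roughly 95%+ on this one task, and utterly useless for anything else

    Whether that sort of thing deserves the word "intelligence" at all is,
    of course, exactly what's in dispute above.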

  6. http://argumate.tumblr.com/post/144026695339/critics-of-lesswrong-or-the-so-called-rationalist
    ---------------
    May 8th, 2016

    Critics of LessWrong or the so-called Rationalist movement probably
    have various people in mind like Eliezer Yudkowsky, Robin Hanson,
    or Peter Thiel and the Silicon Valley venture capitalist community.
    But surveys suggest that the median member of the community is
    more likely to be a 20-something autistic trans girl suffering
    from depression and pursuing STEM studies. Any critiques that
    don’t take this into account may end up being misinterpreted.
    ====


    Oh. So, dare I ask, what might be the connection between that, and this:


    https://www.youtube.com/watch?v=htqOIjzi-jE
    ---------------
    Cult Behaviour: An Analysis
    Sargon of Akkad
    Aug 17, 2016

    An analysis of Dr. Arthur Deikman's book on cult behaviour,
    _The Wrong Way Home_.
    ====


    https://www.youtube.com/watch?v=pxO_UWr43Rw
    ---------------
    Cult Case Studies
    Sargon of Akkad
    Aug 18, 2016

    Remind you of anyone?
    ====

  7. > https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
    >
    > Should we be afraid of AI?
    >
    > Presumably the illustration is meant to suggest that an AI threatening
    > Manhattan is as implausible as a giant eggplant threatening
    > Manhattan. ;-> )

    Or, uh, Chicago.

    https://www.youtube.com/watch?v=R04bwQ4EC2Y
    ---------
    The Eggplant That Ate Chicago
    Norman Greenbaum (1967)
    ====

    . . . if he's still hungry, the whole country's doomed.

    (I'm not sure I've heard this **since** 1967.)

  8. This vegetarian finds eggplant quite disgusting.
