Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, August 21, 2016

Fraudsters Aren't Fabulous

Tech billionaires like Thiel, Musk and Branson hawking immortality, robot gods and Martian escape hatches aren't glamorous Bond Villains, people, they're tacky techno-televangelists.


jimf said...

> Tech billionaires like. . . Musk. . . hawking. . . Martian escape hatches

I was browsing in SF author Charlie Stross's blog the other day,
and I came across his rather saturnine. . .
analysis from three years ago of the prospects for interstellar
travel and of colonization within our own solar system.

The article generated over 800 replies, mostly shrieking protests
of the kind familiar from the responses of >Hists to Dale's blog.

Here's a thumbnail of the article:
The High Frontier, Redux

. . .I write SF for a living. Possibly because of this, folks seem to think
I ought to be an enthusiastic proponent of space exploration and space
colonization. . .

The long and the short of what I'm trying to get across is quite simply that,
in the absence of technology indistinguishable from magic — magic tech that,
furthermore, does things that from today's perspective appear to play fast
and loose with the laws of physics — interstellar travel for human beings
is near-as-dammit a non-starter. . .

What about our own solar system?

After contemplating the vastness of interstellar space, our own solar
system looks almost comfortingly accessible at first. . .

But when we start examining the prospects for interplanetary colonization
things turn gloomy again. . .

Colonise the Gobi desert, colonise the North Atlantic in winter — then get
back to me about the rest of the solar system!

and here's a characteristic response by Stross:

Charlie Stross | June 17, 2007 17:30


Matt @105:

> I was quite disappointed with your latest rant, it seems you must
> have had a very bad week and perhaps a brain tumor. How else to
> imagine why a science fiction author would so publicly, stridently
> and logically tear to shreds the hopes of anyone in space travel
> that you yourself have helped to kindle? And with such... zest?

... Because I dislike willful ignorance and I hate being told
comforting lies.

In a nutshell -- and my third [non-introductory] paragraph should
have been a honking great flashing neon Times Square sized sign --
the space settler enthusiasts have basically swallowed a cartload
of ideologically weighted propaganda, cunningly combined with emotive
appeals to abstract (and thus unfalsifiable) ideals. Your use of
the phrase "the high frontier" is itself a telling one -- and you
use the term "frontier" repeatedly. Then you start going on about
indoctrinating impressionable young minds to "absorb vast perspectives
and faith in humanity and science" as if you think I've got some
quasi-mystical **duty** to teach Ideologically Correct
Gerard K. O'Neill Thought, and by implication, any kid who **doesn't**
buy what is effectively a collectivist pie-in-the-sky daydream is
deficient, unimaginative, and foolish, and any SF writer who
refuses to pander to this political creed is evil and wrong.

I don't like being told what thoughts I'm allowed to think. I like to
**question assumptions**. And this is just the result of my interrogating
some of the assumptions underlying space opera, using the toolkit of
Hard Science Fiction -- i.e., trust the numbers. You can take it as a
default likely outcome. . .

Michael @110: the sad thing is, I think a whole lot of them really
believe it. As in, they **believe**. It's not rationally grounded
optimism with an underpinning of facts, it's religion in disguise.

jimf said...
Airbus' Flying Car Concept Makes The Same Mistake
As Every Other Flying Car
Raphael Orlove

Airbus’ new driverless airborne taxi/gigantic drone concept looks great!
It’s so cool to see a major air company work on what’s basically a
flying car. Oh, wait, does this thing pass the two year test?
Flying Cars Are Just Two Years From Reality ¯\_(ツ)_/¯
Matt Novak

Another day, another story about how flying cars are just two years away.
Funny how they're always just two years away. . .

. . .

The two-year test, if you’re not familiar, is that every single maker
of a flying car claims that their work is just two years away. This
is a point of humor to those who follow the flying car quasi-industry,
as literally every single attempted project of the past decade has
either never made it off the ground or crashed if it did.

As it turns out, producing a working, reliable, full-sized, FAA-approved
flying vehicle on the scale and usability of an automobile is nigh-on
impossible. They’re either too much like planes that are bad at driving,
too much like cars that are bad at flying, or in the case of these new
big boy drones, they don’t have the battery power to get anywhere.
This leaves out the major issue of how difficult it is to manage all
of these flying vehicles in the air over our cities without them hitting
each other and crash landing onto our heads.
Why Flying Cars Are Difficult And Dumb
Chris Mills

By this stage, it's fairly clear that flying cars aren't going to
happen any time soon, despite what the media might want to say. And
there's a simple reason for that — the whole concept of flying cars
is pretty stupid in the first place.

Vsauce uses this video
[ Where's Our Future Technology? -- Thought Glass #10 ]
to explain why a number of futuristic technologies — flying cars,
teleportation, and space colonies — aren't quite here yet. It's slightly
depressing to hear the long list of problems standing between us
and Beam Me Up Scotty, but I'm sure science will come good in the end.

Everything from Terrafugia to Moller to now Airbus has been saying that
their work is just around the corner
[ According to Airbus, A Flying Car Reality Is Just Around The Corner
Carli Velocci ],
always close enough to make the headlines, always far enough
away so that nobody holds them too accountable when the project
gets caught up in endless delays.

Airbus’ work doesn’t look any different.



> The two-year test, if you’re not familiar. . .

If I had a nickel for every time one of my non-tech friends told
me about some radical new technology that is two years away from
being available, I'd be rich. In any case, I tell them all the same thing.

Some examples:

Battery technology that offers high capacity and super fast charging

Affordable, long range electric cars

Safe, practical, super cheap cars

Power sources that will replace gas in cars

autonomous cars

Flexible/wearable screens

Hollywood style Holographic interfaces

Jet packs

BTTF style Hover boards

Implantable tech

VR Motorcycle Helmets. (I’m working on this one myself.
Should be ready in about...2 years).

Hey, there's one piece of prediction from the _Popular Science_
rags of my youth that has come absolutely, spectacularly
true -- flat-screen TVs.

If human technological civilization crashes and burns, let
this stand as an epitaph we can all be proud of --


jimf said...

So this seems like a pretty reasonable article (from your Twitter feed):
Should we be afraid of AI?

Machines seem to be getting smarter and smarter and much
better at human jobs, yet true AI is utterly implausible. Why?

Luciano Floridi
9 May, 2016

[E]vil, ultra-intelligent machines. . . [are] an old fear.
It dates to the 1960s, when Irving John Good, a British
mathematician who worked as a cryptologist at Bletchley Park
with Alan Turing, made the following observation:

> Let an ultraintelligent machine be defined as a machine
> that can far surpass all the intellectual activities of any
> man however clever. Since the design of machines is one of these
> intellectual activities, an ultraintelligent machine could
> design even better machines; there would then unquestionably
> be an ‘intelligence explosion’, and the intelligence of man
> would be left far behind. Thus the first ultra-intelligent
> machine is the last invention that man need ever make, provided
> that the machine is docile enough to tell us how to keep it
> under control. It is curious that this point is made so seldom
> outside of science fiction. It is sometimes worthwhile to take
> science fiction seriously.

. . .

[T]he amazing developments in our digital technologies have led
many people to believe that Good’s ‘intelligence explosion’ is
a serious risk, and the end of our species might be near,
if we’re not careful. This is Stephen Hawking in 2014:

> The development of full artificial intelligence could spell
> the end of the human race.

Last year, Bill Gates was of the same view:

> I am in the camp that is concerned about superintelligence.
> First the machines will do a lot of jobs for us and not be
> superintelligent. That should be positive if we manage it well.
> A few decades after that, though, the intelligence is strong enough
> to be a concern. I agree with Elon Musk and some others on this,
> and don’t understand why some people are not concerned.

And what had Musk, Tesla’s CEO, said?

> We should be very careful about artificial intelligence. If I
> were to guess what our biggest existential threat is, it’s probably
> that. . . Increasingly, scientists think there should be some
> regulatory oversight maybe at the national and international level,
> just to make sure that we don’t do something very foolish.
> With artificial intelligence, we are summoning the demon. In all
> those stories where there’s the guy with the pentagram and the
> holy water, it’s like, yeah, he’s sure he can control the demon.
> Didn’t work out.

. . .

[In t]he current debate about AI. . . the dichotomy is between those
who believe in true AI and those who do not. Yes, the real thing,
not Siri in your iPhone, Roomba in your living room, or Nest in
your kitchen. . . Think instead of the false Maria in _Metropolis_ (1927);
Hal 9000 in _2001: A Space Odyssey_ (1968), on which Good was one of the
consultants; C3PO in _Star Wars_ (1977); Rachael in _Blade Runner_ (1982);
Data in _Star Trek: The Next Generation_ (1987); Agent Smith in _The Matrix_ (1999)
or the disembodied Samantha in _Her_ (2013). [Wot, no Ava in _Ex Machina_ (2015)?]. . .
Believers in true AI and in Good’s ‘intelligence explosion’ belong to the
Church of Singularitarians. . . For lack of a better term, I shall refer
to the disbelievers as members of the Church of AItheists. Let’s have a
look at both faiths and see why both are mistaken. . .

jimf said...

> Believers in true AI and in Good’s ‘intelligence explosion’ belong to the
> Church of Singularitarians. . . For lack of a better term, I shall refer
> to the disbelievers as members of the Church of AItheists. Let’s have a
> look at both faiths and see why both are mistaken. . .

Floridi continues:

Op. cit.
Deeply irritated by those who worship the wrong digital gods,
and by their unfulfilled Singularitarian prophecies, disbelievers –
AItheists – make it their mission to prove once and for all that
any kind of faith in true AI is totally wrong. AI is just
computers, computers are just Turing Machines, Turing Machines
are merely syntactic engines, and syntactic engines cannot think,
cannot know, cannot be conscious. End of story. . .

But then he **also** says:

Op. cit.
Plenty of machines can do amazing things, including playing checkers,
chess and Go and the quiz show Jeopardy better than us. And yet
they are all versions of a Turing Machine, an abstract model that
sets the limits of what can be done by a computer through its mathematical

Quantum computers are constrained by the same limits, the limits
of what can be computed (so-called computable functions). No conscious,
intelligent entity is going to emerge from a Turing Machine. . .

The above sounds a lot like AItheism to me, or at least GOFAItheism.
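The "limits of what can be computed" that Floridi invokes are the classic halting-problem limits. A minimal Python sketch of the diagonalization argument (the `halts` decider passed in is hypothetical -- the whole point is that no total, always-correct one can exist):

```python
# Sketch of the halting-problem diagonalization behind "the limits of
# what can be computed". A hypothetical decider `halts` takes a
# zero-argument function and is supposed to return True iff that
# function would eventually terminate.

def make_contrary(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def contrary():
        if halts(contrary):
            while True:       # predicted to halt -> loop forever
                pass
        return None           # predicted to loop -> halt immediately
    return contrary

# Any candidate decider is refuted by its own contrary program.
# A toy decider that always answers "loops forever":
always_no = lambda f: False
c = make_contrary(always_no)
result = c()                  # halts at once, contradicting the prediction
```

Symmetrically, a decider that always answers True is wrong too, since its contrary program would loop forever. No `halts` can be correct on every input, and (as Floridi notes) quantum hardware computes the same class of functions, so it is bound by the same argument.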

On the other hand, I myself am willing to concede that, if you
had an immensely powerful computer (for some value of "immensely" --
certainly orders and orders of magnitude beyond anything
available today or even **foreseen** today, for that matter), you
might be able to couple a digital computer **simulating**
a non-Turing "machine" like the brain with some source of
stochasticity (maybe even real quantum-uncertainty-derived
noise) and get something that behaves "intelligently" the way
biological organisms, including humans, behave "intelligently".

We can both agree on this, though:

Op. cit.
True AI is not logically impossible, but it is utterly implausible.
We have no idea how we might begin to engineer it, not least because
we have very little understanding of how our own brains and
intelligence work. This means that we should not lose sleep over
the possible appearance of some ultraintelligence.

(Presumably the illustration is meant to suggest that an AI threatening
Manhattan is as implausible as a giant eggplant threatening
Manhattan. ;-> )

"Luciano Floridi

is professor of philosophy and ethics of information at the
University of Oxford, and a Distinguished Research Fellow
at the Uehiro Centre for Practical Ethics. . ."

I wonder if he's on speaking terms with Nick Bostrom. ;->
(The latter does not seem to be mentioned in the article.)

jimf said...

> Irving John Good, a British mathematician. . . made the following observation:
> > . . .the first ultra-intelligent machine is the last invention
> > that man need ever make, provided that the machine is docile. . .
> > It is curious that this point is made so seldom
> > outside of science fiction. It is sometimes worthwhile to take
> > science fiction seriously. . .
> Stephen Hawking in 2014:
> > The development of full artificial intelligence could spell
> > the end of the human race. . .
> Bill Gates was of the same view:

> > I agree with Elon Musk and some others on this,
> > and don’t understand why some people are not concerned. . .
> Believers in true AI and in Good’s ‘intelligence explosion’ belong to the
> Church of Singularitarians. . .
[David Gerard, "reddragdiva", wrote in response:]

i would question both your and phil [sandifer]’s view [in the latter's
book _Neoreaction: A Basilisk_] of what on earth yudkowsky thought he
was doing [in writing the "Sequences" -- the LessWrong/MIRI "guide
to rational thought"].

phil’s perception is imo not unreasonable: the sequences is presented in
the manner of a from-first-principles philosophical edifice. . .

your view is that yudkowsky was trying to do analytic philosophy. . .

yudkowsky noted (“it got to the point that after years of bogging
down I threw up my hands and explicitly recursed on the job of
creating rationalists”) that the sequences were a **practical**
project. this was part of his desperate and **urgent** work to
get friendly ai right. “this is crunch time for the entire human
species.” people didn’t listen to his claims concerning friendly ai,
so he wrote the sequences to get people to **listen**, to bridge
the inferential gap, to make others into better thinkers
(the first sequences post was “The Martial Art of Rationality”)
so they would understand and thus believe him. the sequences were
written with this goal in mind. . .

yudkowsky was desperate to **convince** people. this was the purpose.
the aim was not an edifice nor analytic philosophy nerding:
it was a **polemic**. a **manifesto**. . .

that the sequences resembled a from-first principles edifice,
or analytic philosophy nerding, or popular science writeups of
kahneman, is i think incidental. . .

the monster at the end of the book. . . _neoreaction a basilisk_. . .
wasn’t. . . the monster at the end of a **philosophy**, but
the monster at the end of an **ideology**.

(or, looking at the basilisk article and its promises of heaven
and forebodings of hell, a theology.)

In the form of CFAR, the "Center for Applied Rationality", this
"ideology/theology" has leaked right into the so-called Skeptical
movement, via folks like Jesse Galef and Julia Galef.

It reminds me of nothing so much as Scientology running front
organizations like Narconon, or maybe a closer match would be
"Applied Scholastics".

One of my "skeptical" friends told me, essentially, "I like Julia,
I consider her a friend, and I don't appreciate your saying
bad things about her." (I certainly wasn't claiming she's a bad
**person** -- I know next to nothing about her, and I've never met
her. I suspect she does, however, know all about the agenda that
CFAR is associated with. It wouldn't surprise me if she and/or her
brother have cryonics contracts, though I am certainly not privy
to such information.)

Ah well. There is no end to the foolishness of the world.
It will certainly outlast me (and hopefully provide **me**,
at any rate, with no worse than titillating entertainment.
Lead us not into Penn Station!).

jimf said...
Sep 18th, 2015

johnbrownsbodyy asked:

> your constant frustration with people's overestimation of AI
> is extremely amusing

It’s one of those things where it’s a major part of our everyday
lives – anyone who’s ever used Google Translate has used an
artificial intelligence program, the police use artificial intelligence
to ‘predict’ where crime is going to occur – but the specter of
“strong AI” prevents us from even noticing, or examining it critically.
This is basically the topic of the article I’m writing for Mask Mag.

Most of the people in the strong AI camp aren’t academics, and aren’t
involved in actual AI research. They’re “philosophers” like Yudkowsky
or actually accomplished engineers like Kurzweil who are just
(to paraphrase Jaron Lanier) really scared of death. Otherwise, they’re
like Marvin Minsky, whose research in cognitive science was pioneering
at first, but whose insistence against using neuroscience to understand
consciousness (being more radical in this regard than the humanist
Raymond Tallis) has probably held back cognitive science by a few
decades. They see AI the same way alchemists saw their practice, as a
way to potentially cheat death.

The philosophical assumption behind the research program, that the
human mind can be reduced to an algorithm, has no basis in reality.
The argument that’s usually given in response to pointing out that we’ve
never found an algorithm for general AI and probably never will is
usually the creationist response to an atheist pointing out that we’ve
never found any evidence for God. “Well, that doesn’t mean we won’t!”
Strong AI is a degenerating research program and the delusional ravings
of Yudkowsky, the failed predictions of Kurzweil and the reactionary
attitudes toward developments in cognitive science by Marvin Minsky are
all indicative of this. AI has left strong AI behind and is now focused
on doing what AI has always been good at: one thing at a time.
Image recognition programs don't need to be conscious, machine translators
don’t need image recognition, etc. The general tendency in AI today is
to augment human beings' (not in the Deus Ex sense so much as in the
World Wide Web sense) ability to acquire and disseminate knowledge. This
itself deserves critical analysis, because of its actual applications by
disciplinary institutions like schools and the police, but this is a
different thing entirely than some conscious malevolent AI causing
nuclear warfare because that’s what Yudkowsky would do if he were
totally rational or some LessWrong bullshit.

jimf said...
May 8th, 2016

Critics of LessWrong or the so-called Rationalist movement probably
have various people in mind like Eliezer Yudkowsky, Robin Hanson,
or Peter Thiel and the Silicon Valley venture capitalist community.
But surveys suggest that the median member of the community is
more likely to be a 20-something autistic trans girl suffering
from depression and pursuing STEM studies. Any critiques that
don’t take this into account may end up being misinterpreted.

Oh. So, dare I ask, what might be the connection between that, and this:
Cult Behaviour: An Analysis
Sargon of Akkad
Aug 17, 2016

An analysis of Dr. Arthur Deikman's book on cult behaviour,
_The Wrong Way Home_.
Cult Case Studies
Sargon of Akkad
Aug 18, 2016

Remind you of anyone?

jimf said...

> Should we be afraid of AI?
> Presumably the illustration is meant to suggest that an AI threatening
> Manhattan is as implausible as a giant eggplant threatening
> Manhattan. ;-> )

Or, uh, Chicago.
The Eggplant That Ate Chicago
Norman Greenbaum (1967)

. . . if he's still hungry, the whole country's doomed.

(I'm not sure I've heard this **since** 1967.)

Dale Carrico said...

This vegetarian finds eggplant quite disgusting.