Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, July 21, 2016

Every Futurism Is A Retro-Futurism


jimf said...

> Every Futurism Is A Retro-Futurism

Just ask Noam Chomsky:
Noam Chomsky - The Threat of Supercomputers
Chomsky's Philosophy
Jul 3, 2016

[excerpted from
In Conversation with Noam Chomsky - A British Academy event
The British Academy
Nov 28, 2014 ]

Questioner: . . .[Y]ou've written quite extensively [that] the two
dominant threats to human survival [are] climate change and nuclear
war. I was just wondering to what extent you've considered a
third possibility. . . [the threat of Silicon Valley firms working
secretively on projects] which are purported to create a kind of superintelligence
that will render humanity obsolete. I was just wondering what your
thoughts are on that, and whether that's in fact
a bad thing at all.

Chomsky: Well, so what about the development of supercomputers that'll
be more intelligent than humans, and Singularity, you know, they'll
take over everything, and we'll be superfluous, and so on?

I've been listening to this. . . I've been at MIT for 60 years,
since 1955, and I've been **listening** to this for 60 years.
[laughs from the audience]
The line has always been "In six months we will have computers
which will do X, Y, and Z." Uh, we don't have 'em. There's
a famous paper by Alan Turing called "How To Make Machines Think"
or some title like that -- it's a short paper, eight pages, around 1950.
It's the basis for all of this work on what's called the
Turing Test, trying to develop a machine which will think. You know, a
machine that'll defeat the grandmaster in chess, that'll win
a prize in a television program, and so on. All of this work --
and you can win a hundred thousand dollars if you develop what
they call a "machine" -- a machine means a program, it's not the
machine -- if you develop a program that will pass the so-called
Turing Test, you know -- to fool a human and fool a jury of humans
into thinking it's a person and not a machine. All of this work
overlooks the brief sentence in Turing's paper, namely --
"The question whether machines 'think' is too meaningless to
deserve discussion." OK?
[laughs from the audience]
He didn't bother explaining it, but it's pretty obvious.
I mean, you can develop -- again, it's kind of sexy to talk about
a "machine", but remember -- a machine, in itself, it's kind of like
a paperweight, doesn't do anything. It's the program that's
doing something. And the program is just some kind of theory,
complicated theory. So you can develop theories that will do
specific tasks. Like, it was obvious in 1950 that if you put
enough time and energy into it, you could develop a program that
would win a chess game against a grandmaster. How? By getting
a hundred grandmasters to sit around for years and years
figuring out what to do in all possible circumstances and so on
and so forth, and program it, and it'll do better than a grandmaster
who has a half an hour to think about the next move.
OK. It's completely uninteresting. Intellectually, of zero
interest. It's good for IBM -- they sell a lot of computers that way,
[laughs from the audience]
but it has no intellectual interest. The same is true of winning
in a quiz show. You know, you toss a lot of data into the machine
and it'll do better than a person. But getting a machine to do --
a program, again -- to do anything that's at all like the creative
activities that every four-year-old child can carry out -- that's
quite different. And I don't think there's -- we have any grasp
even on how to go ahead to do that. And so I think one can have
a fair degree of skepticism about the PR on superintelligent machines
and the Singularity and so on.

jimf said...

> Just ask Noam Chomsky:

Just one more thing for Chomsky and Sam Harris to disagree
about. ;->
Sam Harris - On Artificial Intelligence
Jul 8, 2015
Sam Harris discussing artificial intelligence from
Joe Rogan Experience Podcast #641. Podcast edited to
include portions on AI.
Sam Harris - On Artificial Intelligence II
May 3, 2016
Waking Up Podcast
Featuring David Chalmers

Sam Harris says he's only become aware of the "problem" in
the last year or so, because he's "not a sci-fi geek".
But he's been "drinking the Kool-Aid" (as he calls it himself)
since public pronouncements by Elon Musk on the dangers
of superintelligence ("and Elon Musk is a friend of mine,
and he wouldn't be saying these things if there wasn't something
in it").

And Nick Bostrom.

jimf said...
Trying to reduce the odds of a catastrophe by .0001%
May 18, 2015

The academic study of existential risk is being taken seriously.
The University of Cambridge has the CSER. . . Oxford has the
Future of Humanity Institute, headed by Nick Bostrom. . ., which
has produced this taxonomy of threats. . . In the US, work is
done in thinktanks like the Global Catastrophic Risk Institute
and the Machine Intelligence Research Institute, which is focused
on trying to tame AI, and predict when it will arrive.
Though climate change gets a nod, the main concerns appear to be
largely AI (which they are really worried about), nuclear war
(chance of happening: between 7% and .0001% a year), threats from
technological innovation like biotech or nanotech.

From the comment thread:

Existential Dread
May 18, 2015

I've heard that anecdote about the early atomic bomb developers
wondering if they could trigger fusion of atmospheric nitrogen
(essentially turning the entire atmosphere into an uncontrolled
fusion reaction), although I've heard it attributed to Teller,
not Oppenheimer.

Amusing perspective from Wikipedia:

> Teller also raised the speculative possibility that an atomic
> bomb might "ignite" the atmosphere because of a hypothetical fusion
> reaction of nitrogen nuclei. . . Oppenheimer mentioned it to
> Arthur Compton, who "didn't have enough sense to shut up about it.
> It somehow got into a document that went to Washington" and was
> "never laid to rest".

That kind of speculation must have a long history, because that's
basically how the First Human Species meets its final end in
Olaf Stapledon's _Last and First Men_ (1930). After the fall of the
American World State 5000 years in the future (as a result of
running out of fossil fuels), after a period of some tens of thousands
of years of savagery, the Patagonians of 100,000 years in the future
rediscover "sub-atomic power" (which had been discovered near
our own time but had been suppressed as too dangerous by a cabal
of scientists, who persuaded the Chinese guy who demonstrated it
to them to destroy his work and commit suicide). So the Patagonian
world state has "sub-atomic power" generators, but social dissidents
get hold of one of them and manage to set off a chain reaction in the
crust of the earth involving all deposits of the element involved
in the "sub-atomic disintegration" process, causing widespread
vulcanism and extinction of the human race (except for a handful
of survivors aboard an exploratory ship in the Arctic, whereby
hangs the rest of the two-billion-year story ;-> ).

From the same comment thread:

May 18, 2015

> I have a hard time getting too riled up about the AI concern.

Being frightened of AI is roughly like worrying that if your
grocery list gets long enough and complicated enough, eventually
it will go do the shopping itself.

People who do not have careers in IT: "omg AI is coming and
it's going to destroy us all omg omg".

People with careers in IT: "Why doesn't the printer work? Why
won't Linux talk to the projector this morning?"


May 18, 2015

> People who do not have careers in IT: . . .

I feel like people who make a career out of these concerns are
often taking advantage of people who watch too many movies. Or
they watch too many movies themselves with just a smattering
of philosophy-of-mind thrown in to make dangerous speculations
sound interesting.

As the Dowager Countess would say, "She reads too many novels. . ."


jimf said...

> Noam Chomsky:
> All of this work overlooks the brief sentence
> in Turing's paper, namely -- "The question whether
> machines 'think' is too meaningless to
> deserve discussion." OK? . . .
> It's good for IBM -- they sell a lot of computers
> that way. . .

Ya gotta love this ad:

Plugboards and vacuum tubes.

Does Univac 120 really **think**?

I dunno -- whaddya **you** think? ;->

SPOCK: If only I could tie this tricorder in with the
ship's computers for just a few moments.

KIRK: Couldn't you build some form of computer aid here?

SPOCK: In this zinc-plated vacuum-tubed culture?

KIRK: Yes, well, it would pose an extremely complex problem
in logic, Mr. Spock. Excuse me. I sometimes expect too much of you.


SPOCK: Captain, I must have some platinum. A small block would
be sufficient, five or six pounds. By passing certain circuits
through there to be used as a duodynetic field core. . .

KIRK: Mr. Spock, I've brought you some assorted vegetables,
baloney in a hard roll for myself, and I've spent the other
nine tenths of our combined salaries for the last three days
on filling this order for you. Mr. Spock, this bag doesn't
contain platinum, silver or gold, nor is it likely to in the
near future.

SPOCK: Captain, you're asking me to work with equipment which
is hardly very far ahead of stone knives and bearskins. . .
[I]n three weeks at this rate, possibly a month, I might reach
the first mnemonic memory circuits. . .

EDITH: . . . What on Earth is that?

SPOCK: I am endeavoring, ma'am, to construct a
mnemonic memory circuit using stone knives and bearskins.