Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, December 17, 2012

Daily Dumb Dvorsky: Futurological "Existential Risk" Discourse As Existential Risk Edition

It is notable that in his list of "the top nine ways humanity could eventually bring about its own destruction" Robot Cultist George Dvorsky corrals nuclear war and anthropogenic climate change together with complete nonsense like robot uprisings and runaway replicating goo. Something Dvorsky is calling "global ecophagy," for example -- a Very Serious-seeming term for a completely non-serious notion -- appears at the number two spot on Dvorsky's list, right after the terrible actual reality of the threat of nuclear terrorism and war. "Global ecophagy" is a typically pseudo-scientific futurological coinage naming the family of "gray-goo" scenarios (a notion owed to Eric Drexler's Engines of Creation, though futurologists seem to know it best from science fiction treatments like the nanite episode of "Star Trek: The Next Generation" -- the usual futurological confusion of science fiction with science policy) in which runaway self-replicating nanobots eat the Earth. Dvorsky fancies this a threat right up there with stockpiles of nuclear weapons despite the fact that the enabling technology of reliably programmable, controllable, robust, self-replicating, room-temperature swarms of nanoscale devices of the Drexlerian type happens to be the furthest thing from actually practically realizable, or possibly even theoretically plausible -- which is not to deny that biochemistry and materials science are real and important, but is to deny that futurological hyperbole tells us anything useful about real and important technoscience.

But there's no time for quibbling: Dvorsky has already moved right along, next declaring that "[i]t's all but guaranteed that we'll develop artificial general intelligence some day." Dvorsky provides absolutely no support for this guarantee, naturally. The facts that all actually-existing intelligence has been organismically incarnated, and that futurological fanboys like himself have been declaring the accomplishment of artificial intelligence on the horizon year after year since before WWII with exactly Dvorsky's certainty and have always been completely wrong, seem not to dissuade him from his AI Fundamentalist Faith on this question. As a good materialist I concede the possibility that something enough like organismically materialized intelligence to be called "intelligence" might be materialized otherwise, but that doesn't mean I think this is guaranteed to happen, nor that I fail to regard claims otherwise as extraordinary claims demanding extraordinary evidence, none of which is ever forthcoming from AI dead-enders. But Dvorsky's faithful utterance on this question sets the stage for his treatment of three more futurological fancies as realities demanding the sort of attention real problems actually do demand.

Two of these non-threat threats are the closely kindred scenarios of "Robocalypse" (once again, futurologists have a real problem mistaking things like the scripts of the Terminator and Matrix movies for policy papers) and of the arrival of a history-shattering Singularity with a Robot God protagonist. A third scenario arising from Dvorsky's faith in AI is even worse, if you can believe it: the idea of a universal uploading of human consciousnesses into cyberspace. Precisely because I am a materialist I grasp that the material organismic incarnation of consciousness is indispensable and not incidental to that consciousness, and hence I have little patience for glib metaphorizations of a "migration" or "translation" or an "uploading" of consciousness from one substrate to another without significant violation. Further, "scanning" a brain is no more an eternalization of an "info-self" (who ever heard of an actually existing scan or gizmo that was eternal anyway, by the way?) than a picture of you is you, of all things, let alone an immortalization of you. The very notion of uploading is incoherent, an anti-science farrago relying on metaphorical fudging and terminological con-artistry pretending to a scientific perspective. What we have here is the usual effort to take an initially implausible (AI) or even incoherent (uploading) notion and amplify its stakes into the appearance of Seriousness, when the Unserious never scales into the Serious but merely into the Ridiculous.

I should add, by the way, that just as the importance of biochemistry and materials science doesn't render futurological nanotech discourse important -- even as that discourse deranges the ways in which we actually understand the stakes and importance of the very biochemistry and materials science futurology hyperbolizes into nanotech discourse in the first place -- so too there are real questions about network security, user-friendliness, and automation in the workplace and on the battlefield that are enormously important, and which, when they are hyperbolized into futurological AI discourses, always only derange our grasp of and deliberation on the problems and stakes of the real technoscience and real technodevelopment to which they obliquely refer and which they would hijack.

I have to note that just as the real threat of stockpiles of nuclear weapons is followed by the futurological nonsense of "gray goo" in Dvorsky's accounting, so too these three ridiculous scenarios premised on the digital-utopianism of implausible AI and incoherent uploading notions precede Dvorsky's arrival at another real threat, that of catastrophic anthropogenic climate change. Although Dvorsky points out the reality of this threat and castigates those who deny it, I would point out that Dvorsky's own futurologically-typical advocacy of geo-engineering schemes elsewhere in his writing -- that is to say, schemes for the parochially profitable technical circumvention of climate change by the very corporate-military actors most responsible for that catastrophe -- actually functions as a second-order denialism: a denial of the actually accountable political processes of education, regulation, incentivization, and public investment needed to address our shared planetary crisis of civilization.

The other three threats Dvorsky discusses -- world-ending particle-accelerator accidents, engineered pandemics, and planetary "conventional" warfare -- all deserve consideration in some form. But I think questions of budgetary priorities should loom larger in public deliberation over accelerators in a world with urgent shared social and climate problems; I think questions of global monitoring and of public provision for the rapid mobilization of containment and healthcare support should loom larger in public deliberation over pandemics in a networked world; and I think questions of global governance more accountable to its constituencies than elite-incumbent appointments to the UN General Assembly, World Bank, WTO and so on should loom larger in public deliberation over global conflicts. That is to say, once again it seems to me that Dvorsky's futurological formulations of threat amount to fantastic hyperbolizations of real-world threats that always only derange efforts at real-world threat amelioration. While there are things to be said about these threats, futurologists are the last people in the world you want to bring to the table where these threats, in some measure or form, are going to be seriously discussed.

To say that nuclear weapons stockpiles or human-caused global warming are threats in the way nanogoo and robot uprisings are threats (they are all, and equally, you will recall, "top threats") is really just a way of saying that nuclear weapons stockpiles and human-caused global warming are not real threats. The threat Dvorsky and futurologists like him always fail to mention is the threat posed by talking about threats in terms framed by futurologists. Oh, George!

8 comments:

jimf said...

> Dvorsky has already moved right along, next declaring that
> "[i]t's all but guaranteed that we'll develop artificial general
> intelligence some day."

Ya gotta wonder whence he gets his faith. I almost envy him. [*]
I, too, took charts like this one seriously, once upon a time:
http://www.frc.ri.cmu.edu/~hpm/book97/ch3/AI.power.300.jpg

Unfortunately, all the hot air has since leaked out of my
AI-aint-just-sci-fi balloon.

[*] Also James Hughes, whom I saw a video of at the podium
at a Singularity Summit (or someplace) declaring "I've felt
in my bones since about the age of 10 that AI is just around the
corner."

Well, yeah, I read Clarke's _Profiles of the Future_ around
the same age.

Sigh. I've lost my belief in SantAI Claus. Poor poor pitiful me.

-----------------------
There are about 10 billion switches -- or neurons -- inside your
skull, "wired" together in circuits of unimaginable complexity.
Ten billion is such a large number that, until recently, it could
be used as an argument against the achievement of mechanical
intelligence. In the 1950s a famous neurophysiologist made a
statement (still produced like some protective incantation by the
advocates of cerebral supremacy) to the effect that an electronic
model of the human brain would have to be as large as the
Empire State Building and would need Niagara Falls to keep it
cool when it was running.

This must now be classed with such interesting pronouncements
as "No heavier than air machine will ever be able to fly." For
the calculation was made in the days of the vacuum tube, the precursor
of the transistor, and the transistor has now completely
altered the picture. Indeed -- such is the rate of technological
progress today -- the transistor itself has been replaced by
smaller and faster devices, based upon principles of quantum
physics. If the problem was merely one of space, electronic
techniques today would allow us to pack a computer as complex
as the human brain on only one small portion of the first floor
of the Empire State Building.

The human brain surpasses the average stereo set by a thousandfold,
packing its 10 billion neurons into a tenth of a cubic foot.
And although smallness is not necessarily a virtue, even this may
be nowhere near the limit of possible compactness.

For the cells composing our brains are slow-acting, bulky, and
wasteful of energy -- compared with the scarcely more than atom-
sized computer elements that are theoretically possible.
The mathematician John von Neumann once calculated that electronic
cells could be 10 billion times more efficient than protoplasmic
ones; already they are a million times swifter in operation,
and speed can often be traded for size. If we take these ideas
to their ultimate conclusion, it appears that a computer equivalent
in power to one human brain need not be much bigger than a
matchbox, and probably much, much smaller.

-- Arthur C. Clarke, _Profiles of the Future_,
"The Obsolescence of Man"
-----------------------

Nope. Can't get it up anymore. I need a new fluffer!

Dale Carrico said...

James Hughes... at a Singularity Summit (or someplace) declaring "I've felt in my bones since about the age of 10 that AI is just around the corner."

Well, from that quotation it seems his sensible skepticism over the wisdom of repugnance -- or what he calls the "yuck factor" -- apparently does not correlate with a sensible skepticism over the wisdom of wish-fulfillment fantasizing -- or what one might call the "aw shucks factor" -- whether originating in the gut or felt in the bones. Needless to say, undercritical technophilia is exactly as useless to useful technodevelopmental deliberation as undercritical technophobia. Futurology is a twin-engine derangement craft, driven by technophilic and technophobic hyperbole.

Barkeron said...

I remember all the frantic cries of talking heads after the fall of the Soviet Union about how every terrorist group that could afford a pickup now had the opportunity to get their hands on nukes. Well, only ten years later the arguably best-organized and best-funded terrorist organization relied on conventional passenger planes.

I think we can write that off as hard-right FUD meant to get the US to "intervene" in Russia during the post-Soviet period.

The ludicrous idea of apocalypse by particle accelerator only proves that the Very Serious WASP Men Of The World To Come know exclusively about consumer electronics and movie scripts, but haven't got the faintest about the underlying physics, because they're mostly failed Microsoft helpdesk staff anyway.

I have no clue who would be insane enough to create an omnicidal plague (maybe EBIL LUDDITES?!?!) or how conventional warfare could wreck the entire planet, but I'm sure one of the SF books Dvorsky got these points from explains that.

jimf said...

> ...his sensible skepticism over the wisdom of repugnance --
> or what he calls the "yuck factor" -- apparently does not
> correlate to a sensible skepticism over the wisdom of
> wish-fulfillment fantasizing -- or what one might call
> the "aw shucks factor"

http://www.artificialbrains.com/blue-brain-project#objections
---------------------------
"Since the 1960s computers have shown a steady increase in
their memory capacity and processing power. There's no reason
to think Moore's law will stop in the foreseeable future. Even
if die shrinkage reaches physical limits at around 10 nm,
there are other technologies that can continue the trend further,
e.g. GPUs and multi-core processors, asynchronous computing,
neuromorphic hardware, adiabatic quantum computation, 3D stacking,
memristors, and graphene. It's inevitable that one day supercomputers
will become powerful enough to simulate the human brain. We
don't yet know exactly how much computational power is required
or when we'll have it, but Henry Markram claims an exascale
supercomputer will suffice and that such computers will become
available by the year 2023."
---------------------------


Aw shucks, Aunt Pittypat, can't I please have an adiabatic neuromorphic
3D quantum computer, with graphene memristors, this Christmas?
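
For what it's worth, the back-of-envelope arithmetic behind "an exascale supercomputer will suffice" claims runs something like the sketch below. Every constant in it is a ballpark assumption (commonly cited neuron counts, guessed firing rates and per-synapse costs), not a settled number:

    # Crude brain-vs-exaflop comparison. Every constant is an assumed
    # ballpark figure, not a settled number.
    neurons = 8.6e10              # commonly cited human neuron count
    synapses_per_neuron = 1e4     # rough average connectivity
    firing_rate_hz = 10.0         # assumed mean spiking rate
    flops_per_event = 10.0        # assumed cost per synaptic update

    brain_ops = neurons * synapses_per_neuron * firing_rate_hz * flops_per_event
    exaflop = 1e18                # one exaflop machine

    print("brain estimate: %.1e ops/s" % brain_ops)           # ~8.6e+16
    print("exaflop headroom: %.0fx" % (exaflop / brain_ops))  # ~12x

    # About 12x headroom -- if you grant every assumption above,
    # which is exactly what's in dispute.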

joe said...

I thought you'd get a hoot out of this, guys...

Anissimov tweeted it a while ago

"Michael Anissimov ‏@MikeAnissimov

I really believe that the pursuit of immortality via biological means is a red herring. Only cyborgs live forever."


I can see what he means, what with the vast number of immortal cyborgs wandering around the place right now... If you don't watch your step you can plant your boot right on one, like a cockroach.

Seriously though, I love the way he is so definitive with the cyborg bit, like it's happening right now.

Athena Andreadis said...

Not surprisingly, he's wrong again (still). First, depending on the definition of cyborg, we may already be such hybrids. Second, even cyborgs by the TH definition won't live forever if/when they appear.

Dale Carrico said...

Indeed, a major point of departure for Wiener's cybernetics (or at any rate something figuring in his early and continuing efforts to elaborate it) was the **ancient** sophistical puzzle asking whether a blind man's cane is part of who the man is. Clearly, the facile futurists just mean by "technology" those techniques/artifacts they happen to have invested with their hyperbolic fears/fantasies. As usual, the whole discourse unfolds at a completely uncritical and frankly mostly symptomatic level that, whatever else it is, manages mostly to be just dumb.

jollyspaniard said...

We also wear clothes; does that make us cyborgs? They are "technology," and they're pretty important if you live up North. They won't make you immortal, but if you live in the Arctic Circle they'll increase your expected lifespan thousands of times over. Somehow I don't recall any Inuit saying "The Singularity is Near," though.

You can get cybernetic bits implanted in you nowadays. However, if you need them, it's usually a bad thing.