Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, November 28, 2012

Insecurity Theater: How Futurological Existential-Risk Discourse Deranges Serious Technodevelopmental Deliberation

Also published at the World Future Society.

BBC:
The Centre for the Study of Existential Risk (CSER) will study dangers posed by biotechnology, artificial life, nanotechnology and climate change. The scientists said that to dismiss concerns of a potential robot uprising would be "dangerous". Fears that machines may take over have been central to the plot of some of the most popular science fiction films.
Robocalypse! Really? Few things but reportage about Robot Cultists could open with the pretense of sobriety, climate change talk by futurologists so Very Serious that their think-tank insists on its abbreviation (why, they must be like UNESCO or CERN!), only to degenerate into observations about scary b-movie science fiction plots by the third sentence. I breathlessly await the BBC report that fears about dragons destroying the castle have been central to the plot of some of the most popular fantasy films, and that existential risk assessments by Very Serious Futurologists are forthcoming from their tech-celebrity-CEO vanity-funded think tanks at Stanford and Oxford any minute now. Not to spend time worrying about the odds of Dragon Conflagration would be irresponsible and dangerous!

Here we have a perfect illustration of the disasterbatory flip side of techno-transcendental hyperbole.

Of course, I have pointed out many times the way superlative futurists will devote a sentence to, say, observing some promising research effort in organ cryopreservation to facilitate transplantation operations only to provide the pretext for indulging instead in page after page of handwaving about "info-soul" preservation in hamburgerized brains ready for "uploading" in Holodeck Heaven. They will leap in a paragraph from real-world advances in biochemistry all the way into dreamy daydreams about reliably self-replicating programmable swarms of nanobots that can make next to anything for next to nothing any day now. They bound ecstatically from making reasonable noises one moment about qualified medical research results and healthcare advocacy all the way, the next moment, to cheerleading for genetically-enhanced comic-book super-bodies with "indefinite lifespans" and "techno-immortalization."

In each case, superlative futurologists pretend the comparatively modest, qualified, sensible substance of consensus science and real research authorizes techno-transcendent wish-fulfillment fantasizing. Rather than think through the diverse impacts of technoscientific change in terms of their actual costs, risks, benefits, demands, and significance to their real stakeholders in the real world, they amplify technodevelopmental realities in the present into Signs for the Robo-faithful to read, burning bushes announcing that immortality, superpowers, and wealth beyond the dreams of avarice are on the horizon in The Future.

When superlative futurists sit down to talk about what they call "Existential Risk" they offer up the other side of the counterfeit coin of expertise provided by their hyperbolic promotional/self-promotional pseudo-discipline.

There is no question that it is reasonable, even urgent, that we study the toxicity of synthetic materials produced by recourse to biochemical techniques making changes discernible at the nanoscale. What if a process that makes a synthetic fabric stronger and lighter also makes it abrade neurotoxins onto surfaces with which it is in contact, for example? There is no question that it is reasonable, even urgent, that we monitor closely the pathogenesis and track the transmission pathways of dangerous viruses on a planet interconnected by rapid transportation and communication networks. What if a virus mutates into an incomparably lethal form in a population center that is no doubt also a global transportation hub, for example?

But what exactly are futurologists supposed to bring to the table to such discussions? While the radically underfunded, already beleaguered Food and Drug Administration and comparable agencies worldwide are busy examining synthetic materials for toxicity, are we supposed to pretend that there is something helpful about Robot Cultists grabbing headlines with a splashy PowerPoint sonorously intoning about the "existential threat" of "gray goo" -- the so-called grave danger of an incompetent or evil programmer sending swarms of self-replicating nanobots to eat the planet? While the World Health Organization and the Centers for Disease Control are tracking viral outbreaks and issuing global health warnings on a daily basis, are we supposed to pretend that there is something useful about Robot Cultists handwaving in a viral YouTube video about the danger of white-racist mad scientists bioengineering trait-specific pandemics in the name of racial purity?

Actually existing techniques making changes at the nanoscale are yielding useful materials and introducing real worries -- but they are not opening doors into either Edenic superabundance or apocalypse. Actually emerging medical techniques are changing lives and introducing new risks and costs into our understanding of healthcare provision -- but they are not creating super designer babies, clone armies, comic book superheroes, or millennial lifespans. Superlative futurological frames activating transcendental hopes and apocalyptic fears contribute nothing of any use to our deliberation about actually-existing and actually-emerging technoscientific changes and the diversity of their costs, risks, and benefits in the immediate and longer term to their stakeholders in the world.

While I am the last to discount the perils of anthropogenic catastrophic climate change and resource descent created by a generation of extractive-industrial-petrochemical profiteering, I cannot think of a single contribution futurologists can uniquely introduce into environmentalist theory, practice, education, agitation, organization, resistance, or reform that could be of any use to anybody who takes these issues the least bit seriously. At best, by treating climate change as a risk alongside absolutely ridiculous non-risks like out-of-control nanobots and Robot uprisings, these futurologists are trivializing a real crisis -- at worst, these futurologists will use real environmental crises as an opportunity to peddle quintessentially futurological non-solutions like unilateral "geo-engineering" interventions with unknowable consequences but great potential for profitability for the very same corporate-military interests that created and still exacerbate that crisis in the first place.

Any second an actually accountable health and safety administrator is distracted from actually existing problems by futurological hyperbole is a second stolen from the public good and the public health. Any public awareness of shared concerns, any public understanding of preparedness and mutual aid for actually existing risks, skewed and deranged by futurological fancies means lost chances of survival and help for real people in the real world. In a world where indispensable public services are forced to function on shoestring budgets after a generation of market fundamentalist downsizing, looting, and neglect, it seems to me that there are no extra seconds or extra dollars to waste on the fancies of Very Serious Futurologists in suits playing at being policy wonks.

I would concede the usefulness of specifically futurological scenario-spinning for pitch-meetings in LA for science fiction miniseries, but the fact is that these are already hoary sfnal clichés, and it is no doubt from science fiction that the futurologists have cribbed them. That is to say, these futurologists are of no real use to anyone, except to the extent that they manage to attract attention, funding, and reputations for seriousness they have not earned, which is useful only to themselves at the expense of everybody else. When the matrix of actual risks to which public service administrators feel bound and accountable is skewed by the fictions of Robot Cultists -- in part because the sensational stories they tell attract the attention of inexpert media figures craving dramatic narratives, and because these stories in turn activate the usual irrational passions of loose technological talk (e.g., dreams of omnipotence, nightmares of impotence) in the public at large to which government is convulsively responsive -- the resulting mismanagement of limited time and resources, the misplacement of priorities, and the misunderstanding of the stakes at hand create new problems, impose new costs, and proliferate new risks.

Not to put too fine a point on it, these lost seconds of attention and effort, these confused priorities and concerns, can contribute, probably already have contributed, and most certainly will contribute very directly to lost lives.

Generations of futurological fantasists who fail even remotely to grasp the nature of the organismically-incarnated, historically-situated phenomenon of intelligence have been promising and failing to deliver artificial intelligence every year on the year for years and years and years and years. Now that some of them are re-framing that claim as a concern with the "existential threat" of an intelligent robot uprising, we should take care to understand that this is an old, tired song they are singing. The risk of an automated bulldozer losing control and trampling a laborer on the warehouse floor is real, reasonably well understood, and provided for by actual experts. The risk of a robot uprising is zero, and even if the person telling you otherwise is wearing a suit, he is no expert but a futurological flim-flam artist.

Let me be the one to say plainly that the single greatest "existential risk" that futurological existential-risk experts will never admit is the existential risk posed by existential-risk analysis to the public address of real problems in the real world.
