Nick Bostrom is described in the piece as a scholar concerned with "existential risks" and as the head of Oxford's Future of Humanity Institute. The reference to Oxford confers an immediate gloss of legitimacy on Bostrom's brand of futurism (entirely as it was meant to do), and it is important to note that Bostrom is not also described in the piece as a transhumanist, the founder of the World Transhumanist Association, and the writer of the FAQ that continues to define that techno-transcendental "movement" for many of its members. That the World Transhumanist Association subsequently repackaged itself as a slick pop-tech site called Humanity+ (non-member humans by comparison being of course merely humanity-minus), and that Bostrom went on to co-found the more reputably monikered stealth-transhumanist Institute for Ethics and Emerging Technologies, provides what seems to me indispensable background restraining any out-of-hand acceptance of the scholarly legitimacy of Oxford's Future of Humanity Institute as well, especially once one discovers how many familiar faces from Bostrom's more robocultic earlier outings throng the ranks of his august corporate-sponsored Oxford-inflected effort.
A couple of years ago I wrote a piece entitled Insecurity Theater: How Futurological Existential-Risk Discourse Deranges Serious Technodevelopmental Deliberation, in which I proposed that the sort of existential-risk analysis Bostrom is promoting in the Salon piece represents "the other side of the counterfeit coin of expertise provided by [the] hyperbolic promotional/self-promotional pseudo-discipline" of futurology. I usually critique superlative futurism/futurology of the kind associated with transhumanism, techno-immortalism, singularitarianism, digital-utopianism, nano-cornucopism, and so on as essentially faith-based initiatives, hyperbolizing consensus science and legible policy concerns into promises of techno-transcendence modeled on the omni-predicates of theological godhood: omniscience, omnipotence, omnibenevolence.
Existential risk discourse is the apocalyptic obverse to these futurological raptures, at once diverting attention from threatening criticisms of the belief system itself while also providing still more diversions from real science, actual harm-reduction policy-making, substantial problem-solving, and serious deliberation over actually-existing dangers, risks, costs, inequities, and crimes. In that earlier piece I warned: "Any second an actually accountable health and safety administrator is distracted from actually existing problems by futurological hyperbole is a second stolen from the public good and the public health. Any public awareness of shared concerns or public understanding of or preparedness for actually existing risks and mutual aid skewed and deranged by futurological fancies is a lost chance of survival and help for real people in the real world. In a world where indispensable public services are forced to function on shoestring budgets after a generation of market fundamentalist downsizing and looting and neglect, it seems to me that there are no extra seconds or extra dollars to waste on the fancies of Very Serious Futurologists in suits playing at being policy wonks." To put the point more concisely, existential-risk discourse seems to me an existential risk.
Am I over-reacting? Am I indulging yet again in "hate-speech" against the innocent futurist subcultures? Am I engaging -- as the robocultically-inclined charge again and again -- in name-calling without any substance to back my pessimistic and relativistic and nihilistic and "deathist" assertions? Am I being unfair to serious futurologists with important concerns about the public good? Judge for yourself. From the Salon piece itself:
“Global warming is very unlikely to produce an existential catastrophe,” Nick Bostrom, head of the Future of Humanity Institute at Oxford, told me when I met him in Boston last month. “The Stern report says it could hack away 20 percent of global GDP. That’s, like, how rich the world was 10 or 15 years ago. It would be lost in the noise if you look in the long term.” ... But, Bostrom believes, even the misery caused by this kind of decline pales in comparison to what could be inflicted by high-tech nightmares: bioengineered pandemic, nanotechnology gone haywire, even super-intelligent AI run amok.

While the author of the piece may seem to want to insert some sanity at this point by declaring (the obvious) that "[t]hese [are] exotic and unlikely-sounding disasters," he immediately confounds that expectation by completing the sentence with the observation that these are "possibilities that are finally getting some attention." Whew! Finally! Won't somebody please think of the Robocalypse! Enough of this shilly-shallying about silly atmospheric carbon and human trafficking and arms proliferation 'n stuff!
The piece goes on to describe a burgeoning institutionalization of this sort of discourse with serious corporate-military dollars behind it and serious academic heft lending it prestige (it is not irrelevant to the latter that the neoliberal corporatization of the academy renders once-prestigious places of scholarship much more ready to eat their legacies for cash on the barrelhead):
In the last few years, a number of institutes have sprung up to begin to do serious research on the risks of emerging technology, some of them attached to the world’s most prestigious universities and stocked with famous experts. In addition to FHI, there is the Center for the Study of Existential Risk at Cambridge and the Future of Life Institute at MIT, along with the Lifeboat Foundation, the Foresight Institute and several others. After years of neglect, the first serious efforts to prevent techno-apocalypse may be underway.

It is unfortunate that the Salon piece provided no links to the organizations listed here as undertaking these serious efforts. For example, the Lifeboat Foundation, which is soliciting participants in, among other things, its "LifeShield Bunkers program[,] [which] is a compliment [sic] to our Space Habitats program. It is a fallback position in case programs such as our BioShield and NanoShield fail globally or locally. A bunker can be quite large, such as Biosphere 2. A large bunker would be a place where babies are born and children play and go to school... Let us know if you wish to participate in a local LifeShield Bunker. We will contact you if and when we find a cluster of interested people in your area. Read The Case for Survival Colonies: Soliciting Colonists." Very serious! Or the Foresight Institute, devoted to Eric Drexler's promises of super-abundance and near-immortalization through a "technology... based upon putting atoms where we want them rather than upon handling 'atoms in unruly herds[.]'" Again, very serious!
Those who read my blog with any regularity will find many familiar faces recurring in the advisory boards and recommended readings of these organizations (both the ones striving for mainstream respectability and the ones letting their futurological freak flags fly -- the talent pool for this brand of moonshine isn't exactly capacious). Just scroll down the names of the futurologists who have come in for analysis, and no small amount of ridicule, in this anti-futurological archive of mine, and you will find most of them there.
All that said, I will concede Salon's point, to a point. This business is indeed serious -- serious as a heart attack. The ramification of these institutional spaces for futurological flim-flammery is taking place across a terrain in which think-tanks have already displaced or deranged the role of the academy as a source of rigorous scholarly support and critique of public policy deliberation (and even those with absolutely valid critiques of the academy -- stratified as it is by sexism, white-racism, and plutocratic upward-fail -- can grant the force of this point). Indeed, rather than view the latest arrival of futurology to the pseudo-scholarly commandeering of and feasting on public deliberation as a particularly new phenomenon, I would insist instead on the role of a futurological/instrumental/computational logic especially congenial to the ends of corporate-military "competitiveness" -- a logic originating in no small part out of the host of speculative pseudo-disciplines connected to the inequitable distributions of costs, risks, and benefits of market futures -- in the original and abiding corporate-military think-tankification of the public sphere in the first place. To these developments should be added the recent entrance of faith-based techno-transcendental aspirations like coding Robot-Gods or "solving death" into the budgets of (what now passes for) elite technology companies, like Google and Apple. It is no surprise that the celebrity CEOs of venture capitalism whose skim and scam operations have thrust some of them into lotto-luxe multi-billionaire precincts they rationalize through assertions of sooper-genius would find themselves attracted to infantile fantasies and facile pseudo-scientific formulations of faith-based futurisms -- but at some point throwing real money and directing real media spotlights at this nonsense threatens to have an effect in the real world.
The "Sixth" of my Ten Reasons to Take Robot Cultists Seriously goes right to the heart of this concern:
As Margaret Mead famously insisted, "Never underestimate the power of a small group of committed people to change the world." The example of the neoliberals of the Mont Pelerin Society reminds us that a small band of ideologues committed to discredited notions that happen to benefit and compliment the rich can sweep the world to the brink of ruin, and the example of the neoconservatives reminds us that a small band of committed people can prevail even when they are peddling not only discredited but frankly ridiculous and ugly notions. Futurologists pretend that hyperbolic marketing projections are the same thing as serious technoscience policy deliberation, which is a gesture enormously familiar to the investor class and the technology sector's customary membership, and the futurologists inevitably cast rich entrepreneurs as the protagonists of history, which is a gesture enormously attractive to the skimmers and scammers and celebrity CEOs of the technology sector's essentially narcissistic culture. Although their various predictions are rarely more accurate than those of chimpanzees at typewriters, although their various transcendental glossy-mag editorials and tee-vee ready techno-rapture narratives are rarely more scientific in their actual substance than those of evangelical preachers, although their dog and pony show sounds almost exactly the same now as it did five years ago, ten years ago, fifteen years ago, twenty years ago, twenty-five years ago as they still drag out the same old tired litany (super-parental robot gods! genetic fountains of youth! cheap nanobotic superabundance! better than real immersive VR treasure caves! soul-uploading into shiny robot bodies!), and all with the same fervent True Belief, the same breathless insistence that this is all New! the same static repetition that change is accelerating up! up! up! it is not really surprising to discover that the various organizations associated with superlative futurology are attracting more and more money and support and attention from the rich narcissistic CEOs of the technology sector whose language they have been speaking and whose egos they have been stroking so assiduously for years and for whom they provide such convenient rationalizations for elite-incumbent rule. You better believe that, ridiculous and crazy though they may be, the Robot Cultists with well-funded organizations (like the Future of Humanity Institute at Oxford, the Global Business Network, the Long Now Foundation, the Institute for Ethics and Emerging Technologies, and the Singularity Summit, to name a few) to disseminate their pet wish-fulfillment fantasies and authoritarian rationalizations can do incredible damage in the real world.

It is easy to find yourself smugly shaking your head at the stark realities implied by the concession in the Salon piece that, "Some of these institutes have not even started to do research yet; they’re still raising funds." You don't say! Why, that's like declaring, "some evangelicals haven't researched their fire-and-brimstone claims; they're too busy passing the collection plate!"
But the funding these organizations are starting to attract from corporate sponsors and the derangement of the terms of public policy discourse introduced by the multiplication of techno-transcendental figures, frames, conceits, narratives are all too real whatever their palpable idiocy -- just look how eager people are to describe unintelligent artifacts as "smart" to the denigration of their own intelligence, just look how eager people are to describe as "disruptive" completely conventional right-wing deregulatory schemes, just look how regularly the public will describe the stasis of our unsustainable conformist consumer socioculture as a period of "accelerating growth," in each case a futurological reframing in the service of elite-incumbent interests and to the utter detriment of sense.
New to me from reading the piece, and I must say enormously interesting, was its observation that "[t]he field [of futurological existential risk discourse] has benefited from a well-informed patron, the Estonian entrepreneur and computer programmer Jaan Tallinn, co-founder of Skype... [who] told me that his concern with the future of humanity began in 2009, when a lawsuit between Skype and eBay left him temporarily sidelined. Finding himself with millions of dollars in the bank and no obligations, he spent his time reading through the web site Less Wrong, 'a community blog devoted to refining the art of human rationality.'" It is very interesting that any of this is supposed to qualify Mr. Tallinn as "well-informed" in some way. Does the article tell us that Tallinn spent this time earning a degree in some legible scientific field in a university, or doing research in a laboratory setting, or consulting with policymakers beholden to majorities, or even coding usable software or building prototypes of workable devices? No, no, no, no, no. Instead we hear of a techbro sitting on a pile of lucky-lucre with time on his hands, reading some guru wannabe's internet manifesto about the coming of the Robot God and thinking this puts the Keys of History in his hands. Very serious, well-informed. About Tallinn's singularitarian guru Eliezer Yudkowsky and his Less Wrong coterie you may find it enlightening to read A Robot God Apostle's Creed for the Less Wrong Set, or Deep Thoughts on Democracy from Eliezer Yudkowsky, or So Not A Cult. I did a little more digging and discovered soon enough that when Jaan Tallinn isn't giving his money to futurologists who would worry us about Robocalypse, he is devoted to the work of "a medical-consulting firm" called MetaMed, which he co-founded and which received an infusion of start-up cash from none other than market-libertopian and singularitarian Robot Cultist Peter Thiel.
I will leave it as an exercise for the reader to think through, by way of conclusion, the political implications of championing "personalized medicine" for the super-rich when universal single-payer basic healthcare hasn't arrived after more than a century of heartbreaking mass social struggle backed by a consensus of healthcare expertise, or to think what eugenic transhumanists may have in mind when they speak of providing client performance enhancement. No doubt the countless millions of people who die from treatable or neglected diseases because they cannot afford basic healthcare, or because they live in over-exploited regions of the world without access to basic healthcare, do not rise to the level of an "existential threat" that would attract the notice of our futurological faithful -- even though every single human being who dies for the lack of healthcare available to others is a human being who potentially could have contributed their measure of imagination and intelligence and effort to the solution of shared problems that really do imperil humanity, as well as to the archive of creative expressivity that makes life worth living for us all.