I am, of course, a longstanding and sometimes rather acerbic critic of the transhumanist movement (see here and here and here, for example), despite the fact that I count a few self-identified transhumanists -- like the democratic socialist feminist Buddhist green James Hughes -- among my friends and colleagues.
To the extent that transhumanism is simply a kind of literary salon culture of enthusiasts for science fiction and futurological blue-skying, I suppose many of these are relatively harmless and amiable folks with considerable geek-to-geek allure for the likes of me. But there are also, I fear, a significant number of transhumanist-identified persons who fall very squarely into the teeth of the critiques of Superlative Technology Discourses and Sub(cult)ural Futurisms I've been hammering at for the last few weeks here at Amor Mundi.
There is no need to rehearse the whole critique again (I've Immaculately Collected many of the relevant documents in a Superlative Summary, for those who are interested), but I will say that it is rather intriguing to observe how readily the facile Superlative frames boil to the surface in a popular piece like this.
Among the charges of my critique that receive the most pouting and stamping from Superlative Technocentrics are these:
 That there is a tendency to separatism and alienation in their marginal sub(cult)ural identification with particular projected technodevelopmental outcomes;
 That their exhibition of self-appointed technocratic elitism on questions of technodevelopmental decision-making tends to devalue democratic deliberation;
 That their regularly reiterated fantasy that "progress" is simply a matter of a socially indifferent and autonomous accumulation of technical capacities tends to yield linear, unilateral, elite-imposed models of technoscientific change;
 That their further belief that such accumulation can deliver (and even, in some versions, will inevitably deliver) quite on its own, emancipatory powers and abundances so profound as to permit us to circumvent the impasse of stakeholder-politics altogether, tends to feed and to feed on anti-political and anti-popular attitudes more generally.
I argue that, taken together, these tendencies render Superlative Technocentricities and Sub(cult)ural Futurisms absolutely anti-democratizing in their assumptions, their ends, and their overall thrust -- so much so as to subvert the democratizing ends of even those Superlative Technocentrics who consciously espouse more progressive ideals -- and also provide powerful rhetorical rationales congenial to neoliberal/neoconservative outlooks and the incumbent corporate-militarist interests.
Needless to say, the nicest and most well-meaning folks among the Transhumanist-types and Singularitarians and Technological Immortalists (poor things) take extreme umbrage at these charges when I make them. How ferociously they deny the very idea that their politics might be reactionary (what I sometimes describe as "retro-futurist").
Why, I voted for John Kerry! one incensed young Singularitarian once took pains to reassure me. Why, proposing such structural correlations between these broader attitudes toward technoscientific change and one's effective political orientation is nothing but sloppy armchair psychoanalyzing, another fulminated. This is nothing but name calling, nothing but ad hominem attack, nothing but character assassination, chimes an interminable chorus of Superlative Technocentrics to my every political and cultural intervention into their discourse. You'll be hearing from my lawyer! threatened another (true story).
Let's see how some of them speak for themselves, then, shall we? (Skip my parenthetic commentary if you just want the voice of the journalist and the various Transhumanists quoted in the piece. And do follow the link to the whole piece, since I'm only excerpting it here.)
[T]ranshumanists have been attacked for jeopardizing the future of humanity. What if they ended up creating a race of elite superhumans bent on enslaving the unmodified masses, or unwittingly programmed an army of self-replicating nanobots that would turn us all into grey goo?
(For me the former problem seems far likelier than the latter; indeed, something like the former looks to me like a straightforward programmatic entailment of the typical transhumanist advocacy of "enhancement" (treated as prosthetic perfectionism in a direction that everybody already agrees on in advance, though certainly we don't, nor should we agree -- in a way that squares with the parochial transhumanist takes on these matters -- so it doesn't matter much that we don't), rather than a more technoprogressive advocacy of universal (informed, nonduressed) consensual non-normative healthcare provision. But, more to the point, if you think these typically hyperbolic journalistic questions are just so much provocation and noise (I dismissed them as such at first myself), it is rather astonishing to observe the eagerness with which some transhumanists seem to want to encourage such worries. For more of what I mean, simply read along.)
In 2004, political scientist Francis Fukuyama singled out transhumanism as the world's "most dangerous idea."
(As we all know, of course, Francis Fukuyama has a certain experience with marginal sub(cult)ures bent on imposing their extreme and anti-democratic worldview upon an unwilling and unready world, having carried water for years for the Neoconservative Death-Eaters, a klatch of mostly white assholes utterly convinced they were the smartest people in the room as they engineered world-scale disaster after world-scale disaster in plain sight of an appalled world.)
Now this small-scale movement aims to go mainstream. WTA membership has risen from 2000 to almost 5000 in the past seven years, and transhumanist student groups have sprung up at university campuses from California to Nairobi.
(Think carefully about those numbers. At best, transhumanism, so-called, remains an utterly marginal sub(cult)ure, especially given the scale of its presence in the media landscape, not to mention the authority some of its rhetorical framings of technodevelopmental issues have come to assume in that landscape.)
It has attracted a series of wealthy backers, including Peter Thiel, co-founder of PayPal, who recently donated $4 million to the cause, and music producer Charlie Kam, who paid for the Chicago conference.
(Needless to say this development is the furthest thing from evidence of the development with which this paragraph began, "this small-scale movement aims to go mainstream." If it is true, as I think it is, that the technocratic elitism so prevalent in the transhumanist movement is especially congenial to incumbent interests with a stake in assuring the powerful that ordinary people are too ill-informed to be entrusted with a say in the developmental decisions that affect them, and if the techno-centric emphasis in transhumanist attitudes toward social problem-solving is especially congenial to incumbent interests with a stake in assuring a continued flow of money always in the direction of corporate-military research (welfare for the already-rich stealthed, of course, as "national defense" and "economic development"), then we can expect quite a bit of money to find its way eventually into transhumanist and quasi-transhumanist organizations. It remains to be seen how the more democratically-minded transhumanists will cope with this development. My expectations are shaped by the sense that money, attention, and success provide plenty of material for rationalization, and hence I think that the democratic transhumanists will, over the long term, prove to have provided respectability, credibility, and cover for the more reactionary elements in their movement, while corporatist support assures that these reactionary elements direct the movement. It may interest people to know that Peter Thiel serves on the Board of the Hoover Institution and is co-author of a book, The Diversity Myth: 'Multiculturalism' and the Politics of Intolerance at Stanford.)
Other well-known speakers are also on the roster, including… Ray Kurzweil, the group's unofficial prophet.
(Not all groups have "prophets," official or non-official. Just saying.)
They don't look very threatening, though perhaps not very diverse either. Most WTA members are white, middle-aged men…
AI theorist Eliezer Yudkowsky also believes the movement is driven by an ethical imperative. He sees creating a superhuman AI as humanity's best chance of solving its problems: "Saying AI will save the world or cure cancer sounds better than saying 'I don't know what's going to happen'." Yudkowsky thinks it is crucial to create a "friendly" super-intelligence before someone creates a malevolent one, purposefully or otherwise. "Sooner or later someone is going to create these technologies,"
(So, by God, let it be MEEEEE. Hard to believe this paragraph began with the claim that "the movement is driven by an ethical imperative." What kind of ethical imperative, one wonders, drives you into a Robot Overlord arms race with unspecified antagonists for control of the world, exactly?)
The theme of saving humanity continues with presentations on... raising baby AIs in the virtual world of Second Life, as well as surveillance tactics for weeding out techno-terrorists and a suggested solution for the population explosion: uploading 10 million people onto a 50-cent computer chip.
(All Very Serious, indeed.)
More immediate issues facing humanity, such as poverty, pollution and the devastation of war, tend to get ignored.
(Hm. Fancy that.)
I discover the less egalitarian side to the transhumanist community
(You mean, even less egalitarian?)
when I meet Marvin Minsky, the 80-year-old originator of artificial neural networks and co-founder of the AI lab at the Massachusetts Institute of Technology. "Ordinary citizens wouldn't know what to do with eternal life," says Minsky. "The masses don't have any clear-cut goals or purpose." Only scientists, who work on problems that might take decades to solve, appreciate the need for extended lifespans, he argues.
He is also staunchly against regulating the development of new technologies.
(Shall I pretend to be shocked?)
"Scientists shouldn't have ethical responsibility for their inventions, they should be able to do what they want," he says. "You shouldn't ask them to have the same values as other people."
(Marvin Minsky, ladies and gentlemen.)
The transhumanist movement has been struggling in recent years with bitter arguments between democrats like [James] Hughes and libertarians like Minsky. Can [unofficial "prophet," Ray] Kurzweil's keynote speech unite the opposing factions?
(Let me reiterate, in my view these factions are easily reconciled: the democrats need only be tolerated so long as they provide respectable cover for the reactionaries among them, meanwhile both sides foreground their shared technological enthusiasm to the exclusion of their real substantive political differences -- so divisive! so negative! -- with the consequence that the incumbent corporatist interests that overwhelmingly shape technological discourse always actually benefit without having to fight for this outcome, the reactionaries get something for nothing and the democrats get nothing in exchange for everything. Hey, what's not to like?)
On the final day of the meeting… Kurzweil offers a few possible solutions to today's global dilemmas, such as nano-engineered solar panels to free the world from its addiction to fossil fuels.
(He means, surely, that we must all struggle to fund and regulate and educate and promote such technodevelopmental outcomes in the public interest? That we must all learn from our many historical mistakes that we have to attend to the actual diversity of stakeholders to technodevelopmental change? That the distribution of costs, risks, and benefits of technoscience must better reflect that diversity, else "development" become a short-sighted parochial environmentally unsustainable socially destabilizing project of planetary precarization, exploitation, confiscation, and violence? Right? Right?)
But he is opposed to taxpayer-funded programmes such as universal healthcare as well as any regulation of new technology, and believes that even outright bans will be powerless to control or delay the end of humanity as we know it.
(Wow, I guess he did provide a way to unite the democratic and reactionary transhumanists, after all! What a relief!)
"People sometimes say, 'Are we going to allow transhumanism and artificial intelligence to occur?'" he tells the audience. "Well, I don't recall when we voted that there would be an internet."
(Ray Kurzweil, ladies and gentlemen. Unofficial "prophet" of the transhumanist movement. It should go without saying, by the way, that those of us fighting for Net Neutrality, p2p, a2k, FLOSS, and so on are engaged in precisely the kind of democratic social struggle that is being denigrated in the glib dismissal of the very idea that "we voted that there would be an internet.")
Remember the hyperbolic questions with which the piece began? "What if they ended up creating a race of elite superhumans bent on enslaving the unmodified masses, or unwittingly programmed an army of self-replicating nanobots that would turn us all into grey goo?" Look no further for the source of worries such as these. They arise from the self-congratulatory and anti-social pronouncements of transhumanists themselves.