Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, October 31, 2007

Richard Jones on "The Uses and Abuses of Speculative Futurism"

Over at his blog Soft Machines, Richard Jones has expanded and clarified his own discussion of "Superlativity" and "Speculative Futurism" in response to some criticisms he has received (and some he has observed me receiving lately).

He expresses a perplexity that I have to admit feels very much like my own when he declares: "I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible."

Later in the comments section he makes a comparable claim about the faith in an Artificial General Intelligence that organizes the Singularitarian sub(cult)ure: "[D]iscussions of likely futures… go well beyond making lists of plausible technologies to consider the socio-economic realities that determine whether technologies will actually be adopted. One also needs to recognise that some advances are going to need conceptual breakthroughs whose nature or timing simply cannot be predicted, not just technology development (I believe AGI to be in this category)."

Needless to say, I think that the claims of so-called "Technological Immortalists" that the personal lives of people now living might plausibly be immortalized by recourse to emerging genetic therapies, "uploading" selves into informational forms, or cryonic suspension also belong to this category.

Taken together, these three basic technodevelopmental derangements constitute what I have described elsewhere as the key super-predicated idealized "outcomes" that drive much contemporary Superlative Technology Discourse: Superintelligence, Superlongevity, Superabundance. Thus schematized, it isn't difficult to grasp that Superlative Technology Discourse will depend for much of its intuitive force on its citation and translation of the omni-predicated terms -- omniscience, omnipotence, omnibenevolence -- that have long "characterized" godhood for those who aspire to "know" it, but now updated into hyperbolic pseudo-scientific "predictions" for those who would prosthetically aspire to "achieve" it in their own persons.

It is interesting to note that these idealizations also organize the Sub(cult)ural Futurist formations to which I also direct much of my own Critique: In place of open futures arising out of the unpredictable contestation and collective effort of a diversity of stakeholders with whom we share and are building the world in the present that becomes future presents, Sub(cult)ural Futurists substitute idealized outcomes with which they have identified and which they seek to implement unilaterally in the world. This identificatory gesture in Sub(cult)ural Futurists tends to be founded on an active dis-identification with the present (and with futurity as future presents open in the way the political present is open) and an identification with idealized futures in which they make their imaginative home, as well as on a correlated active dis-identification with that portion of the diversity of stakeholders -- with whom they in fact share the world, and any actual futures available to that world -- whom they take to oppose their implementation of that idealized future.

And so, Richard Jones writes: "The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking." And it isn't surprising that one can reel off with utter documentary ease a host of curious marginal sub(cult)ural self-identifications affirmed by most of the prominent participants in Superlative Technology Discourse -- Extropians, Transhumanists, Singularitarians, Immortalists (it seems to me there are further connections to Randian Objectivism, and illuminating resonances one discerns in comparing them to Scientology, Raelians, and even Mormonism) -- nor surprising to stumble on conjurations of tribal "outsiders" against which Sub(cult)ural Futurists imagine themselves arrayed -- "Luddites," "Deathists," "Postmodernists," and so on.

To those who charge that his critique (and by extension, my own) amounts to a straitjacketing of speculative imagination, Jones offers up this nice response with which I have quite a lot of sympathy:
[M]y problem is not that I think that transhumanists have let their imaginations run wild. Precisely the opposite, in fact; I worry that transhumanists have just one fixed vision of the future, which is now beginning to show its age somewhat, and are demonstrating a failure of imagination in their inability to conceive of the many different futures that have the potential to unfold.

And as I have pointed out elsewhere myself, these Superlative and especially Sub(cult)ural Futurisms tend to have
a highly particular vision of what the future will look like, and [are] driven by an evangelical zeal to implement just that future. It is a future with a highly specific set of characteristics, involving particular construals of robotics, artificial intelligence, nanotechnology, and technological immortality (involving first genetic therapies but culminating in a techno-spiritualized "transcendence" of the body through digitality). These characteristics, furthermore, are described as likely to arrive within the lifetimes of lucky people now living and are described as inter-implicated or even converging outcomes, crystallizing in a singular momentous Event, the Singularity, an Event in which people can believe, about which they can claim superior knowledge as believers, which they go on to invest with conspicuously transcendental significance, and which they declare to be unimaginable in key details but at once perfectly understood in its essentials. [The] highly specific vision in Stiegler's story ["The Gentle Seduction"] is one and the same with the vision humorously documented in Ed Regis's rather ethnographic Great Mambo Chicken and the Transhuman Condition, published… in 1990, and in Damien Broderick's The Spike, published twelve years later, and, although the stresses shift here and there… sometimes emphasizing robotics and consciousness uploading (as in Hans Moravec's Mind Children…), sometimes emphasizing Drexlerian nanotechnology (as in Chris Peterson's Unbounding the Future), or longevity (as in Brian Alexander's Rapture), it is fairly astonishing to realize just how unchanging this vision is in its specificity, in its ethos, in its cocksure predictions, even in its cast of characters. Surely a vision of looming incomprehensible transformation should manage to be a bit less… static than transhumanism seems to be?

Jones goes on to remind us, crucially, just how much "futurism is not, in fact, about the future at all -- it’s about the present and the hopes and fears that people have about the direction society seems to be taking now." This is why it can be so illuminating to treat futurological discourse generally as symptomatic rather than predictive; it also explains, when we make the mistake of taking it at "face value" as a straightforwardly predictive exercise, "precisely why futurism ages so badly, giving us the opportunity for all those cheap laughs about the non-arrival of flying cars and silvery jump-suits." When tech talk turns Superlative, I fear, we are relieved of the necessity to wait: the cheap laughs and groaners are abundantly available already in the present.

Saturday, October 27, 2007

My Failures of Imagination

Upgraded and adapted from Comments. I fear my response to long-time Friend of Blog Giulio Prisco is a bit testy, but I've been told that my writing is clearer when I am in this temper, and I've also been told that my Superlative Critique is presented in terms that are too abstruse in general, whatever its usefulness, so here goes:

I criticize your intolerance for those who, while basically agreeing with you on the points above, have ideas different from yours on other, unrelated things, and affirm their right to think with their own head.

I distinguish instrumental, moral, esthetic, ethical, and political modes of belief. (I spell out this point at greater length here, among other places.) Rationality, for me, consists not only in asserting beliefs that comport with the criteria of warrant appropriate to each mode, but also in applying to each of our different ends the mode actually appropriate to it.

I'm perfectly tolerant of estheticized or moralized expressions of religiosity, for example, but I keep making the point that religiosity (even in its scientistic variations) when misapplied to domains, ends, and situations for which it is categorically unsuited creates endless mischief.

Superlativity as a discourse consists of a complex of essentially moral and esthetic beliefs mistaking themselves for, or overambitiously seeking to encompass, other modes of belief.

This sort of thing is quite commonplace in fundamentalist formations, as it happens, and one of the reasons religiosity comes up so often in discussions of Superlativity is that many people already have a grasp of what happens when fundamentalist faiths are politicized or pseudo-scientized, and so the analogy (while imperfect in some respects) can be a useful way to get at part of the point of the critique of Superlativity.

Because, my friend, you will never persuade me that one who finds intellectual or spiritual pleasure in contemplating nanosanta-robot god-superlative technology-etc. cannot be a worthy political, social and cultural activist.

This line is total bullshit, and I'm growing quite impatient with it. I don't know how else to say this; I feel like throwing up my hands. Look, I'm a big promiscuous fag, a theoryhead aesthete, and an experimentalist in matters of, well, experiences available at the, uh, extremes as these things are timidly reckoned among the charming bourgeoisie. Take your pleasures where you will, I say, and always have done. Laissez les bons temps rouler. I'm a champion of multiculture, experimentalism, and visionary imagination, and that isn't exactly a secret given what I write about endlessly here and elsewhere.

But -- now read this carefully, think about what I am saying before you reply -- if you pretend your religious ritual makes you a policy wonk expect me to call bullshit; if you demand that people mistake your aesthetic preferences and preoccupations for scientific truths expect me to call bullshit; if you go from pleasure in to proselytizing for your cultural and subcultural enthusiasms expect me to call bullshit; if you seek legitimacy for authoritarian circumventions of democracy in a marginal defensive hierarchical sub(cult)ural organization or as a way to address risks you think your cronies see more clearly than the other people in the world who share those risks and would be impacted by your decisions, all in the name of "tolerance," expect me to call bullshit.

"I can believe in Santa Claus and Eastern Bunny if I like, and still agree with you on political issues."

No shit, Sherlock. I've never said otherwise.

But -- If you form a Santa cult and claim Santa Science needs to be taught in schools instead of Darwin, or if you become a Santa True Believer who wants to impose his Santa worldview across the globe as the solution to all the world's problems, or if you try to legitimize the Santalogy Cult by offering up "serious" policy papers on elf toymaking as the real solution to global poverty and then complain that those who expose this as made-up bullshit are denying the vital role of visionaries and imagination and so on, well, then that's a problem. (Please don't lose yourself in the details of this off-the-cuff analogy drawn from your own comment, by the way; I'm sure there are plenty of nit-picky disanalogies here; I'm just making a broad point that anybody with a brain can understand.)

Unless, of course, you persuade me that the two things are really incompatible.

I despair of the possibility of ever managing such a feat with you. (Irony-impaired readership: insert smiley here.)

I will gladly take the Robot God and Easter Bunny then.

Take Thor for all I care. None of them exist, and any priesthood that tries to shore up political authority by claiming to "represent" them in the world I will fight as a democrat opposed to elites -- whether aristocratic, priestly, technocratic, oligarchic, military, "meritocratic" or what have you. I can appreciate the pleasures and provocations of a path of private perfection organized through the gesture of affirming faith in a Robot God, Thor, or the Easter Bunny. I guess.

I have no "trouble" with spirituality, faith, aestheticism, moralism in their proper place, even where their expressions take forms that aren't my own cup of tea in the least. I've said this so many times by now that your stubborn obliviousness to the point is starting to look like the kind of conceptual impasse no amount of argument can circumvent between us.

Perhaps you guys are so scared of "superlative technology discourse" because you are afraid of falling back into the old religious patterns of thought, that perhaps you found difficult to shed.

I've been a cheerful nonjudgmental atheist for twenty-four years. It wasn't a particularly "difficult" transition for me, as it happens. Giving up pepperoni when I became a vegetarian was incomparably more difficult for me than doing without God ever was. And I'm not exactly sure what frame of mind you imagine I'm in when I delineate my Superlative Discourse Critiques when you say I'm "so scared." I think Superlativity is wrong, I think it is reckless, I think it comports well with a politics of incumbency I abhor, I think it produces frames and formulations that derange technodevelopmental discourse at an historical moment when public deliberation on technoscientific questions urgently needs to be clear. It gives me cause for concern, it attracts my ethnographic and critical interest. But "so scared"? Don't flatter yourself.

Some of us, yours truly included, never gave much importance to religion. So we feel free to consider interesting ideas for their own sake, regardless of possible religious analogies.

You are constantly claiming to have a level of mastery over your conscious intentions and expression that seems to me almost flabbergastingly naïve or even deluded. It's very nice that you feel you have attained a level of enlightenment that places you in a position to consider ideas "for their own sake," unencumbered, one presumes, by the context of unconscious motives, unintended consequences, historical complexities, etymological sedimentations, figural entailments, and so on. I would propose, oh so modestly, that no one deserves to imagine themselves enlightened in any useful construal of the term who can't see the implausibility of the very idea of the state you seem so sure you have attained.

Friday, October 26, 2007

A Superlative Schema

In the first piece I wrote critiquing Superlative Technology Discourse a few years ago, "Transformation Not Transcendence," I wrote that
It pays to recall that theologians never have been able comfortably to manage the reconciliation of the so-called omnipredicates of an infinite God. Just when they got a handle on the notion of omnipotence, they would find it impinging on omniscience. If nothing else, the capacity to do anything would seem to preclude the knowledge of everything in advance. And of course omnibenevolence never played well with the other predicates. How to reconcile the awful with the knowledge of it and the power to make things otherwise is far from an easy thing, after all… As with God, so too with a humanity become Godlike. Any “posthuman” conditions we should find ourselves in will certainly be, no less than the human ones we find ourselves in now, defined by their finitude. This matters, if for no other reason, because it reminds us that we will never transcend our need of one another.
My point in saying this was to highlight the incoherence in principle of the superlative imaginary, to spotlight what looks to me like the deep fear of finitude and contingency (exacerbated, no doubt, by the general sense that we are all of us caught up in an especially unsettling and unpredictable technoscientific storm-churn) that drives this sort of hysterical transcendental turn, and to propose in its stead a deeper awareness and celebration of our social, political, and cultural inter-dependence with one another to cope with and find meaning in the midst of this change.

Of course, there is no question that no technology, however superlative, could deliver literally omni-predicated capacities, nor is it immediately clear even how these omni-predicates might function as regulative ideals given their basic incoherence (although this sort of incoherence hasn't seemed to keep "realists" from claiming interminably that vacuous word-world correspondences function as regulative ideals governing warranted assertions concerning instrumental truth, so who knows?). Rather like the facile faith of a child who seeks to reconcile belief with sense by imagining an unimaginable God as an old man with a long beard in a stone chair, Superlativity would reconcile the impossible omni-predicated ends to which it aspires with the terms of actual possibility through a comparable domestication: of Omniscience into "Superintelligence," of Omnipotence into "Supercapacitation" (especially in its "super-longevity" or techno-immortalizing variations), of Omnibenevolence into "Superabundance."

In such Superlative Technology Discourses, it will always be the disavowed discourse of the omni-predicated term that mobilizes the passion of Superlative Techno-fixations and Techno-transcendentalisms and organizes the shared identifications at the heart of Sub(cult)ural Futurisms and Futurists. Meanwhile, it will be the disavowed terms of worldly and practical discourses that provide all the substance on which these Superlative discourses finally depend for their actual sense: Superintelligence will have no actual substance apart from Consensus Science and other forms of warranted knowledge and belief, Supercapacitation (especially the superlongevity that is the eventual focus of so much "enhancement" talk) will have no actual substance apart from Consensual Healthcare and other forms of public policy administered by harm-reduction norms, Superabundance will have no actual substance apart from Commonwealth and other forms of public investment and private entrepreneurship in the context of general welfare. In each case a worldly substantial reality -- and a reality substantiated consensually, peer-to-peer, at that -- is instrumentalized, hyper-individualized, de-politicized via Superlativity in the service of a transcendental project re-activating the omni-predicates of the theological imaginary.

As with most fundamentalisms -- that is to say, as with all transcendental projects that redirect their energies to political ends to which they are categorically unsuited -- whenever Superlativity shows the world its Sub(cult)ural "organizational" face, it will be the face of moralizing it shows, driven by the confusion of the work of morals/mores with that of ethics/politics, a misbegotten effort to conflate the terms of private-parochial moral or aesthetic perfection with the terms of public ethics (which formally solicits universal assent to normative prescriptions), politics (which seeks to reconcile the incompatible aspirations of a diversity of peers who share the world), and science (which provisionally attracts consensus to instrumental descriptions).

Very schematically, I am proposing these correlations:

OMNI-PREDICATED THEOLOGICAL / TRANSCENDENTAL DISCOURSE

Omniscience
Omnipotence
Omnibenevolence

SUPER-PREDICATED SUPERLATIVE DISCOURSE

Superintelligence
Supercapacitation (often amounting to Superlongevity)
Superabundance

WORLDLY SUBSTANTIAL (democratizing/p2p) DISCOURSE

Reasonableness -- that is to say, the work and accomplishments of Warranted Beliefs applied in their proper plural precincts, scientific, moral, aesthetic, ethical, political, legal, commercial, etc.
Civitas -- that is to say, the work and accomplishments of Consensual Culture, where culture is presumed to be co-extensive with the prosthetic, and health and harm reduction policy are construed as artifice.
Commonwealth -- that is to say, the work and accomplishments of collaborative problem-solving, public investment, and private entrepreneurship in the context of consensual civitas.

On one hand, the Super-Predicated term in a Superlative Technology Discourse always deranges and usually disavows altogether -- while, crucially, nonetheless depending on -- the collaboratively substantiated term in a Worldly Discourse correlated with it, while on the other hand activating the archive of figures, frames, irrational passions, and idealizations of the Omni-Predicated term in a Transcendental Discourse (usually religious or pan-ideological) correlated with it. The pernicious effects of these shifts are instrumental, ethical, and political in the main, but quite various in their specificities.

That complexity accounts for all the ramifying dimensions of the Superlativity Critique one finds in the texts collected in my Superlative Summary at this point. I would like to think one discerns in my own formulations some sense of what more technoscientifically literate and democratically invested worldly alternatives to Superlativity might look like. In these writings, I try to delineate a perspective organized around a belief in technoethical pluralism, an insistence on a substantiated rather than vacuous scene of informed, nonduressed consent, the consensualization of non-normative experimental medicine (as an elaboration of the commitment to a politics of Choice) and the diversity of lifeways arising from these consensual practices, the ongoing implementation of sustainable, resilient, experimentalist, open, multicultural, cosmopolitan models of civilization, the celebration and subsidization of peer-to-peer formations of expressivity, criticism, credentialization, and the collaborative solution of shared problems, and, through these values and for them, a deep commitment to the ongoing democratization of technodevelopmental social struggle -- using technology (including techniques of education, agitation, organization, legislation) to deepen democracy, while using democracy (the nonviolent adjudication of disputes, good accountable representative governance, legible consent to the terms of everyday commerce, collective problem-solving, peer-to-peer, ongoing criticism and creative expressivity) to ensure that technology benefits us all, as Amor Mundi's signature slogan more pithily puts the point.

It should go without saying that there simply is no need to join a marginal Robot Cult as either a True Believer or would-be guru to participate in technodevelopmental social struggle peer-to-peer, nor to indulge in the popular consumer fandoms, digital plutocratic financial and developmental frauds, or pseudo-scientific pop-tech infomercial entertainment of more mainstream futurology. There is no need to assume the perspective of a would-be technocratic elite. There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history." There is nothing gained in claiming to be "pro-technology" or "anti-technology" at a level of generality at which no technologies actually exist. There is nothing gained in forswearing the urgencies of today for an idealized and foreclosed "The Future," nor in dis-identifying with your human peers so as to better identify with imaginary post-human or transhuman ones. There is nothing gained in the consolations of faith when there is so much valuable, actual work to do, when there are so many basic needs to fulfill, when there is so much pleasure and danger in the world of our peers at hand. There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents are exposed now as having no power left but the power to destroy the world and open futurity altogether.

The Superlative Technology Critique is not finally a critique about technology, after all, because it recognizes that "technology" is functioning as a surrogate term in the discourses it critiques: the evocation of "technology" functions symptomatically in these discourses and sub(cult)ures. The critique of Superlativity is driven first of all by commitments to democracy, diversity, equity, sustainability, and substantiated consent. I venture to add, it is driven by a commitment to basic sanity, sanity understood as a collectively substantiated worldly and present concern itself. The criticisms I seem to be getting are largely from people who would either deny the relevance of my own political, social, and cultural emphasis altogether (a denial that likely marks them as unserious as far as I'm concerned) or who disapprove of my political commitment to democracy, my social commitment to commons, and my cultural commitment to planetary polyculture (a disapproval that likely marks them as reactionaries as far as I'm concerned). There is much more for me to say in this vein, and of course I will continue to do so as best I can, and everyone is certainly free and welcome to contribute to or to disdain my project as you will, but I am quite content with the focus my Critique has assumed so far and especially with the enormously revealing responses it seems to generate.

Sanewashing Superlativity (For a More Gentle Seduction)

In his latest deft response to my "so-called [?] Superlative Technology Critique," Michael Anissimov reassures his readers that "[Richard] Jones and Carrico are both wrong [to say] that transhumanists have a 'strong, pre-existing attachment to a particular desired outcome.' A minority of transhumanists maybe, but not a majority." Since Michael isn't a muzzy literary type like me ("Is this critique postmodernism or something?" wonders one of Michael's readers anxiously in his Comments section; "No, no: It's Marxism," another reader grimly corrects her), we can be sure that when Michael insists that "a majority" of "transhumanists" have no strong attachments to particular desired outcomes, well, no doubt he has crunched all the relevant numbers before saying so. Who am I to doubt him?

"What transhumanists want is for humanity to enjoy healthier, longer lives and higher standards of living provided by safe, cheap, personalized products," Anissimov patiently explains. Since there are hundreds of millions of people who would surely cheerfully affirm such vacuities (among them, me) and yet after over twenty years of organizational effort the archipelago of technophilic cult organizations that trumpet their "transhumanism" -- so-called! -- has never managed yet to corral together more than a few thousand mostly North Atlantic white middle-class male enthusiasts from among these teeming millions to their Cause, one suspects that there may be some more problematic transhumanistical content that is holding them back. Contrary to the rants about a dire default "Deathism" and "Luddism" in the general populace one hears from some transhumanists exasperated that their awesome faith, er, "movement," has not yet swept the world, I will venture to suggest that it isn't actually a rampaging general desire for short unhealthy unsafe unfree lives of poverty or feudalism that keeps all these people from joining their fabulous Robot Cult.

Back in 1989 Marc Stiegler wrote a short story entitled "The Gentle Seduction" that has assumed a special place in the transhumanist sub(cult)ural imaginary. In the opening passage one of the main characters, Jack, asks the other main character -- who never gets a name, interestingly enough, and is referred to merely pronominally as "her" and "she" -- the following portentous question: "Have you ever heard of Singularity?" "She" hasn't, of course, and Jack explains the notion with relish:
"Singularity is a time in the future as envisioned by Vernor Vinge. It'll occur when the rate of change of technology is very great -- so great that the effort to keep up with the change will overwhelm us. People will face a whole new set of problems that we can't even imagine." A look of great tranquility smoothed the ridges around his eyes.

It is very curious that after the conjuration of such a looming, unimaginably transformative and overwhelming change Jack would become tranquil rather than concerned, as any sensible person, however optimistic, would be at such a prospect -- but of course the reason for this is that he is lying. Already we have been told that when he speaks "of the future… [it was as if] he could see it all very clearly. He spoke as if he were describing something as real and obvious as the veins of a leaf..." Of course, in Superlative discourses, especially in the Singularitarian variations that would trump history through a technodevelopmental secularization of the Rapture, the term "unimaginable" is deployed rather selectively: to invest pronouncements with an appropriately prophetic cadence or promissory transcendentalizing significance, or to finesse the annoying fact that while Godlike outcomes are presumably certain, the ways through all those pesky intermediary technical steps and political impasses that stand between the way we live now and all those marvelous Godlike eventualities remain conspicuously uncertain.

The future that Jack "sees" so clearly, as it happens, is not one he characterizes in Anissimov's reassuringly mainstream terms; that is to say, as a future in which people "enjoy healthier, longer lives and higher standards of living provided by safe, cheap, personalized products." No, Jack insists, in his future "you'll be immortal." But, wait, there's more. "You'll have a headband… It'll allow you to talk right to your computer." He continues on: "[W]ith nanotechnology they'll build these tiny little machines -- machines the size of a molecule… They'll put a billion of them in a spaceship the size of a Coke can, and shoot it off to an asteroid. The Coke can will rebuild the asteroid into mansions and palaces. You'll have an asteroid all to yourself, if you want one." Gosh, immortality alone on an asteroid stuffed with mansions and jewels and a smart AI to keep you company. How seductive (see story title)! Even better is this rather gnomic addendum, a favorite of would-be gurus everywhere: "'I won't tell you all the things I expect to happen,' he smiled mischievously, 'I'm afraid I'd really scare you!" Father Knows Best, eh? And it's hard not to like the boyish oracularity of that "mischievous smile." As the story unfolds, we discover that Jack likely refers here to the fact that "she" will eventually download her consciousness into a series of increasingly exotic, and eventually networked, robot "bodies" and then utterly disembodied informational forms.

The story is a truly odd and symptomatic little number -- definitely an enjoyable and enlightening read for all that -- juxtaposing emancipatory rhetoric in a curious way with the sort of reactionary details one has come to expect from especially American technophilic discourse. (For some of the reasons this is so, Richard Barbrook and Andy Cameron's The Californian Ideology always repays re-reading, as does Jedediah Purdy's The God of the Digerati, and Paulina Borsook's excellent book Cyberselfish, which entertainingly provides a wealth of supplementary detail.) The very first sentence mobilizes archetypes so bruisingly old-fashioned (but, you know, it's the future!) as to make you blush even if you have never heard of eco-feminism: "He worked with computers; she worked with trees." By sentence two we are squirming with discomfort: "She was surprised that he was interested in her. He was so smart; she was so… normal." ("Normal" people aren't "smart"? "Normal" people should feel privileged when our smart betters deign to notice us?) Later in the story, progress and emancipation and even revolution are drained of social struggle and political content altogether and reduced to a matter of shopping for ever more powerful gizmos offered for sale in catalogues -- elaborate robots, rejuvenation pills, genius pills, brain-computer interfaces, robot bodies, the promised asteroid mansions, and so on. Politics as consumption, how enormously visionary. One also detects in the story a discomforting insinuation of body-loathing, rather like the hostility to the "meat body" one encounters in some Cyberpunk fiction: first in the curious fact that Jack and the unnamed protagonist sleep together but never have sex (an odd detail in a story that so clearly means to invoke the conventions of romantic love), and then in the fact that the emancipatory sequence of technological empowerments undergone by the protagonist is always phrased as a series of relinquishments -- of her morphology, of her body, of embodiment altogether, of narrative selfhood by the end -- each relinquishment signaled by the repeated refrain, "it just didn't seem to matter," where it is a loss of matter that fails to matter.

Be all that as it may, the specific point I would want to stress here is that "The Gentle Seduction" has a highly particular vision of what the future will look like, and is driven by an evangelical zeal to implement just that future. It is a future with a highly specific set of characteristics, involving particular construals of robotics, artificial intelligence, nanotechnology, and technological immortality (involving first genetic therapies but culminating in a techno-spiritualized "transcendence" of the body through digitality). These characteristics, furthermore, are described as likely to arrive within the lifetimes of lucky people now living and are described as inter-implicated or even converging outcomes, crystallizing in a singular momentous Event, the Singularity, an Event in which people can believe, about which they can claim superior knowledge as believers, which they go on to invest with conspicuously transcendental significance, and which they declare to be unimaginable in key details but at once perfectly understood in its essentials. This highly specific vision in Stiegler's story is one and the same with the vision humorously documented in Ed Regis's rather ethnographic Great Mambo Chicken and the Transhuman Condition, published the following year, in 1990, and in Damien Broderick's The Spike, published twelve years later, and, although the stresses shift here and there, sometimes emphasizing connections between cybernetics and psychedelia (as in early Douglas Rushkoff), sometimes emphasizing robotics and consciousness uploading (as in Hans Moravec's Mind Children -- whose work is critiqued exquisitely in N. Katherine Hayles's How We Became Posthuman), sometimes emphasizing Drexlerian nanotechnology (as in Chris Peterson's Unbounding the Future), or longevity (as in Brian Alexander's Rapture), it is fairly astonishing to realize just how unchanging this vision is in its specificity, in its ethos, in its cocksure predictions, even in its cast of characters. Surely a vision of looming incomprehensible transformation should manage to be a bit less… static than transhumanism seems to be?

Although Anissimov wants to reassure the world that transhumanists have no peculiar commitments to particular superlative outcomes, one need only read any of them for any amount of time to see the truth of the matter. Far more amusing than his denials and efforts at organizational sanewashing, however, is his concluding admonishment of those -- oh, so few! -- transhumanists or Singularitarians who might be vulnerable to accusations of Superlativity: "If any transhumanists do have specific attachments to particular desired outcome," Anissimov warns, "I suggest they drop them — now." Well, then, that should do it. "The transhumanist identity," he continues, "should not be defined by a yearning for such outcomes. It is defined by a desire to use technology to open up a much wider space of morphological diversity than experienced today." It is very difficult to see how a transhumanist "identity" would long survive being evacuated of its actual content apart from a commitment to something that looks rather like mainstream secular multicultural pro-choice attitudes, attitudes that seem to thrive quite well, thank you very much, without demanding people join Robot Cults. The truth is, of course, that this is all public relations spin on the part of a Director of the Singularity Institute for Artificial Intelligence (Robot Cult Ground Zero), co-founder of The Immortality Institute (a Technological Immortalist outfit), and all-around muckety-muck and bottle-washer in the World Transhumanist Association (Sub(cult)ural Superlativity Grand Central Station), and so on. Although one can be sure that none of the sub(cult)ural futurists among his readership will really take Michael up on his suggestion to icksnay on the azycray obotray ultcay stuff in public places, at least he has posted something to which he can regularly refer whenever sensible people gently suggest he and his friends are sounding a little bit nuts on this or that burning issue concerning Robot Gods Among Us, the Pleasures of Spending Eternity Uploaded into a Computer, or coping with the Urgent Risks of a World Turned into Nano-Goo.

I will remind my own readers that Extropians, Dynamists, Raelians, Singularitarians, Transhumanists, Technological Immortalists and so on have formed a number of curious subcultures and advocacy organizations which I regularly castigate for their deranging impact on technodevelopmental policy discourse and for the cult-like attributes they seem to me to exhibit. Since these organizations and identity movements are really quite marginal as far as their actual memberships go, it is important to stress that, apart from some practical concerns I have about the damaging and rather disproportionate voice these Superlative Sub(cult)ural formations have in popular technology discourse and on public technoscientific deliberation, it is really the way these extreme sub(cult)ures represent and symptomize especially clearly what are more prevailing general attitudes toward, and broader tendencies exhibited in, technodevelopmental change that makes them interesting to me and worthy of this kind of attention.

Wednesday, October 24, 2007

Richard Jones Critiques Superlativity

Over on the blog Soft Machines yesterday, Richard Jones -- a professor of physics, science writer, and currently Senior Advisor for Nanotechnology for the UK's Engineering and Physical Sciences Research Council -- offered up an excellent (and far more readable than I tend to manage) critique of Superlative Technology Discourses, in a nicely portentously titled post, “We will have the power of the gods”. Follow the link to read the whole piece; here are some choice bits:
"Superlative technology discourse… starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress combating the diseases of aging. The discussion leaps ahead of the issues that such technologies might give rise to at the present and in the near future, and goes straight on to a discussion of the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening….

[T]his renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups -- such as transhumanists and singularitarians -- with a strong, pre-existing attachment to a particular desired outcome -- an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking….

The difficulty that this situation leaves us in is made clear in [an] article by Alfred Nordmann -- “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor…

Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science -- areas directly relevant to the technologies under discussion -- in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.

More like this, please.

Superlativity as Anti-Democratizing

Upgraded and adapted from Comments:

Friend of Blog Michael Anissimov said: Maybe "superlative" technologies have a media megaphone because many educated people find these arguments persuasive.

There is no question at all that many educated people fall for Superlative Technology Discourses. It is very much a discourse of reasonably educated, privileged people (and also, for that matter, mostly white guys in North Atlantic societies). One of the reasons Superlativity comports so well with incumbent interests is that many of its partisans either are or identify with such incumbents themselves.

However, again, as I have taken pains to explain, even people who actively dis-identify with the politics of incumbency might well support such politics inadvertently through their conventional recourse to Superlative formulations, inasmuch as these lend themselves so forcefully to anti-pluralistic reductionism, to elite technocratic solutions and policies, to the naturalization of neoliberal corporate-military "competitiveness" and "innovation" as the key terms through which technoscientific "development" can be discussed, to a special vulnerability to hype, groupthink, and True Belief, and so on, all of which tend to conduce to incumbent interests and reactionary politics in general.

If a majority

Whoa, now, just to be clear: The "many" of your prior sentence, Michael, represents neither a "majority" of "educated" people (on any construal of the term "educated" I know of), nor a "majority" in general.

If a majority decides to allocate research funds towards Yudkowskian AGI and Drexlerian MNT, who would you be to question the democratic outcome?

Who would I be to question a democratic outcome? Why, a democratic citizen with an independent mind and a right to free speech, that's who.

I abide by democratic outcomes even where I disapprove of them from time to time, and then I make my disapproval known and understood as best I can in the hopes that the democratic outcome will change for the better -- or, if I fervently disapprove of such an outcome, I might engage in civil disobedience and accept the criminal penalty involved to affirm the law while disapproving the concrete outcome. All that is democracy, too, in my understanding of it.

In the past, Michael, you have often claimed to be personally insulted by my suggestions that Superlative discourses have anti-democratizing tendencies -- you have wrongly taken such claims as equivalent to the accusation that Superlative Technocentrics are consciously Anti-Democratic, which is not logically implied in the claim at all (although I will admit that the evidence suggests that Superlativity is something of a strange attractor for libertopians, technocrats, Randroids, Bell Curve racists and other such anti-democratic dead-enders). For me, structural tendencies to anti-democratization are easily as or more troubling than explicit programmatic commitments to anti-democracy (which are usually marginalized into impotence in reasonably healthy democratic societies soon enough, after all). When you have assured me that you are an ardent democrat in your politics yourself, whatever your attraction to Superlative technodevelopmental formulations, I have tended to take your word for it.

But when you seem to suggest that "democracy" requires that one "not question" democratic outcomes, I find myself wondering why on earth you would advocate democracy on such terms. It's usually only reactionaries, after all, who falsely characterize democracy as "mob rule" -- and they do so precisely because they hate democracy and denigrate common people (with whom they dis-identify). Actual democratically-minded folks tend not to characterize their own views in such terms. Democracy is just the idea that people should have a say in the public decisions that affect them -- for me, democracy is a dynamic, experimental, peer-to-peer formation.

Because that [AGI/MNT funding] is what is likely going to happen in the next couple decades.

Be honest: if you, just as you are now, had been around twenty years ago, would you have said the same thing then? What could happen in twenty years' time to make you say otherwise?

I personally think it is an arrant absurdity to think that majorities will affirm specifically Yudkowskian or Drexlerian Superlative outcomes by name in two decades. Of the two, only Drexler seems to me likely to be remembered at all, on my reckoning (don't misunderstand me: I certainly don't expect to be "remembered" myself, and I don't think that is an indispensable measure of a life well-lived, particularly).

On the flip side, it seems to me that once one has dropped the Superlative-tinted glasses, one can say that funding decisions by representatives democratically accountable to majorities are already funding research and development into nanoscale interventions and sophisticated software. I tend to be well pleased by that sort of thing, thank you very much. If one is looking for Robot Gods or Utility Fogs, however, I suspect that in twenty years' time one will find them on the same sf bookshelves where one properly looks for them today, or looked for them twenty years ago.

Saturday, October 20, 2007

Superlative Church

Upgraded and Adapted from Comments:

ONE

Me: "Do I have to remind you that you have responded in the past to some of my characterizations of Superlative outcomes as implausible by arguing not that they were wrong but that they constituted defamation against transhumanists like you?"

Giulio Prisco: Do I have to remind you that what I told you was that the TONE and LANGUAGE you used were unnecessarily conflictive and insulting -- not an argument about your ideas, just a remark about your lack of manners.

I must say that this seems a bit disingenuous to me. Of course you have castigated me for my tone and language and so on in the past many times, but that's hardly the substance of the discussion we've been having here.

Look, I am offering up rhetorical, cultural, and political analyses delineating general tendencies (oversimplification of technodevelopmental complexities, fixations on particular idealized outcomes, vulnerabilities to technological determinism and technocratic elitism, and so on) that seem to me to arise from certain Superlative and Sub(cult)ural ways of framing technodevelopmental problems.

Individual people who read such analyses and then complain that I am insulting them are, for the most part, finding in what I say a shoe that fits and wearing it themselves. And that is simply not the same thing as me insulting them. It is basic incomprehension or cynical distortion to say otherwise (sometimes my critiques are of particular authors or particular texts, and the charge of personal insult could conceivably make sense in such contexts, but not when my analyses are general and when the categories I deploy name general tendencies and general social and cultural formations).

The fact is that you have actually compared your personal "transhumanist" identity, and, in earlier exchanges with me, your "Technological Immortalist" identity, to identity categories like being gay, or gypsy, and so on. Clearly these comparisons are designed to mobilize proper progressive intuitions about lifeway diversity in multiculture by analogy to persecuted minorities. I think this sort of analogy is wildly inappropriate, and perhaps your response here suggests that, upon further consideration, you have come to agree that it is. Maybe you weren't entirely conscious of your rhetoric here?

As you know, I am immensely interested in the politics and policy of ongoing technodevelopmental social struggle, and one of the things that troubles me enormously is that any shift from a deliberative/open into a subcultural/identity mode of technodevelopmental politics is going to be extremely vulnerable to mistaking critique for persecution, disagreement for defamation.

But how can one debate about a changing diversity of best technodevelopmental outcomes when some will feel threatened in their very identities by the prospect of a failure to arrive at their own conception of best outcomes? How can such subcultural identifications with particular outcomes comport with democratic intuitions that we must keep the space of deliberation radically open -- even as we struggle together to find our way to our provisional sense of best, fairest, safest, emancipatory outcomes -- so as always to remain responsive to the inevitable existing diversity of stakeholders to any technodevelopmental state of affairs, in the present now as well as in future presents to come?

This is why I stress that the anti-democratizing effects of Superlative and Sub(cult)ural Technocentrisms are often more structural than intentional: one can affirm democratic ideals and yet contribute to discursive subversions of democracy against the grain of one's affirmed beliefs in these matters. It completely misses the force of my point and the nature of my deepest worries to imagine that I am calling people anti-democratic names when I try to delineate these tendencies. If only it were so simple as a few anti-democratic bad apples! Such personalizations of the problem utterly trivialize the issues and stakes on my own terms, quite apart from the complaints of some of my conversational partners that their feelings have been hurt by what they see as unfair accusations or what have you.

None of this is to deny, by the way, that there are indeed explicit reactionaries and authoritarian personalities -- both gurus and followers -- to be found aplenty in Superlative Technocentric social formations. It is well documented that there are unusual numbers of both to be seen in these curious marginal spaces. And I have exposed and ridiculed these manifestations among the Extropians, Singularitarians, transhumanists, and so on many times before, and I will continue to spotlight and to ridicule them as they well deserve.

But my own sense is that it is the larger structural tendencies that preoccupy my own attention that make these formations strange attractors for some reactionaries, occasional authoritarians, legions of True Believers, and so on, rather than vice versa. And it is also true that these structural tendencies can yield their anti-democratizing effects just as well when Superlative and Sub(cult)ural Technocentrics have no explicit anti-democratizing intentions in the least.

Since you probably read all of those claims about general tendencies as personal insults in any case, it isn't entirely clear to me that you will have quite grasped the force of my critique by my lights, but such are the risks of interpersonal communication.

TWO

Me: "So you really think Superlative frames have no impact on your assessments of the significance and stakes of emerging genetic and prosthetic healthcare, nanoscale toxicity and sensors or current biotechnologies, security issues connected with networked malware today, cybernetic totalist ideology in contemporary coding cultures, and so on?

Giulio Prisco: Yes. Why should I have written it otherwise? The timescales involved are quite different aren't they? The Robot God, the Eschaton or whatever you like have nothing to do with health care and network security (and civil rights, and world peace, and...), so why should I let the RG have any impact on my assessments here and now?

Well, needless to say, not all Superlative Technocentrics would agree with you that the timescales are that different, inasmuch as they finesse this problem through the expedient recourse to accelerationalism, whereby extreme or distant outcomes are rendered "proximate" by way of accelerating change -- and even accelerating acceleration, to really confuse matters and make the hype more plausible.

But setting all that aside, you simply can't have thought about this issue very clearly. Of course becoming wedded to Superlative outcomes influences your sense of the stakes and significance of technoscience quandaries in the present.

Much of the force of Jaron Lanier's cybernetic totalism critique, for example, derives from the way he shows that faith in the Superlative outcome of Strong disembodied AI becomes a lens distorting the decisions of coders here and now. Word processing programs will correct authorial "errors" that aren't errors in fact, substituting the program's "judgment" for the author's, in part because too many coders see this crappy feature through the starry-eyed anticipation of an AI that will actually have judgments, properly speaking.

The fears and fantasies of medicalized immortality crazily distort contemporary bioethical framings of genetic and prosthetic medicine here and now, all the time, and almost always to the cost of sense. Surely you agree with that, at least when the distortions involve bioconservative intimations of apocalypse and, as they like to handwave, "Playing God," arising from research and development into new genetic and prosthetic medical techniques to relieve people from suffering unnecessarily from now-treatable diseases.

There are also incredibly energetic debates about whether the definition of "nanotechnology" will refer to current and proximately upcoming interventions at the nanoscale (and all their problems) or to more Superlative understandings of the term when public funds are disbursed or regulations contemplated.

So, of course your Superlative framing of technodevelopmental outcomes impacts your present perception of technodevelopmental stakes. I suspect that you are now going to walk back your claim yet again and try another tack altogether while claiming I have misunderstood you all along, correct?

THREE

Giulio Prisco: Note to Nick: I agree with "rooting for the Transhumanist Team is different from and secondary to actually trying to make the world better". This is not the issue here.

It seems to me that this IS INDEED no small part of the issue here.

I connect Sub(cult)ural Futurism to Superlative Technocentricity, inasmuch as a shared enthusiasm for particular, usually Superlative technodevelopmental outcomes is the bond that actually serves to maintain these subcultures. But the politics of subcultural maintenance in turn impose restrictions on the openness, experimentalism, and flexibility of the technoscientific deliberation you can engage in without risk to the solidarity of the identity formation itself.

This is why so many Superlative and Sub(cult)ural Technocentrics can constantly pretend that the future is going to be wildly different from the present and wildly soon, and yet the Superlative Sub(cult)ural vision of the future itself, from its depiction in Regis's Great Mambo Chicken, to Stiegler's "The Gentle Seduction," to Peterson and Drexler's Unbounding the Future, to Alexander's Rapture, to the latest pop futurological favorites in the Superlative mode, simply reproduces virtually the same static vision, over and over again, calling attention to the same "visionaries," the same promissory technologies (strong AI, ubiquitous automation, virtual reality, cryonics, nanotechnology, genetic medicine, and often "uploading" personality into information after first discursively reducing it to information already), the same appeals to "superhuman" capacities, technological immortality, personal wealth beyond the dreams of avarice and post-political abundance in general (usually framed in a way that appeals to neoliberal/libertarian anti-political intuitions), the same seductive conjuration of the conventional omni-predicates of theology -- omnipotence, omniscience, omnibenevolence, but this time personalized and prostheticized -- the same scientistic championing of reductive totalizing explanations coupled with anti-intellectual pillorying of every other kind of thought, poetic, cultural, political, and on and on and on. For two decades and counting the vision has remained unchanged in its essentials, including the insistence on utter, totalizing, accelerating, transcendentalizing change, usually uttered in the tonalities of ecstasy or of dread, a prophetic declaration of transformation that never seems to transform itself, of change that never seems to change, a static, dwindling, tired repetition of platitudes in the midst of a planetary technodevelopmental disruption (corporate precarization, catastrophic climate change, rampant militarization, emerging peer-to-peer network formations, emerging Green movements demanding decentralized nontoxic sustainable appropriate techs, emerging non-normativizing anti-eugenicist movements to democratize medical research, development, and provision, etc.) to which static Sub(cult)ural Superlativities seem to respond less and less in substance.

Superlative Sub(cult)ural Technocentrisms are too much like straightforward faiths, with a particular heaven in mind and a few Churches on hand with marginal memberships. And, as I keep saying, as esthetic and moral formations faithful lifeways seem to me perfectly unobjectionable even when they are not my own cup of tea. What is troubling about Superlativity is that its faithful seem curiously cocksure that they are champions of science rather than True Believers in the first place, which makes them ill-content to confine themselves to their proper sphere, that is, offering up to their memberships the moral satisfactions of intimate legibility and belonging as well as esthetic pathways for personal projects of perfection. They fancy themselves instead, via reductionism, to be making instrumental claims that solicit scientific consensus, or, via moralizing pan-ideology, to be making ethical claims that solicit universal assent. (For a sketch of my sense of the different modes of rationality in play here see my Technoethical Pluralism.)

This sort of delusion is common enough in variously faithful people (especially in the fundamentalist modes of belief that Sub(cult)ural Futurisms seem so often to typify) and would hardly qualify as particularly harmful or worthy of extended attention given the incredible marginality of these formations -- an abiding marginality that has remained unchanged for decades, after all. But Superlative Technology Discourses seem to me to have an enormous and disproportionately influential media megaphone deriving -- on the one hand -- from their symptomatic relation to much broader fears/fantasies of agency occasioned by contemporary technodevelopmental churn and -- on the other hand -- from their rhetorical congeniality to neoliberal assumptions that serve incumbent corporate-military interests. That is why I devote so much of my own attention to their exposure and analysis.

Friday, October 19, 2007

Superlativity Is an Affront to Common Sense

"To be natural is such a difficult pose to keep up."--Oscar Wilde

Upgraded and adapted from Comments:

Giulio wrote: I wish to... focus on concrete things... [Y]ou are... focusing on abstract issues characterized by endless questioning of others' hidden motivations and "identity". As I see things, I am focused on outcomes, and you are focused on identity.

This comment would be, frankly, flabbergasting if I weren't so completely used to hearing it at this point from my Superlatively Technocentric critics.

Let me ask you a few questions: Are you denying you have unconscious motives that can sometimes be more intelligible to others than to yourself? Are you denying that the normative pressures exerted by social formations impact conduct and perception and can look different depending on whether one is inside or outside that formation? Are you denying that all progress, including technoscientific progress, is shaped by social, cultural, and political factors -- or do you believe that progress is simply a matter of the socially indifferent accumulation of useful techniques and technical devices? Are you denying that the predominant vocabularies through which people talk about "progress" and "development" are shaped by the demands of corporate and military competitiveness, and that this might matter to people concerned with matters of sustainability, democracy, and social justice? Are you denying that argument actually consists of more than simply the delineation of logical propositions and the relations of entailment that obtain between them -- and that the force of argument will depend on metaphor, framing, and schematism, on the citation of customary topics, tropes, generic conventions, and so on? Do you deny that cults exist, do you deny that they can be studied in ways that generate useful observations, and that these observations can illuminate authoritarian organizational structures beyond the marginal cult form -- fundamentalist religiosity, for example, charismatic art movements, marginal subcultural identity politics, populist politics with mass-mediated celebrity leaders, and so on? Do you deny that some people exhibit what is popularly described as True Belief, largely insulated from criticism and driven by defensive forms of identification? Do you deny that there is a difference between forms of identity politics and other modes of politics, or do you simply deny that these differences make a difference? Do you deny that stress and fear of technodevelopmental change can activate irrational passions at the level of personal and mass psychology and that it actually pays to attend to these effects?

These are all perfectly concrete questions as far as I can tell, to all of which I devote considerable attention here at Amor Mundi, and they are none of them more "abstract" than most of the scientific and "technical" claims that exercise the attention of Superlatively Technocentric people -- they are certainly no more abstract than the sociological claims that preoccupy some who would pooh-pooh cultural readings like mine as "literary" or "abstruse," and -- honestly -- these concerns are easily quite as "concrete," you can be sure, as "hardboiled" predictions about the arrival any time soon of Robot Overlords or digital immortality for embodied human people.

These concerns I've listed aren't the only things in the world that repay our attention, certainly, but to dismiss these sorts of issues as some of my interlocutors sometimes seem to do is really just too obtuse for words. It's hard even to know how to respond to such attitudes sometimes. There is something painfully insufferable about the smug dismissals of "abstractness" in favor of "concreteness" one hears from facile reductionists. And there is something painfully, self-defeatingly anti-intellectual about the incessant attribution to whole modalities of intelligent expressivity of a disqualifying "effeteness," "eliteness," "muddle-mindedness," "abstruseness," all in a self-promotional effort to market one's own reductionism as paragon. (To the inevitable idiotic response that I am doing the same thing to the scientifically-minded: I actually deeply respect and affirm scientific rationality, but know that to force it to apply everywhere is as distortive of its dignity and worth as to deny it application everywhere.) I just can't tell you how incessant testaments to these attitudes get things really off on the wrong foot for somebody like me.

Michael Anissimov, for example, offers up this helpful statement in the opening gambit of a response to me elsewhere in the Moot: "A stumbling point is the sometimes unnecessary verbosity of your writing."

What am I supposed to say to that? Hey, fuck you? Honestly!

"Unnecessary verbosity?" Tell me, Michael, are all your words precisely the necessary ones? Necessary to whom, for what purpose? What if I chose some of my words because they delight me? Because they strike me as funny, because I like the sound of them? You got a problem with that?

What kind of self-image drives the choice to frame a discussion with moves like that in the first place? If I may offer up one of my questionable "armchair psychologizing" readings, I'll admit that there are times when I find myself getting the eerie feeling that advocates for AI try to write like Spock or Colossus would (or think they are doing so, since such projects inevitably fail: conceptual language is always metaphorical, argumentative moves are always as figurative as literal) as some kind of amateur performative tribute to the post-biological AI they believe in but which never seems to arrive on the scene as such. Be the change you want to see in the world... only now with robots!

This isn't a clinical diagnosis, of course, this isn't even an accusation. I know this seems to be hard for you guys to grasp, but I don't really believe it's "true" that you are performatively compensating for the interminably failed arrival of AI by acting out in this way. I obviously don't have remotely enough in the way of data to affirm this "theory" as a warranted belief or anything. Treat it as you would surely treat comparable glib conjectures offered up in actual social conversation, as a kind of placeholder for the real perplexity I often find myself feeling when confronted with the curious claims and moves of technocentric folks. Clearly, something rather odd is afoot that makes you guys talk and act this way. Who knows what finally it's all about? It certainly seems to have a measure of defensiveness and projection in it. Treat my proposal of one kind of provisional explanation that seems to hit the pitch and scale of the phenomenon at least as an utterance of the form: What is up with you guys?

Be that as it may, just to be clear: I do like writing words that are "unnecessary." There are plenty of things I like doing that are unnecessary. I am unafraid of the dimensions of experience and expression that are not governed only by necessity. I can cope quite well with such necessities as I must, but I don't want to live in the (to me) gray world where everything gets framed in those terms. That's a dull, ugly world, a robot world, as far as I can tell. Stop crowing about how not verbose, not abstract, not esthetic you guys are if you want to impress me or the people who likely come here for pleasure or provocation. It's not at all a winning strategy in a place like this.

Giulio continues: I am telling you... forget that I am a God Robot Cultist who engages in Superlative Technology Discourse and believes that the Eschaton will upload him to a Techno-Heaven, and let's join forces to achieve the common objectives.

Are you shitting me? I am quite happy to ignore the Robot Cult thing if we are in conversation about whether it's better to wipe your ass with two-ply or three-ply, but if we are talking about documenting and shaping the ongoing discursive formations through which technodevelopmental social struggle is articulated, then I'm not going to ignore the Robot Cult thing. It's relevant. It's WAY relevant. It's epically WAY relevant.

Giulio again: "I think different identities should not matter much as long as there is agreement on outcomes."

Agreement on outcomes? I want to democratize technodevelopmental social struggle, I don't aspire to prostheticized Omni-Predicated transcendence of finitude, incarnation, and contested plurality. Dude, five thousand mostly white North Atlantic sf/popular futurology enthusiasts joined a social club they enjoy (which is perfectly fine and possibly charming) and decided they were a "movement" (which is rather silly but still mostly fine).

Now, when you sometimes seem to want to compare your own discomfort as a "transhumanist-identified person" because people are inquiring into the politics of your organizational structure, the leading metaphors in your canonical literature, and the peculiar entailments of your arguments with the suffering of persecuted ethnic, religious, and gender minorities (minorities consisting of millions of people with long histories of documented abuses leaving palpable legacies generating irrational stigmas with which we all must cope as people who share a diverse world), it is, to be blunt, fairly flabbergasting.

When, on the other hand, you try to pretend that your status as a "transhumanist-identified person" is a triviality like eye color or preferring boxers to briefs, this is nearly as flabbergasting: inasmuch as if it were true you wouldn't be going on about it so endlessly, even in the face of my ongoing critique, but also because it's so palpably like saying a person's actually-affirmed identification as a Scientologist, a Mormon, a Freemason, or a Republican is something that one should pay no attention to in matters of urgent concern directly shaped by that affiliation and in ways that tend to subvert one's own ends. Who in the hell would ever think like that? What are you talking about?

"[B]eing or not being a cuckold has nothing to do with politics. It is something else, that belongs to a separate and non overlapping part of life. Same with sexual, religious, and football team preferences: these things have NOTHING to do with politics."

Well, actually, all of these things are enormously imbricated in politics. But bracketing all that, just sticking to the specifically wrong thing you are saying instead of all the many more generally wrong things you are saying here: If you happen to be a cuckold I agree that I couldn't care less about that if we are arguing about a current technoscientific issue (unless attitudes toward infidelity come into that issue in some way, obviously). If you see current technoscientific issues as stepping stones along a path that eventuates in the transcendentalizing arrival at Superlative Omni-predicated outcomes in which you have invested deep personal faith, then it would be almost unthinkably stupid for me not to care about how these curious attitudes of yours might derange your sense of the stakes at hand, the significance of the issues, your estimation of likely developments in the near to middle term, the terms, frames, and formulations that will appeal to you and have intuitive force, and so on.

Giulio once more: "Even worse tactically, because you wish to disqualify many people who would be in _your_ camp when concrete and practical matters are discussed."

I don't have the power to "disqualify" people. I say what I think is right, what I think is wrong, what I think is possible, what I think is important, and what I think is ridiculous. And I won't stop.

In general, I am not looking for a "camp" to find my way to in any case, and while I am a big believer in political organizing, I understand p2p politics well enough to know how foolish pan-movements, party machines, and camp mentalities should look to anybody who claims "tactical" wisdom in contemporary politics.

Wednesday, October 17, 2007

Superlative Outcomes Versus Open Futures

Giulio Prisco was the Executive Director of the World Transhumanist Association and is a member of the Board of Directors of the Institute for Ethics and Emerging Technologies (where I am still Human Rights Fellow). About my recent discussions of Superlative Technology Discourses and Sub(cult)ural Futurisms, Prisco offered up this response in Comments:
I do not see any actual argument being made, besides "I do not like those who waste their time discussing superlative technologies," which is not an argument and is not relevant to anything concrete anyway... but your position still sounds to me like "you cannot play in my team because you are black / Xian / gipsy / gay / ... [insert some other thing unrelated to whatever the team does]". Don't you see that the only possible result can be alienating potential allies? If you do not want someone in your football team because he is Xian, at some point he will say "well then, screw your team". Is that what you want?

Of course, I have actually written thousands upon thousands upon thousands of words explaining why identifying with idealized outcomes yields sub(cult)ural movement-politics that, I have argued and offered up reasons to believe, [1] tend to derange practical foresight, [2] tend to facilitate True Belief and hierarchical political organizations, [3] tend to support elite-incumbent political interests (even when advocated by people who explicitly renounce such politics), and [4] tend to frustrate actual diverse democratic practices of stakeholder deliberation over technoscientific change.

But, in spite of all that, Prisco doesn't see any argument happening here. What he sees is me saying "I don't like you." My position still sounds to [him] like "screw your team."

In other words, with perfect behavioral predictability according to the terms of the very critique of mine he is supposedly responding to (or at least discounting), Giulio Prisco's own sub(cult)ural futurism (he is a transhumanist-identified person) makes him literally unable to see anything but personal defamation where I am offering up structural critical analysis. Points [1] through [4] above look to him more like racist, sexist, or homophobic slurs than analysis of technodevelopmental politics.

That is to say, he is sitting there at the keyboard in the livid glow of the screen, calmly, relentlessly, interminably proving one of my key points. No, Prisco can't see anything but sub(cult)ural politics in what I say, even though I am saying nothing of the kind. Sub(cult)ural Futurism is the organizational cul-de-sac he seems to be caught in, it's the lens that appears to be organizing his political intuitions here.

That is, of course, the very problem under discussion. It seems to me Prisco and other transhumanist-identified or Singularitarian-identified or Extropian-identified people (among countless other marginal but symptomatic sub(cult)ural movement-formations organized at the site of "technocentricity" or "futurism") all want to be on the "Team" that holds the Keys to History.

What I want, very much to the contrary, is for all the stakeholders to technoscientific change with whom I share the world to have a say in public decisions that affect them.

These are radically different political paradigms, it seems to me. This is a matter of something like Pan-Movement Politics as opposed to Democratic Politics. One of the key registers in which this difference plays out is in an opposition between Superlative Outcomes and Open Futures as a guiding organizational aspiration. (For Pan Movements, by the way, see Hannah Arendt's The Origins of Totalitarianism.)

My critique is not about Liking or Not Liking particular people. Perfectly likeable people can misunderstand politics at a fundamental level. I don't want "allies" for some Ideal-Futurological Implementation "Team," I want a world of Peers collaborating and contending with me in democratic and emancipatory technodevelopmental social struggle toward open unpredictable futures. Education, agitation, and organizing are not the same thing as whomping up enthusiastic "members" for a would-be Pan-Movement.

I think the reason I have such trouble playing this discursive game with Sub(cult)ural Futurists is that we seem to be playing on two separate boards altogether and I don't think they have quite grasped this yet.

Monday, October 15, 2007

Positive/Negative

I've had another flurry of comments and e-mails castigating the recent "negativity" of Amor Mundi. I find myself perplexed and exasperated as I always do when this happens.

I do critical theory. I criticize. What do people honestly expect of me?

Look, the road to hell is paved with good intentions. We all know this. What must look like "negativity" from the perspective of such hellbound good intentions may well do the only measure of "positivity" available to us at a given historical juncture, causing us to pause in our hellward tumble, causing us to interrogate after our real motives, causing us to question the extent of our knowledge, the soundness of our assumptions, the breadth of our survey of our options.

I like "positive" programmatic proposals as much as the next person and have offered up my share of them here on Amor Mundi, heaven knows, but I also know well the value of criticism and know that criticism is critical (in more ways than one).

I think it has to be mostly pampered and privileged people who would bridle at a confrontation with "negativity" in the world today, such as it is. But, quite apart from that, it seems to me that those who would always only relentlessly "accentuate the positive" -- as the old song goes -- risk being forced by that commitment into an uncritical acceptance of the definitive terms of the status quo.

Radical futurists may boggle at the suggestion that they are accommodating rather than overturning convention when they go off on their marvelous arias about digital immortality, utility fog paradises, traversable wormholes, prostheticized superbodies, and so on. But just as ancient orders would legitimate their rule through their presumed connection to some more glorious past, modern orders would legitimate their rule through their presumed connection to some more glorious future our commitment and obedience will award us. Both gestures are ultimately as conservative as can be.

No progress that is more providential than collective in its organizing assumptions is a truly progressive progress: If one's vision of the future is imagined to express as well as to arrive through the machineries of the same corporate-militarist competitiveness and prioritization of parochial profit-taking as animates current incumbent interests, then this vision is retro-futural however many chromed curved surfaces it promises us. If incumbent interests are content to imagine their disproportionate powers intact after the arrival of some radical vision of the future, you can be sure that vision is not so radical as all that. Worse, if such incumbents are among those most eager to invest in and handwave into the airwaves about this vision, it is a safe bet that this vision is positively retro-futural, even if it is swarming with nanobots and ubiquitous computation.

The only way to be sure that one is always positive and never negative is to acquiesce to the terms of the status quo -- in its animating essentials rather than its distracting superficial details -- and acquiesce so absolutely that one neither threatens it nor even looks outside of it for insight, solace, or pleasure.

Now, here's the thing. All of this I have been saying is true enough as far as it goes, but it is also true that the roads to better futures will be no less paved with good intentions than are the roads to hells. Those who would influence the future to democratic and emancipatory ends will surely be the ones who will have inspired the imagination of the next generation -- and inspiration is much more the work of the hopeful than it is the scarred or the scared.

What seems important to me is to grasp that both of these platitudes are powerfully true, and that they are not easily reconciled (indeed, you haven't really grasped the paradox involved until you grasp its abiding difficulty). The too reflexive and too unreflective ascription of "negativity" to criticism seems to me too complacent and too anti-intellectual by far to conduce to the benefit of genuinely progressive ends.

One cannot know whether it will be the critical or the programmatic intervention that will be the more positive one, the one that will enlist the imagination and the work that enables or builds the next bit of road to a better place. What will count as the positive or the negative from the perspective of the better place we might get to will likely differ from what registers as "positive" or "negative" where we stand now.

Friday, October 12, 2007

Democracy at the Transhumapalooza

An article has appeared online from New Scientist discussing the recent conference of the World Transhumanist Association, held not long ago in Chicago.

I am, of course, a longstanding and sometimes rather acerbic critic of the transhumanist movement (see here and here and here, for example), despite the fact that I count a few self-identified transhumanists -- like the democratic socialist feminist Buddhist green James Hughes -- among my friends and colleagues.

To the extent that transhumanism is simply a kind of literary salon culture of enthusiasts for science fiction and futurological blue-skying, I suppose many of these are relatively harmless and amiable folks with considerable geek-to-geek allure for the likes of me. But there are also, I fear, a significant number of transhumanist-identified persons who fall very squarely into the teeth of the critiques of Superlative Technology Discourses and Sub(cult)ural Futurisms I've been hammering at for the last few weeks here at Amor Mundi.

There is no need to rehearse the whole critique again (I've Immaculately Collected many of the relevant documents in a Superlative Summary, for those who are interested), but I will say that it is rather intriguing to observe how readily the facile Superlative frames boil to the surface in a popular piece like this.

Among the charges of my critique that receive the most pouting and stamping from Superlative Technocentrics are these:
[1] That there is a tendency to separatism and alienation in their marginal sub(cult)ural identification with particular projected technodevelopmental outcomes;

[2] That their exhibition of self-appointed technocratic elitism on questions of technodevelopmental decision-making tends to devalue democratic deliberation;

[3] That their regularly reiterated fantasy that "progress" is simply a matter of a socially indifferent and autonomous accumulation of technical capacities tends to yield linear, unilateral, elite-imposed models of technoscientific change;

[4] That their further belief that such accumulation can deliver (and even, in some versions, will inevitably deliver) quite on its own, emancipatory powers and abundances so profound as to permit us to circumvent the impasse of stakeholder-politics altogether, tends to feed and to feed on anti-political and anti-popular attitudes more generally.

I argue that, taken together, these tendencies render Superlative Technocentricities and Sub(cult)ural Futurisms absolutely anti-democratizing in their assumptions, their ends, and their overall thrust -- so much so as to subvert the democratizing ends of even those Superlative Technocentrics who consciously espouse more progressive ideals -- and also provide powerful rhetorical rationales congenial to neoliberal/neoconservative outlooks and the incumbent corporate-militarist interests.

Needless to say, the nicest and most well-meaning folks among the Transhumanist-types and Singularitarians and Technological Immortalists (poor things) take extreme umbrage at these charges when I make them. How ferociously they deny the very idea that their politics might be reactionary (what I sometimes describe as "retro-futurist").

Why, I voted for John Kerry! one incensed young Singularitarian once took pains to reassure me. Why, proposing such structural correlations between these broader attitudes toward technoscientific change and one's effective political orientation is nothing but sloppy armchair psychoanalyzing, another fulminated. This is nothing but name calling, nothing but ad hominem attack, nothing but character assassination, chimes an interminable chorus of Superlative Technocentrics to my every political and cultural intervention into their discourse. You'll be hearing from my lawyer! threatened another (true story).

Let's see how some of them speak for themselves, then, shall we? (Skip my parenthetic commentary if you just want the voice of the journalist and the various Transhumanists quoted in the piece. And do follow the link to the whole piece, since I'm only excerpting it here.)
[T]ranshumanists have been attacked for jeopardizing the future of humanity. What if they ended up creating a race of elite superhumans bent on enslaving the unmodified masses, or unwittingly programmed an army of self-replicating nanobots that would turn us all into grey goo?

(For me the former problem seems far likelier than the latter; indeed, something like the former looks to me like a straightforward programmatic entailment of the typical transhumanist advocacy of "enhancement" (treated as prosthetic perfectionism in a direction that everybody already agrees on in advance -- though certainly we don't, or else should agree on in a way that squares with the parochial transhumanist takes on these matters, so that it doesn't matter much that we don't), rather than a more technoprogressive advocacy of universal (informed, nonduressed) consensual non-normative healthcare provision. But, more to the point, if you think these typically hyperbolic journalistic questions are just so much provocation and noise (I dismissed them as such at first myself), it is rather astonishing to observe the eagerness with which some transhumanists seem to want to encourage such worries. For more of what I mean, simply read along.)
In 2004, political scientist Francis Fukuyama singled out transhumanism as the world's "most dangerous idea."

(As we all know, of course, Francis Fukuyama has a certain experience with marginal sub(cult)ures bent on imposing their extreme and anti-democratic worldview upon an unwilling and unready world, having carried water for years for the Neoconservative Death-Eaters, a klatch of mostly white assholes utterly convinced they were the smartest people in the room as they engineered world-scale disaster after world-scale disaster in plain sight of an appalled world.)
Now this small-scale movement aims to go mainstream. WTA membership has risen from 2000 to almost 5000 in the past seven years, and transhumanist student groups have sprung up at university campuses from California to Nairobi.

(Think carefully about those numbers. At best, transhumanism, so-called, remains an utterly marginal sub(cult)ure, especially given the scale of its presence in the media landscape, not to mention the authority some of its rhetorical framings of technodevelopmental issues have come to assume in that landscape.)
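(If you want to take those numbers seriously in a back-of-the-envelope way, here is a minimal Python sketch, assuming only the article's own figures of 2,000 members then and almost 5,000 now; the million-member target is just a hypothetical benchmark for what "going mainstream" might minimally mean:

import math

# New Scientist's figures: roughly 2,000 to 5,000 WTA members over seven years.
start, end, years = 2000, 5000, 7

# Compound annual growth rate implied by those figures.
rate = (end / start) ** (1 / years) - 1
print(f"annual growth: {rate:.1%}")  # about 14% per year

# Even if that rate were somehow sustained indefinitely, reaching a
# hypothetical million members would still take decades.
print(f"years to a million: {math.log(1_000_000 / end) / math.log(1 + rate):.0f}")  # about 40

Four more decades of uninterrupted exponential recruitment to reach even that modest benchmark: marginality, in other words, as far as the eye can see.)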
It has attracted a series of wealthy backers, including Peter Thiel, co-founder of PayPal, who recently donated $4 million to the cause, and music producer Charlie Kam, who paid for the Chicago conference.

(Needless to say this development is the furthest thing from evidence of the development with which this paragraph began, "this small-scale movement aims to go mainstream." If it is true, as I think it is, that the technocratic elitism so prevalent in the transhumanist movement is especially congenial to incumbent interests with a stake in assuring the powerful that ordinary people are too ill-informed to be entrusted with a say in the developmental decisions that affect them, and if the techno-centric emphasis in transhumanist attitudes toward social problem-solving is especially congenial to incumbent interests with a stake in assuring a continued flow of money always in the direction of corporate-military research (welfare for the already-rich stealthed, of course, as "national defense" and "economic development"), then we can expect quite a bit of money to find its way eventually into transhumanist and quasi-transhumanist organizations. It remains to be seen how the more democratically-minded transhumanists will cope with this development. My expectations are shaped by the sense that money, attention, and success provide plenty of material for rationalization, and hence I think that the democratic transhumanists will, over the long term, prove to have provided respectability, credibility, and cover for the more reactionary elements in their movement, while corporatist support assures that these reactionary elements direct the movement. It may interest people to know that Peter Thiel serves on the Board of the Hoover Institution and is co-author of a book, The Diversity Myth: 'Multiculturalism' and the Politics of Intolerance at Stanford.)
Other well-known speakers are also on the roster, including… Ray Kurzweil, the group's unofficial prophet.

(Not all groups have "prophets," official or non-official. Just saying.)
They don't look very threatening, though perhaps not very diverse either. Most WTA members are white, middle-aged men…

(Mm-hm.)
AI theorist Eliezer Yudkowsky also believes the movement is driven by an ethical imperative. He sees creating a superhuman AI as humanity's best chance of solving its problems: "Saying AI will save the world or cure cancer sounds better than saying 'I don't know what's going to happen'." Yudkowsky thinks it is crucial to create a "friendly" super-intelligence before someone creates a malevolent one, purposefully or otherwise. "Sooner or later someone is going to create these technologies,"

(So, by God, let it be MEEEEE. Hard to believe this paragraph began with the claim that "the movement is driven by an ethical imperative." What kind of ethical imperative, one wonders, drives you into a Robot Overlord arms race with unspecified antagonists for control of the world, exactly?)
The theme of saving humanity continues with presentations on... raising baby AIs in the virtual world of Second Life, as well as surveillance tactics for weeding out techno-terrorists and a suggested solution for the population explosion: uploading 10 million people onto a 50-cent computer chip.

(All Very Serious, indeed.)
More immediate issues facing humanity, such as poverty, pollution and the devastation of war, tend to get ignored.

(Hm. Fancy that.)
I discover the less egalitarian side to the transhumanist community

(You mean, even less egalitarian?)
when I meet Marvin Minsky, the 80-year-old originator of artificial neural networks and co-founder of the AI lab at the Massachusetts Institute of Technology. "Ordinary citizens wouldn't know what to do with eternal life," says Minsky. "The masses don't have any clear-cut goals or purpose." Only scientists, who work on problems that might take decades to solve, appreciate the need for extended lifespans, he argues.

(Lovely.)
He is also staunchly against regulating the development of new technologies.

(Shall I pretend to be shocked?)
"Scientists shouldn't have ethical responsibility for their inventions, they should be able to do what they want," he says. "You shouldn't ask them to have the same values as other people."

(Marvin Minsky, ladies and gentlemen.)
The transhumanist movement has been struggling in recent years with bitter arguments between democrats like [James] Hughes and libertarians like Minsky. Can [unofficial "prophet," Ray] Kurzweil's keynote speech unite the opposing factions?

(Let me reiterate, in my view these factions are easily reconciled: the democrats need only be tolerated so long as they provide respectable cover for the reactionaries among them, meanwhile both sides foreground their shared technological enthusiasm to the exclusion of their real substantive political differences -- so divisive! so negative! -- with the consequence that the incumbent corporatist interests that overwhelmingly shape technological discourse always actually benefit without having to fight for this outcome, the reactionaries get something for nothing and the democrats get nothing in exchange for everything. Hey, what's not to like?)
On the final day of the meeting… Kurzweil offers a few possible solutions to today's global dilemmas, such as nano-engineered solar panels to free the world from its addiction to fossil fuels.

(He means, surely, that we must all struggle to fund and regulate and educate and promote such technodevelopmental outcomes in the public interest? That we must all learn from our many historical mistakes that we have to attend to the actual diversity of stakeholders to technodevelopmental change? That the distribution of costs, risks, and benefits of technoscience must better reflect that diversity, else "development" become a short-sighted parochial environmentally unsustainable socially destabilizing project of planetary precarization, exploitation, confiscation, and violence? Right? Right?)
But he is opposed to taxpayer-funded programmes such as universal healthcare as well as any regulation of new technology, and believes that even outright bans will be powerless to control or delay the end of humanity as we know it.

(Wow, I guess he did provide a way to unite the democratic and reactionary transhumanists, after all! What a relief!)
"People sometimes say, 'Are we going to allow transhumanism and artificial intelligence to occur?'" he tells the audience. "Well, I don't recall when we voted that there would be an internet."

(Ray Kurzweil, ladies and gentlemen. Unofficial "prophet" of the transhumanist movement. It should go without saying, by the way, that those of us fighting for Net Neutrality, p2p, a2k, FlOSS, and so on are engaged in precisely the kind of democratic social struggle that is being denigrated in the glib dismissal of the very idea that "we voted that there would be an internet.")

Remember the hyperbolic questions with which the piece began? "What if they ended up creating a race of elite superhumans bent on enslaving the unmodified masses, or unwittingly programmed an army of self-replicating nanobots that would turn us all into grey goo?" Look no further for the source of worries such as these. They arise from the self-congratulatory and anti-social pronouncements of transhumanists themselves.

Monday, October 08, 2007

Superla-Pope Peeps

Eliezer Yudkowsky, Co-Founder and Research Fellow of an outfit called the Singularity Institute for Artificial Intelligence (SIAI), has recommended a short text of his posted on the blog Overcoming Bias as his response to my exchanges with Aleksei Riikonen and others in the Singularitarian Agony posts from the last couple of days (just scroll down a bit for these if you like). I responded to Eliezer's recommended essaylet on a discussion list we occasionally nip in and out of, and then he and I sniped at one another very entertainingly a few circuits around the roller-rink. I regret that I can't re-post all that edifying snark -- since it seems to me content posted to that list is probably offered up with the expectation of a certain privacy I will respect -- but since the original blog-post Yudkowsky recommended as the corrective to my position is freely available I do think I can let you in on my response to that.

In Yudkowsky's text he proclaims that "what scares me is wondering how many people, especially in the media, understand science only as a literary genre."

Since he is offering up this thesis as one that applies to me, I can only assume he wants to paint me as a person who "understand[s] science only as a literary genre." Think carefully about what it means that Yudkowsky either can't see why he is wrong here or thinks it is a good idea to peddle such a false impression.

Certainly I will attest that there are key aspects of technoscientific change that are well describable in cultural, social, and political terms. Because my training and temperament suit me to talk about just those aspects, and because I address my arguments to technocentric folks who often underestimate just those aspects to the cost of sense, it makes enormous sense that those are the aspects I devote much of my attention to in my writing. But it is quite a leap from here to the conclusion that I think of science "only as a literary genre." I certainly don't believe any such thing. I challenge Yudkowsky to unearth such a reductive claim in my writing. One place to look might be my essaylet Is Science Democratic? Another, Technoethical Pluralism.

I daresay Yudkowsky should be able to grasp, at least in principle, that even a scholar whose training best suits him to discuss the literary dimensions of the texts in which scientists seek to communicate their findings, their inspiration, their significance, and so on will still be able to distinguish those dimensions from the properly scientific dimensions (the collective practices of invention, testing, debate, and publication through which descriptions are offered up as candidates for warranted belief in matters of prediction and control, and which satisfy the criteria hacked out over millennia to select among these candidates -- testability, coherence, concision, saving the phenomena, and so on) also exhibited in these texts.

Later in his post Yudkowsky offers up this bit of homespun wisdom:
"Probably an actual majority of the people who believe in evolution use the phrase 'because of evolution' because they want to be part of the scientific in-crowd -- belief as scientific attire, like wearing a lab coat. If the scientific in-crowd instead used the phrase 'because of intelligent design,' they would just as cheerfully use that instead -- it would make no difference to their anticipation-controllers."

I would be very interested to know more about the empirical experiences on the basis of which Yudkowsky has "induced" this "probability." I guess he imagines people in general too dull-witted to affirm instrumental beliefs warranted by consensus science because these have proved especially good in facilitating the satisfaction of their instrumental wants. Yudkowsky seems to think the poor benighted masses choose surgeons over faith healers so regularly only because they hope to be taken for the cool kids in lab coats in University Science Departments. It's an, uh, interesting theory.

I must say that the very notion of a "scientific in-crowd" so compelling in their allure that a dim-witted "Mob" parrots their utterances so as to be mistaken for them seems to me at once so flabbergastingly off-the-reservation and so baldly elitist that I laughed out loud upon reading it.

As an aside, I do agree that sometimes utterances that appear to affirm as warranted what are in fact absurd would-be instrumental descriptions ("An angry Patriarchal Creator-God with a gray beard sitting in a great Stone Chair exists," "the Earth was created in seven days," "my team won because Jesus likes us better than our opponents," and so on) are really functioning as social signals, indications of moral identification/disidentification. And so I think, for example, that the vast numbers of Americans who, according to the worrisome reports one sometimes reads, affirm idiotically Creationist beliefs in surveys are often actually using these affirmations of belief to signal to the surveyor and to an imagined audience something like the statement "I try to be a good person" -- where this correlates to membership in some church they haven't attended more than sporadically most of their adult lives. I suspect that misinterpretations of these reports often make sensible secularists and atheists panic unnecessarily about the state of this actually promisingly secular multicultural American society. But the proof is in the pudding: it seems to me that when people's circumstances demand actual instrumental beliefs their conduct is more pragmatic and scientifically informed than not, and this is scarcely because they want to be like idealized scientists. I think people in general are capable and sensible precisely to the extent that they have access to information and protection from duress. In other words, I really do think people are much more rational than they might seem to be when they report belief in superstitious nonsense and that they exhibit more rationality than they are often credited for in their actual instrumental conduct. It seems to me that many Singularitarians and Technocratic Elitists (perhaps Yudkowsky among them, given the above) have formed the opposite impressions, for whatever reasons, and to their cost.

Yudkowsky continues on:
"I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they'll suddenly decide it's 'pseudoscience.'"

I encounter people who are quite willing to entertain the notion that the streets are filled with actual bipeds, but who "suddenly" decide it's "pseudoscience" if you introduce the notion of perfectly bipedal angels into the discussion. Why, it's like a world gone mad!

You know, the world is littered with actually existing calculators and computers, many of which can already "outperform" normative exhibitions of human intelligence in some of its conventional registers. But, I hate to break it to Yudkowsky et al., there is nothing like an entitative post-biological superintelligence even remotely on the horizon, and so I think skeptics might be forgiven their "sudden" skepticism on this score in light of this alone. Surely it isn't exactly only "suddenly" that sensible people have suggested that worries about the imminent arrival of Robot Overlords might be a wee bit skewed and pseudo-scientific?

But, of course, things are much worse for our Singularitarians than that on my view: Not only are there common or garden variety computers everywhere but Robot Gods nowhere, there is also a long history of people who sound exactly like Eliezer Yudkowsky making fun of people like me (and much smarter and better informed than the likes of me) for doubting their hyperbolic pronouncements about the proximate arrival of world-transforming AI.

And, although you would never guess it from the withering contemptuousness Singularitarians sling at their skeptics, the AI guys are, so far, always, always wrong and the skeptics, so far, always, always right. While it's true that the endlessly reiterated failure of the Strong Program (let alone the, shall we say, Sooper-Strong Program Singularitarians have come somewhat perversely to prefer in the aftermath of these failures) doesn't earn anybody the deduction that the Program will always so fail (I don't hold that view myself, as it happens), one wonders why even caveats or a small measure of modest good humor fail to arrive after all this humiliation. This is especially perplexing given that Singularitarians seem to want to pass so desperately as serious scientists -- when scientists are, after all, among the most scrupulously caveating folks I know.

Beyond all this, as I have been reiterating here for the last few days, the concepts to which Singularitarians make regular recourse in their discourse look deeply inadequate to me when they're not actually actively incoherent. I still await the sense that these folks take the particular embodiment of actually existing consciousness seriously, or register more of the actual diversity of capacities and performances that get described as "intelligence," or show more awareness of the histories of the value-discourses they appropriate when they start to go off on "friendliness," "unfriendliness," and the rest, not to mention demonstrating a little more awareness of widely understood and respected critiques of technological determinism, reductionism, autonomism, unintended consequences, and so on. When they start barking about transcendental inevitabilities and number-crunching the Robot God Odds so solemnly I sometimes suspect I'll be left waiting for these little niceties forever.

About the skeptics, most of whom, we are assured, are sloppy-headed "literary types," Yudkowsky tosses his head with dismissive scorn:
"It's not that they think they have a theory of intelligence which lets them calculate a theoretical upper bound on the power of an optimization process. Rather, they associate strongly superhuman AI to the literary genre of apocalyptic literature."

It's simply breathtaking that Yudkowsky seems to think Singularity skeptics actually need to re-tool their understanding of intelligence to address the weird and wacky things Singularitarians claim in the name of their scarcely digested notions. The whole point is that if your discourse proposes that "optimization" spits out a Robot Overlord you are not making a claim that is exclusively or even primarily located under the heading of "computer science" -- it is doing work for you that is more like the work of apocalyptic literature. Network security discourse can cope with recursivity without becoming a cult or keeping a charismatic (?) guru in gloves and fans. It is the investment of projected (in more than one sense of that word) malware with entitative motives that takes us into the realm of quasi-religiosity, of collective dread and wish-fulfillment, stealthed as scientific objectivity. This latter investment renders Singularitarian discourse vulnerable to Superlative Critiques, which recognize the cultural iconography that is getting mobilized, to whatever ends, and expose that mobilization. The confusion of warranted consensus scientific description with cultural mythology is not ours when we discern it, but belongs to the Superlative discourse whose partisans depend on it.

Of course, in light of all this, the Superlative Technocentrics' endless self-righteous attribution of ignorant fashionable nonsensicality to anybody who refuses to give in to the reductionist triumphalist bulldozer of scientism (which is not, remember, consensus science itself, but always an opportunistic appropriation of science for parochial political and cultural ends) is just adding insult to idiocy.

The fantasy that the silly humanities people can be browbeaten into silence because self-appointed high priests of science (by which, flabbergastingly enough, Yudkowsky seems to mean people like himself) can scrawl their equations on a chalkboard all the while oblivious to the metaphors that do so much of the actual heavy lifting in their arguments, and the fantasy that self-appointed technocratic elites with weird skewed priorities can deny the diverse demands of the actual stakeholders to ongoing and emerging technoscientific change all for "their own good," are the real frauds that demand exposure at this time, in my view. Sensible and informed "literary types" have a role to play, alongside sensible and informed "science types," in the exposure of these deranging technodevelopmental discourses.