Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, July 13, 2007

Pragmatic Science, Not Priestly Science

Upgraded and adapted from Comments:

Very good and long-standing Friend of Blog Martin Striz objects to my too-pragmatic characterization of scientific practice and belief-ascription. He begins by quoting this passage from an earlier post of mine:
[L]iterally every technoscientific and technodevelopmental outcome is historically specific, arriving on the scene through historically specific articulations (via disputation, social struggle, vicissitudes in matters of funding, and regulation, serendipities in the lab, eddies in communication, fashion, education, and so on) all of which are in some measure accidental and any of which could easily have been otherwise. These outcomes settle -- to the extent that they manage the feat -- into institutional, factual, normative, customary formations that are quite likely, however natural(ized) they seem for now, to become otherwise in the future through the very same sorts of articulative forces, as these incessantly sweep a world shared by free, intelligent, creative, expressive, provocative, problem-solving peers.


Martin objects: While it is true that social and political elements shape the kind of science that gets done, it is not true that these elements alter scientific truths -- at least not if science is done right.

But it seems to me that "science getting done" is social and political practices, which Martin quite properly concedes himself by the end of that very sentence of his. By "scientific truths" I presume Martin means beliefs justified by the protocols and criteria that have emerged over centuries of practice as the ones most apt to deliver powers of prediction and control to those who are guided by them. But that's what I would mean by "scientific truths," too. There is nothing proper to science that is threatened by my formulations.

However, temptations to rewrite warranted scientific belief in the image of a finality, a certainty, or a non-human autonomy to which humans are beholden -- what amounts in my view, in essence, to a rewriting of pragmatic consensus science in the image of a priestly religiosity we would all do better to leave behind us -- fare less well once one takes up my attitude. That's what I like about it. I prefer my science useful, rather than authoritative.

He continues on: [T]he history of science is replete with new truths that go against intuition and sociopolitical norms.

We are definitely in agreement there. As I said at the end of the passage he himself quoted: "[C]ustomary formations... are quite likely, however natural(ized) they seem for now, to become otherwise in the future through the very same sorts of articulative forces, as these incessantly sweep a world shared by free, intelligent, creative, expressive, provocative, problem-solving peers."

None of that denies the force of conservative resistances and the costs they exact on innovators, of course. Thomas Kuhn provided some of the classic theoretical and ethnographic discussions of this sort of thing (not that I agree with everything he says, I'm just assuming most folks interested in these questions will know Kuhn well enough to slot in his insights here, so that I don't have to).

Later, Martin proposed an amplification of this point (about which he seems to think my own pragmatic characterization of science forces me into a position of disagreement with him):

We know [science] works when we are surprised. If you were never surprised by a discovery, if a discovery never challenged your expectations, if every discovery simply validated the accepted paradigm, then science wouldn't be doing its job. It would just be another avenue to validate our biases. But science has continually surprised us.

When people are free they will surprise us. That is because human beings are different from one another, responsive to one another, and endlessly inventive.

But Martin proposes a different explanation for such surprises: As long as they stuck to the data, the truths would be the same. Good science is sticking to the data.

Now, as far as I can see, one can easily account for human inventiveness without making any recourse at all to what looks to me like the rather curious notion that the Universe has preferences in the matter of the words humans use to describe it, imagined "preferences" that cause some of us to describe some words not just as the best justified candidates for instrumental belief presently on offer but as mysterious incarnations of some suprajustificatory externality that they will call, for want of anything better on hand, "the data" (NB: the data -- even if, as so often happens, the description identified with this exclusive the is subsequently supplanted by another better justified claim because our knowledge changes or our priorities change).

I'm pretty sure that this takes us to the heart of our dispute here. Martin writes: [T]here is an objective external reality which preceded the arrival of humans, and by necessity, any sociopolitical bias.

This claim seems to me pretty harmless, as far as it goes, but also completely uninteresting. I'll cheerfully agree with it, for whatever it's worth. But it seems to me that the moment one tries to use this claim as a hat-hook on which to hang an "explanation" as to why the criteria we use to justify scientific beliefs actually deliver the goods of prediction and control with which we entrust them, well, then this claim begins to do untold mischief as far as I'm concerned. That's when an otherwise innocuous claim becomes a crowbar that opens the door and invites the Priests in to spoil the party.

I don't see how one could deny that there is an ineradicable gap between the world and the words that describe it without losing one's competence as a language user. Which means that of course I agree with Martin and everybody else who uses language on the utterly noncontroversial "question" of the existence of "external reality." But, not to put too fine a point on it, I don't believe that anybody on earth, outside a few lunatics and substance experimentalists on a real bender, possibly, really does deny this truism. This is so, I'm afraid, even when people are offering up theoretical accounts of justification that enrage epistemological realists.

But neither do I see any difference between those who would use this grammatical truism to invest declarations by proper scientists with sociocultural autonomy or with a decisive authority (prioritized, say, over ongoing democratic stakeholder disputes over the actual diversity of desired outcomes), and those who would invest declarations by Priestly authorities about the existence of supernatural beings and their presumed wants in respect to human conduct and historical outcomes with a similar priority over democratic contestation.

Martin offers up, as an olive branch of sorts, that, whatever the solid stolid "truths" disinterred by science, [t]here is always room for interpretation.

But, for me, that's what scientific practices of warranted description are. I agree that we properly distinguish scientific or instrumental modes of reasonable belief-ascription from other modes (moral, ethical, esthetic, political, and so on). Indeed, offering up such distinctions was right there at the heart of the discussion to which Martin was objecting in the first place. But just as strongly as I agree that we rightly distinguish the forms, ends, and warrants of instrumental rationality from other modes, I disagree that we rightly prioritize any one mode of rationality over the others or seek to reduce the terms of any one indispensable mode to the others. It is just such a project of hierarchization or reductionism which is likely afoot when those who prioritize instrumentality like to distinguish "it" from interpretation, or propose that "it" can be socioculturally biased or, somehow, "not" in some enormously fraught sense.

Martin proposes that: Nuclear physics can be used to power cities or blow them up. That doesn't change the truths of nuclear physics.

But certainly our understanding of these differences does indeed change at least some "truths" of nuclear physics, if only (at the very least) because people's attitudes to blowing up cities (or, as it happens, recommendations that we "power" them in pointlessly dangerous and poisonous ways even if better, renewable alternatives are available for such purposes) will inspire programs of funding, regulation, publication, education, and research that will nudge scientists in different directions than they otherwise would with the consequence that the candidates for belief that they propose and which subsequently will pass justificatory muster will differ from one another.

But quite apart from all that, the actual thrust of my argument in the original post that inspired Martin's marvelous interventions was less to discuss the social, cultural, and political articulation of scientific and justificatory practices at the general level that has preoccupied this particular, and very rewarding, series of exchanges than simply to insist that technoscientific progress is articulated by such factors. I daresay he would probably agree with that point even if he doesn't particularly like my insistently historicized and pragmatic characterization of justified scientific belief. But if I had focused on just that aspect, I wouldn't have had occasion to delve into these other wonderfully interesting topics!

Tuesday, July 10, 2007

"The Singularity Won't Save Your Ass"

Musing on a somber topic (the fraught "intersection of crisis-response thinking and transformational-future thinking"), but in a playful mood, Friend of Blog Jamais Cascio has proposed a bumper sticker that pithily captures an attitude I endorse heartily myself: "Singularity is not a Sustainability Strategy." He intends this as a jokey-serious technocentric analogue to the better known bumper sticker "The Rapture Is Not an Exit Strategy."

"Singularity," for those who don't know about it, is a term that refers to a constellation of overlapping Superlative Technology theses, almost always taking on an apocalyptic or transcendentalizing coloration, in which we are told that technodevelopment is accelerating (or even that this acceleration is itself accelerating) in ways that demand the circumvention of democratic intuitions about the usefulness of public deliberation, the value of precautionary recommendations, the necessity of conventional regulatory oversight, or a proper developmental responsiveness to the actual diversity of stakeholders to development. (Elsewhere, I have described such technocentric and futurological acceleration fixations as "Accelerationalism.")

Usually, for "Singularitarians," these claims about acceleration are tightly coupled to claims about the imminence of artificial intelligence and likewise, immediately thereafter, the imminence of artificial superintelligence. "Singularity" is conventionally used to describe the imagined Event when "post-biological superintelligence" arrives on the scene, although it sometimes is used to describe the aftermath of a basic historical discontinuity (usually directly connected to some version or other of the "imminent post-biological superintelligence" claim), beyond which it is impossible to make reasonable predictions because technoscientific change is happening too quickly and too radically for "mere human" intelligence to grasp.

Qualms about the anti-democratic entailments of Singularitarian variations of the Accelerationality Thesis are usually addressed either by investing the imagined "post-biological superintelligence" with Salvational properties or by insisting that the "urgency" of the threat of "post-biological superintelligence" justifies "reluctant" elite technocratic decision-making in the "interests of all" as they see it for now, which amounts to investing the Singularitarians themselves with Salvational properties.

In the very interesting comments occasioned by Jamais's post, Friend of Blog Michael Anissimov bravely makes the Singularitarian case to a skeptical audience, suggesting that
smarter beings would think up better ways to run a sustainable civilization: using byproduct-free manufacturing, space-based solar panels, fusion power, etc. Being smarter, they'd also be able to invent and implement such things much faster than the most competent humans would, and also discover technologies we cannot yet even imagine. That's the power of increased intelligence.

Needless to say, we already have the intelligence to do such things, even without Robot Gods to pray to, especially when we realize the power of interpersonal collaboration to solve shared problems (a power that is renewed and reinvigorated by planetary peer-to-peer networked formations). But also, and one would expect this to be just as needless to say, it is not the lack of intelligence but the impediment of the heartless, greedy, short-term, anti-democratic politics of incumbency that stands between humanity and the solution of many of our conspicuous shared problems. We have intelligence already, and "more intelligence" (especially not the too-reductively instrumental vision of intelligence Singularitarians tend to confine themselves to, a tendency among True Believers in the Strong Program of AI that I deride as "Artificial Imbecillence") is not going to break the impasse of diverse stakeholder politics in a shared and finite world. Technology is neither "neutral" nor "autonomous," and technoscientific developments, properly so-called, are always articulated by politics and culture.

Without good democratic politics even Robot Gods would not "save us." With better democratic politics, human ingenuity and benevolence could be marshaled further in the service of shared ends, so that we no longer feel the need to "be saved" in the first place.

Another comment, from "Kim," pointed out that since Singularitarians, like most people beguiled by Rapture rhetorics, are responding to deep fears and fantasies, passions that are not entirely rational when all is said and done, it is probably counterproductive to point out to them that they are being unreasonable or to patiently enumerate more reasonable alternatives. This may be true, but I do think it is important to add that the brand of irrationality peddled by Singularitarians has powerful resonances with the intuitions of neoliberals and neoconservatives. Some neoliberals and neoconservatives have already started to drift in a broadly Singularitarian, or at any rate technocentric, direction to save their anti-democratic agenda in the face of its current catastrophic culmination (Thomas Friedman, Glenn Reynolds, and William Safire are pretty good examples of this in my view), and it is hard for me to see how the majority of neoliberals and neoconservatives could long resist the lure of Singularitarian arguments that
[1] provide a rationale for the circumvention of democratic politics
[2] provide a rationale for increased investment in military R&D
[3] make recourse to tried and true strategies of fearmongering
[4] appeal to Old School conservative intuitions about the special Destiny of the West
[5] appeal to Old School conservative intuitions about the indispensability of elite Gatekeepers of the True Knowledge
[6] appeal to more newfangled conservative intuitions about "spontaneous order" and "natural(ized) markets."

Given all this, it seems to me there is every reason to expose the unreasonableness and even ridiculousness of Singularitarian doctrine, even if its more passionate partisans will likely turn a deaf ear. The danger is not so much the True Believers among Singularitarians themselves (who would, of course, be properly jailed for terrorism or scarily hired by the military were they to edge even a nanometer in the direction of actually creating the silly Robot Army that so preoccupies their fancy), but the cynical incumbent interests and corporate-militarist formations that are desperately scouting about these days for a new rhetoric to bamboozle people with as they continue their reckless crime spree.

By the way, the title for this post is also taken from the Comments to Jamais's post. It is an alternate Bumper Sticker to Jamais's own suggestion proposed by "Stefan Jones." It cracked me up.

Friday, July 06, 2007

Accelerationality

Give them a minute or two, and you can almost always count on contemporary technophiliacs to find their way these days to the topic of "acceleration." Catch post-Vingean and Kurzweilian Singularitarians in a full froth of starry-eyed pontificating and you might even get them to carry on about the "acceleration of acceleration" itself. Where once they might have been inclined to enthuse about "Future Shock," our own sublime technophiliacs are now more apt to fixate on its kissing cousin, accelerating change.

Now, nothing could be more obvious than the fact that "technology" isn't actually monolithically doing anything at all, not accelerating, converging, transcending, flatlining, line dancing, or any such thing. Quite apart from the sticky issue of the conspicuous historical constructedness of what will count from moment to moment or from culture to culture as technology in the first place (pacemakers? eyeglasses? writing? language? posture? the familiar? the unfamiliar? the unfamiliar as it gradually becomes familiar?), the simple fact is that even conventional technodevelopmental trajectories (weaponry, medicine, couture, communication, transportation, and so on), and their even more complex subcategories, all change and "develop" along jittery, complex, weirdly interimplicated pathways, devices building upon prior discoveries and discourses, subject to geographically and historically diverse normative, infrastructural, economic, and regulatory pressures, morphing, vanishing from use completely, some developments accelerating breathtakingly, indeed, but others just as conspicuously decelerating, stalling altogether, and so on.

What is it that gives so many technophiliacs the cocksure certainty right about now that ours is an era of technoscientific acceleration? And by what sleight of handwaving do technophiliacs leap from factual (and hence, presumably, falsifiable) claims about accelerating rates of technoscientific change to what looks like a re-emergence of the somewhat disreputable Old School faith in a Providential "natural" progress, but this time with eschatology's foot on the accelerator pedal?

And how can technophiliacs square their impressions with the equal assurance of so many technophobes who, looking upon the very same planetary scene, instead fret, precisely to the contrary, that humanity is careening ever more speedily toward a stymieing cliff's edge of catastrophic climate change, Peak Oil, currency collapse, idiotically reinvigorated arms races, water wars, sparring feudalisms and incommensurable fundamentalist faiths, and so on, discerning in every sociocultural corner symptoms of exhaustion, dashed hopes, failed imagination, political backtracking, sprawling monoculture, and broken technological promises? My point is certainly not to affirm the dark visions of catastrophists over the triumphalists, especially to the extent that disasterbatory discourse functions so often to facilitate what Al Gore has decried as the doubly defeatist leap directly "from denial to despair," with the common denominator of passive acquiescence where what is needed is urgent action. But given this clash of epic handwaving and doomsaying aren't our technophiliacs even given pause in their faith in an acceleration without pause?

It is troubling to observe with what regularity accelerationalism seems to figure itself as not just change but as change with a direction, a trajectory, and hence an acceleration metaphorically gifted with a kind of momentum, with a limit shattering, objection-bulldozing urgency all its own, a spectacle of acceleration culminating curiously often in the proposal, offered up in the tonalities of ecstatic pleasure, that "humanity" is revving up to some kind of "escape velocity" or New Age "transcendence." All this, rather than simply discerning a deepening disruption of customary formations and attitudes and lifeways under technoconstituted pressures of global trade practices, information and communication networks, weapons proliferation, climate change, and so on, let's say, in which one finds a prompt to institutionally facilitate deliberation, regulation, accountability, or collaborative problem-solving.

Of course, among the first casualties of an accelerationalist cast on these complexities is the "nostalgic" "romantic" "sentimental" notion that "we" (the ones with our hands on the buttons) have time to consult with everyone affected by change in the face of the racing pulse of that change, as it hyperbolically accelerates. And to any qualms that may arise in the faithful from this glib circumvention of democratic ideals accelerationalism is quick to assume the face of a fatality, at once a tidal wave too brutal to brook consultation as well as a Providential thread promising ends that justify the means (nothing nostalgic, romantic, or sentimental in that move).

There is, to be sure, a palpable fingernails-on-a-chalkboard ugliness in the fact that these frantic futurists seem too well pleased to permit a vanishingly small minority of actual humans to stand for "humanity at large" in these angelic aspirations of theirs. But this is, after all, one of the oldest tricks in the humanist book, inasmuch as the universal rights and values of humanism have rarely extended to all humanity, and rarely even to the humanists' own servants -- and so I suppose it is unfair to hold this sort of parochialism against folks who are likely to insist quite happily that they are posthumanists after all. Nevertheless, it does seem that it might repay scrutiny to ponder why such urgent and recurrent escapist imagery issues from folks otherwise so firmly convinced of their this-worldly hyperrealism. At what point do we begin to wonder whether accelerationalist discourse isn't just another secular upwelling out of America's deep puritanical, misanthropic, apocalyptic psychic archive but, you know, this time with robots? Be that as it may, it's hard to shake the sense that the ecstatic partisans of accelerationality might just be the last stubborn holdouts caught up in the dot-eyed irrational exuberance of the Long Boom rhetoric of the late 1990s. It's as if they're still under the impression that the Concorde is still taking cocktail orders and flying at supersonic speeds, that the ISS is right on schedule and Moonbase Alpha around the corner, that HAL really truly is about to make his softspoken appearance on the scene, that even now a bottle of safe cheap super rejuvenation pills is on its way to their doorstep via FedEx, and that those quintessential "California Ideologues" the Extropians really were right to promise us back in the day that Heinleinian anarcho-capitalists in labcoats would deliver us all from death and taxes any minute now.

I can't help but wonder whether many privileged technophiliacs aren't simply mistaking as this "acceleration" they keep going on about what amounts in fact to the increased economic volatility brought about by the ongoing financialization of nearly all commerce in North Atlantic societies over three decades of neoliberal policy prescription. The rhetorical project of neoliberalism ("free market" ideology), to which so many technophiliacs remain wedded to this day, amounts after all to a systematic redescription of conditions of general insecurity conjoined with elite wealth concentration as though it represented the desirable condition of individual liberty. And it is easy to see how the attribution to the increasing volatility and stress of neoliberal societies of the progressive directionality of "acceleration" would nicely comport with such a project of redescription, just as it is easy to see how the attribution of a "creative destruction" that unleashes "spontaneous order" to what is in fact the neoliberal dismantlement and privatization of publicly accountable social welfare programs would likewise be a boon to this sort of neoliberal PR.

Needless to say, all of these moves are bleakly familiar ones by now across the canon of mainstream futurist literature. To be sure, volatility might very well look like an accelerating thrill ride to its relative beneficiaries (or to those who are adequately insulated by privilege to identify with the beneficiaries whether they number among them or not, strictly speaking). But what this volatility looks like to the overabundant majority of people on earth is, of course, better known as precarity.

Thursday, July 05, 2007

Technoprogressive Discourses As Against Superlative Technology Discourses

(Continued, after a fashion, from my last post)

I describe my politics as technoprogressive, which means quite simply that I am a progressive (that is to say, a person of the democratic left) who focuses quite a large amount of specific attention on the problems and promises of ongoing and upcoming technoscientific change and on the current and emerging state of global technodevelopmental social struggle.

A technoprogressive vantage differs from the usual technocentric vantages (for example, conventional technophilic or technophobic vantages) in its insistence that the instrumental dimension of technoscientific progress (the accumulation of warranted scientific discoveries and useful applications) is inextricable from social, cultural, and political dimensions (which variously facilitate and frustrate the practice of science which eventuates in discovery and warrant through funding, regulation, inducement, education and which distributes the costs, risks, and benefits of technodevelopmental change in ways that variously reflect or not the interests of the diversity of stakeholders to that change).

A technoprogressive vantage differs from the usual progressive vantages (for example, conventional varieties of democratic left politics) in its assumption that however desirable and necessary the defense of and fight for greater democracy, rights, social justice and nonviolent alternatives for the resolution of interpersonal and institutional conflicts, these struggles are inadequate by themselves to confront the actually existing quandaries of contemporary technological societies unless and until they are accompanied by further technoscientific discovery and a wider, fairer distribution of its useful applications to support and implement these values. In a phrase, technology needs democracy, democracy needs technology.

Given my avowed technoprogressivity, for whatever that's worth, some of my loyal technocentric readers will have been surprised to see that technoscience questions fail to figure particularly prominently among the urgent political priorities that I catalogued in the blog post just before this one.

To be fair, you can find glimpses of a concretely technoprogressive (as opposed to just conventionally progressive) agenda in some areas of my current priorities list. I do worry more than one usually finds among progressives about what I take to be the neoliberal perversion of the rhetoric of technoscientific progress. That is to say, I discern a strong and terribly worrying tendency in neoliberalism to figure technodevelopment as culturally autonomous, socially indifferent, and apolitical (even anti-political) in a way that is analogous to, and likely codependent with, the asserted and palpably false spontaneism of its "market" naturalism, which connects to its anti-democratic hostility to any form of social expression that is not subsumed under already constituted exchange protocols, and which encourages scientisms and reductionisms that impoverish the intellectual reach of culture and unnecessarily exacerbate, to the cost of us all, the ongoing crisis of incommensurability between pragmatic/scientific vocabularies of reasonable warrant and "humanistic"/normative vocabularies of reasonable warrant (roughly, Snow's famous "Two Cultures").

Beyond all that, there are other tantalizingly technoprogressive glimpses here and there among the priorities I laundry-listed the day before yesterday. I insisted on the subsidization of research and development and adoption of decentralizing renewable energy sources, I put quite a bit of stress on the need to mandate fact-based science education in matters of sex and drug education to better ensure the scene of informed, nonduressed consent to desired prosthetic practices, I prioritized aspects of the technoprogressive copyfight and a2k (access-to-knowledge) agendas, and I included concerns about cognitive liberty and access to A(rtificial) R(eproductive) T(echnologies) and safe abortion among my priorities. And as always, there is my ongoing enthusiasm for emerging p2p (peer-to-peer) formations like the people-powered politics of the critical, collaborative left blogosphere and the organizational energies of the Netroots.

But it remains true that these sorts of technoprogressive concerns are, for the most part, couched in that post in legible mainstream democratic-left vocabularies and are "subordinated" to (it would be better to say they are articulated primarily through recourse to) legible democratic-left priorities.

What this will mean to the "transhumanists" and "futurists" among my regular readership is that you would likely never guess from a glance at my diagnoses of the contemporary sociocultural terrain that my politics were inspired (as they were) in any measure by the tradition of radical left technoscience writing (including Marx and Bookchin), and utopian left science fiction (like Kim Stanley Robinson).

This is not because I advocate a "stealth" technoprogressive agenda, as some of my critics like to presume, but because I know (as some of them seem not to do) that technoprogressive concerns are grounded absolutely in democratic left politics as they respond to current threats and as they opportunistically take up current openings for promising change.

Not to put too fine a point on it: There is nothing technoprogressive about abstract commitments to nonviolence or social justice that are indifferent to actually existing violence and injustice, just as there is nothing technoprogressive about a focus on distant futures over actually existing problems. This is so because (for one thing among others) the actual futures we will find our way to will be articulated entirely by and through our actual responses to the actual present rather than by abstract commitments to or identifications with imagined futures.

These are the concerns that lead me to the heart of my topic today. There are many technophiles who seem to me to be entranced by what I call "Superlative Technology Discourse." To get at what this claim means to me let me offer up a rudimentary map.

I would distinguish what are sometimes described as "bioconservative" as against "transhumanist" outlooks as the equally undercritical (and in some cases outright perniciously uncritical), broadly technophobic and technophilic responses to bioethical questions. It seems to me that the "bioconservative" versus "transhumanist" distinction is coming now to settle into a broadly "anti-" versus "pro-" antagonism on questions of what is sometimes described as "enhancement" medicine. These attitudes often, but need not, depend on a prior assumption of a more general "anti-" versus "pro-" antagonism on questions of "technology" in an even broader construal that is probably better described as straightforward technophobia versus technophilia.

It is useful to pause here for a moment and point out that the things that get called "technology" are far too complex and their effects on the actually existing diversity of stakeholders to technoscientific change likewise far too complex to properly justify the assumption of any generalized attitude of "anti-" or "pro-" if what is wanted is to clarify the issues at hand where questions of technodevelopmental politics are concerned.

Indeed, I would go so far as to say that assuming either a generalized "pro-tech" or "anti-tech" perspective is literally unintelligible, so much so that it is difficult not to suspect that technophobic and technophilic discourses both benefit from the obfuscations they produce in some way.

My own sense is that whatever their differences, both technophobia and technophilia induce an undercritical and hence anti-democratizing attitude toward the complex interplay of instrumental and normative factors that articulate the vicissitudes of ongoing technoscientific change over the surface of the planet and over the course of history.

More specifically, I would propose that both technophobia and technophilia comport all too well with a politics of the natural that tends to conduce especially to the benefit of incumbent elites: Technophobia will tend to reject novel interventions it denotes as "technology" into customary lifeways it denotes as "nature" (especially those customs and lifeways that correspond to the interests of incumbent elites); Meanwhile, technophilia will tend to champion novel interventions into customary lifeways, indifferent to the expressed interests of those affected by these interventions, in the name of a progress the idealized end point of which will be said to actualize or more consistently express some deeper "nature" (of humanity, rationality, culture, freedom, or what have you) toward which development is now always only partially obtaining, a "nature" in which, all too typically, once again, one tends to find a reproduction of especially those customs and lifeways that correspond to the interests of incumbent elites.

The distinction of Superlative Technology discourses as against Technoprogressive discourses will resonate with these antagonisms of technophilia as against technophobia, of transhumanisms as against bioconservatisms, but it is not reducible to them: Much Transhumanist rhetoric is Superlative, but not all. Many so-called transhumanists are uncritical technophiles, but not all (the often indispensable socialist-feminist technology writer James Hughes is the farthest thing from an uncritical technophile, for example, despite his unfortunate transhumanist-identification). Nevertheless, I do think it is often immensely clarifying to recognize the tendency (again, it is not an inevitability) of technophilia, superlativity, and transhumanism to enjoin one another, and to apply this insight when one is struggling to make sense of particular perplexing claims made by particular perplexing technophiles.

Superlative Technology discourse invests technodevelopmental change with an almost Providential significance, and contemplates the prospect of technoscientific change in the tonalities of transcendence rather than of ongoing historical transformation.

There are many variations and flavors of Superlative Technology discourse, but they will tend to share certain traits, preoccupations, organizing conceits, and rhetorical gestures in common:

(First) A tendency to overestimate our theoretical grasp of some environmental functionality that will presumably be captured or exceeded by a developmentally proximate human-made technology.
(a) Artificial Intelligence is the obvious example here, an achievement whose predicted imminence has been so insistently and indefatigably reiterated by more than a half century's worth of technophiles that one must begin to suspect that a kind of Artificial Imbecillence seizes those who take up the Faith of the Strong Program. (This Imbecillence observation connects to, but does not reduce to, Jaron Lanier's important charge that one of the chief real-world impacts of the Faith in AI is never the arrival of AI in fact, but a culture among coders that eventuates in so much software that disrespects the actual intelligence of its users in the name of intelligent "functionality.")

My objection to the endlessly frustrated but never daunted Strong Programmites will be taken by many of the AI Faithful themselves to amount to a claim on my part that intelligence must then be some kind of "supernatural" essence, but this reaction itself symptomizes the deeper derangement imposed by a Superlative Technology Discourse. Just because one easily and even eagerly accepts that intelligence is an evolved, altogether material feature exhibited by actually existing organisms in the actually existing environment one has not arrived thereby at acceptance of the Superlative proposition that, therefore, intelligence can be engineered by humans, that desired traits currently associated with intelligence (and not necessarily rightly so) can be optimized in this human-engineered intelligence, or that any of these hypothesized engineering feats are likely to arrive any time soon, given our current understanding of organismic intelligence and the computational state of the art.

(b) One discerns here the pattern that is oft-repeated in Superlative Technology Discourse more generally. Enthusiasts for "nanotechnology" inspired by the popular technology writings of K. Eric Drexler (whose books I have enjoyed myself, even if I am not particularly impressed by many of his fans) will habitually refer to the fact that biology uses molecular machines like ribosomes that partake of nanoscale structures to do all sorts of constructive business in warm, wet physiological environments as a way of "proving" that human beings know now or will know soon enough how to make programmable machines that partake of nanoscale structures to do fantastically more sorts of constructive business in a fantastically wider range of environments. Like the gap between the recognition that intelligence is probably not supernatural (whatever that is supposed to mean) and the belief that we humans are on the verge of crafting non-biological superintelligence, the gap between the recognition of what marvelous things ribosomes can do and the belief that we humans are on the verge of crafting molecular-scaled self-replicating general-purpose robots is, to say the least, considerably wider than one would think to hear the True Believers tell it (I'll grant in advance that one can quibble endlessly about exactly how best to essentially characterize what Superlative Nanotechnology would look like, since the width of the gap in question is usually wide enough for all such characterizations to support my point).

(c) Technological Immortalists go this handwaving away of the gap between capacities exhibited by biology and capacities proximately engineerable and improvable by human beings one better still, handwaving away the gap between an essentially theological concept exhibited by nothing on earth and a presumably proximately engineerable outcome: an overcoming of organismic aging and death. Since even most "Technological Immortalists" themselves will grant that were we to achieve a postulated "superlongevity" through therapeutic intervention we (and this is a "we," one should add, that can only denote those lucky few likely to have access to such hypothesized techniques in the first place, with all that this implies) would no doubt remain vulnerable to some illnesses, or to violent, accidental death nonetheless, it is clarifying to our understanding of Superlative Technology Discourse more generally to ask what on earth it is that makes it attractive for some to figure the desired therapeutic accomplishment of human longevity gains through the rhetoric of "immortality" in the first place.

I am quite intrigued and somewhat enthusiastic about some of the work of the current patron saint of the Technological Immortalists, Aubrey de Grey, for example, but must admit that I am completely perplexed by the regular recourse he makes himself to the Superlative Technology Discourse of the Technological Immortalists. The resistance to de Grey's SENS research program and its "engineering" focus on what he calls the Seven Deadly Things in some quarters of biogerontological orthodoxy looks to be pretty well described in classical Kuhnian terms of incumbent resistance to scientific paradigm shifts. What is curious to me, however, is that, at the level of rhetoric, were one to embrace the "bioconservative" Hayflickian ideal of a medical practice conferring on everybody on earth a healthy threescore and ten years, or even the 120 years some lucky few humans may have enjoyed, this would be little distinguishable in the therapeutic effects it would actually likely facilitate (as a spur to funding, publication, and so on) from those facilitated by the "transhumanist" ideal of technological immortality. Either way, one sponsors research and development into therapeutic interventions into the mechanisms and diseases of aging that are likely to transform customary expectations about human life-span and the effects of aging on human capacities, but neither way does one find one's way to anything remotely like immortality, invulnerability, or all the rest of the theological paraphernalia of superlongevity discourse. Certainly, looking at the concrete costs, risks, and benefits of particular therapeutic interventions through an immortalist lens confers no clarity or practical guidance whatsoever here and now in the world of actually mortal and vulnerable human beings seeking health, wellbeing, and an amelioration of suffering.
The superlativity that gauges a stem-cell therapy either against a dream of immortality or against a nightmare of clone armies or Designer Baby genocide seems to me, once again, to leap the gap between actually possible and merely remotely possible engineering, a leap far more likely to activate deep psychic resources of unreasoning dread and wish-fulfillment than to clarify our understanding of the actual stakeholder risks and benefits that confront us now or may soon.

I leave to the side here for now as more coo-coo bananas than even all the above the curious digital camp of the Technological Immortalists, who metaphorically "spiritualize" digital information and then pretend not to notice that this poetic leap isn't exactly a scientific move, though clearly it's got a good beat that some people like to dance to, and then conjoin their "Uploaded" poem to the Strong Programmatic faith in AI I discussed above and use this wooly discursive cocktail to overcome what often looks like a plain common or garden variety hysterical denial of death. (And, no, such a denial of death is not at all the same thing as loving life, it is not at all the same thing as championing healthcare, it is not at all the same thing as wanting to live as long and as well as one can manage, so spare me the weird robot-cult accusations that I am a "Deathist" just because I do fully expect to die and yet somehow still think life is worth living and coming to meaningful terms with in a way that registers this expectation. By the way, guys, just because you're not "Deathists" don't make the mistake of imagining you're not going to die yourselves. You are. Deal with it, and then turn your desperately needed attentions to helping ensure research and universal access to life-saving and life-extending healthcare practices -- including informed, nonduressed consensual recourse to desired non-normativizing therapies -- to all, please.)

And so, to this (First) tendency to overestimate our current theoretical grasp of some environmental functionality captured and then exceeded by a developmentally proximate human-made technology, usually in consequence of some glib overgeneralization from basic biology, a tendency I claim to be exhibited in most varieties of Superlative Technology discourse, I can add a few more that you can glean from the discussion of the first tendency, in some of the examples above:

(Second) A tendency to underestimate the extreme bumpiness we should expect along the developmental pathways from which the relevant technologies could arrive.

(Third) A tendency to assume that these technologies, upon arrival, would function more smoothly than technologies almost ever do.

And to these three tendencies of Superlative Technology Discourse (which might be summarized by the recognition that warranted consensus science tends to be caveated in ways that pseudoscientific hype tends not to be) I will add a fourth tendency of a somewhat different character, but one that is especially damning from a technoprogressive standpoint:

(Fourth) A tendency to exhibit a rather stark obliviousness about the extent to which what we call technological development is articulated in fact not just by the spontaneous accumulation of technical accomplishments but by always actually contentious social, cultural, and political factors as well, with the consequence that Superlative Discourse rarely takes these factors adequately into account at all. This tendency is obviously connected to what Langdon Winner once described as the rhetoric of "autonomous technology."

Actually, it would be better to say that this sort of obliviousness to the interimplication of technoscientific development and technodevelopmental social struggle inspires a political discourse masquerading as a non-political one, provoking as it does all sorts of antidemocratic expressions of hostility about the "ignorance of the masses," or expressions about the "need" of the "truly knowledgeable" to "oh-so reluctantly circumvent public deliberation in the face of urgent technoscientific expediencies," or simply expressions of exhaustion from or distaste about the "meddling interference of political considerations" over technoscientific "advance" (a concept that itself inevitably stealthily accords with any number of disavowed political values, typically values accepted uncritically and actively insulated from criticism by the very gesture of apoliticism in which they are couched, and all too often values which turn out, upon actual inspection, to preferentially express and benefit the customs and privileges of incumbent elites). Notice that I am proposing here not only that technocentric apoliticism and antipoliticism is actually a politics, but more specifically, that this highly political "apoliticism" will tend structurally to conduce always to the benefit of conservative and reactionary politics. This is no surprise since the essence of democratic politics is the embrace of the ongoing contestation of desired outcomes by the diverse stakeholders of public decisions, while the essence of conservative politics is to remove outcomes from contention whenever this threatens incumbent interests.

Quite apart from the ways in which Superlative Technology Discourse often incubates this kind of reactionary (retro)futurist anti-politicism it is also, in its worrisome proximity to faithful True Belief, just as apt to incubate outright authoritarian forms of the sub(cult)ural politics of marginal and defensive identity -- for much the same reasons that fundamentalist varieties of religiosity do. In these cases, Superlative Technophiliacs substitute for the vitally necessary politics of the ongoing democratic stakeholder contestation of technodevelopmental outcomes, a "politics" of "movement building" in which they struggle instead to corral together as many precisely like-minded individuals as they can in an effort to generate a consensus reality of shared belief sufficiently wide and deep to validate the "reality" (in the sense of a feeling more than an outcome) of the specific preferred futures with which they personally identify. Note that this is not anything like the practical politics that seeks to mobilize educational, agitational, and organizational energies to facilitate developmental outcomes with which it is sometimes equated by its partisans, but a politics that ultimately contents itself with the material (but moral, not political) edifications of membership, belonging, and identity.

From all of the above, you will notice that Superlative Technology Discourse likes to focus its attention on a more "distant" than proximate future, but it is crucial that this projected futural focus not be pitched to such a distance as to become the abstract impersonal future of Stapledonian or Vingean speculative opera. Rather, Superlative Technology Discourse fixes its gaze on "futures" just distant enough to fuzz away the historical messiness that will inevitably frustrate their ideal fruition while just proximate enough to nestle plausibly within arm's reach of our own lifespan's grasp, especially should one take up the faith that the storm-churn of ongoing technoscientific development is something we can take for granted. (And on this question it is key to recognize that there is literally no single word, no article of faith, more constantly on the lips of the faithful of the various Churches of superlative technology than that scientific development is "accelerating" -- one even regularly hears the arrant foolishness that "acceleration is accelerating" itself, the reductio ad absurdum of futurological accelerationalizations.)

All of this conveniently and edifyingly distant but not too distant focusing out-of-focus constitutes a quite distinctive glazing over of the gaze, since, like all faithfulness, it yields the manifold esthetic pleasures of bland wish-fulfillment and catharsis, but unlike conventional faithfulness, it can, for the technoscientifically underliterate at any rate, get away with billing its blinders as foresight, its unfocus as focus, its faith as superior scientificity. In an era of quarterly-horizoned future-forecasting and hype, this sleight of handwaving futurology is a kind of catnip, and in an era of technoconstituted planetary disruption, danger, and despair it is, for some starry-eyed technophiliacs and some bonfire-eyed luddites, well nigh irresistible.

Superlative Technology Discourse aspires in the direction of the omni-predicates of conventional theology (omnipotence, omniscience, omnibenevolence), and makes especially great play over its histrionic abhorrence of all "limits" (an utterly and straightforwardly incoherent notion, of course, but that's not the sort of trifle that superlative techies are apt to worry their pretty little soopergenius heads about) but this is a worldly theology whose incoherent platitudes are voiced in the harsh high-pressure tonalities of the Bible-salesman rather than those of the more modest curate. What superlative technology discourse is selling are the oldest Faustian frauds on the books: quite literally, immortality, fantastically superior knowledge, godlike (or, I should say, we're all nerds here, X-Men-like) superpowers, and wealth beyond the dreams of avarice.

H.P. LaLancette, author of the, in my opinion, always witty, usually right on, occasionally a bit frustrating blog Infeasible ("Refuting Transhumanism (So You Don't Have To)"), has posted any number of incisive critiques (and perhaps a few less than incisive ones) against Superlative Technology Discourse as it is expressed in the public arguments of some transhumanist-identified technophiles. In one post, we are treated to this argument:
The way to attack Transhumanism is to show that it is infeasible, which is a lot different than impossible… The difference between impossible and infeasible is money and time. It is possible to build a 747 in your backyard, but it isn't feasible. Why not? Well for a number of boring reasons like: How could you afford it? Where will you get all the materials? How long will it take you? How will you lift the wing to attach it to the fuselage? Etcetera… No one will ever prove Drexler's and de Grey's ideas to be impossible. But it is possible to show that they are infeasible which means we simply don't need to take them seriously … .

I find a lot to sympathize with in this statement, but I want to focus instead on where I might disagree a little with LaLancette's emphasis (while remaining very much a sympathetic admirer). For me, the facile absurdities of Superlative Technology Discourse are not, on their own terms, sufficiently interesting to attract my sustained attention (the opportunities to skewer idiocy are rich and wide, after all, if that is the sort of thing that floats your boat). I care about Superlative Technology Discourses precisely because I care about the way they come so widely to substitute for or otherwise derange what looks to me to be perfectly reasonable and in fact incomparably urgently needed technoprogressive stakeholder discourses on actual and emerging quandaries of nanoscale toxicity, actual and emerging quandaries of molecular biotechnology, actual and emerging quandaries of network and software security, actual and emerging quandaries of genetic, prosthetic, cognitive, and longevity medicine, actual and emerging quandaries of accountability of elected representatives to warranted scientific consensus, and so on. I think that there are enormously useful contributions to be made by people like Mike Treder and Chris Phoenix at the Center for Responsible Nanotechnology: so long as they manage to disarticulate their project from the Superlative Technology Discourse of the Nanosantological admirers of Drexler who invest a phantasized imminent nanotechnology with the theological trappings of near-omnipotence or the utopian trappings of an effortless superabundance that will circumvent the political impasse of finite resources confronting the infinite desires of our planetary peers.
I think that there are enormously useful contributions to be made by people who take projects like Aubrey de Grey's SENS program seriously: so long as they manage to disarticulate their work from the hyperbolizing and hystericizing discourses of Technological Immortalism, as, for example, many bioethicists who talk about the proximate benefits and costs of longevity medicine in terms like those of Jay Olshansky's "Longevity Dividend" are beginning to do.

In other words, it seems to me too quick to simply dismiss Drexler or de Grey as only infeasible, inasmuch as what these figures are up to or what they symptomize will differ according to whether or not one reads them through the lens of Superlative Technology or through the lens of technodevelopmental social struggle. There are two ways technocentric thinkers can help to ensure that Superlative Technology Discourse prevails to the cost of any democratizing politics of technodevelopmental social struggle: either to fail to provide the necessary critiques of these hyperbolizing, depoliticizing, obfuscatory Superlative Technology Discourses, or to relinquish the field of emerging and proximately upcoming technoscientific change to these Superlative Technology Discourses by failing to provide legitimately technoprogressive alternatives to them.

There are, to be sure, many variants of Superlative Technological Discourse to be found in the self-appointed Futurological Congress of corporate forecasters, digirati, fanboys, and smug technocrats. The three conspicuous, especially illustrative, and I think particularly damaging variations of Superlative Technology Discourse on which I have lavished my attentions today -- namely, the Technological Immortalists, the Singularitarians, and the Nanosantologists -- are in rich and abundant company. But it must be said that all technocentric discourses (among them, very much my own) seem to be prone to dip into and out of superlativity, every now and then, even when they are too sensible to stay there for long. A vulnerability to superlativity seems to be an occupational hazard of technocentricity, however technorealist it tries to be. Given the extent to which technodevelopmental discourse has been articulated hitherto almost entirely in light of the specific urgencies of neoliberal corporate-militarist competitiveness, it is hard to see how it could be otherwise. Grasping this vulnerability, and understanding its special stakes, seems to me to be an insight without which one is little likely to formulate truly useful, truly democratizing technoprogressive analyses or campaigns in the first place.