(Continued, after a fashion, from my last post) I describe my politics as
technoprogressive, which means quite simply that I am a progressive (that is to say, a person of the democratic left) who focuses quite a large amount of specific attention on the problems and promises of ongoing and upcoming technoscientific change and on the current and emerging state of global technodevelopmental social struggle.
A technoprogressive vantage differs from the usual
technocentric vantages (for example, conventional technophilic or technophobic vantages) in its insistence that the instrumental dimension of technoscientific progress (the accumulation of warranted scientific discoveries and useful applications) is inextricable from its social, cultural, and political dimensions (which variously facilitate and frustrate the practice of science that eventuates in discovery and warrant, through funding, regulation, inducement, and education, and which distribute the costs, risks, and benefits of technodevelopmental change in ways that variously reflect, or fail to reflect, the interests of the diversity of stakeholders to that change).
A technoprogressive vantage differs from the usual
progressive vantages (for example, conventional varieties of democratic left politics) in its assumption that however desirable and necessary the defense of and fight for greater democracy, rights, social justice and nonviolent alternatives for the resolution of interpersonal and institutional conflicts, these struggles are inadequate by themselves to confront the actually existing quandaries of contemporary technological societies unless and until they are accompanied by further technoscientific discovery and a wider, fairer distribution of its useful applications to support and implement these values. In a phrase,
technology needs democracy; democracy needs technology.
Given my avowed technoprogressivity, for whatever that's worth, some of my loyal technocentric readers will have been surprised to see that technoscience questions fail to figure particularly prominently among the urgent political priorities that I
catalogued in the blog post just before this one.
To be fair, you can find glimpses of a concretely technoprogressive (as opposed to just conventionally progressive) agenda in some areas of my current priorities list. I do worry more than one usually finds among progressives about what I take to be the neoliberal perversion of the rhetoric of technoscientific progress. That is to say, I discern a strong and terribly worrying tendency in neoliberalism to figure technodevelopment as culturally autonomous, socially indifferent, and apolitical (even anti-political) in a way that is analogous to, and likely codependent with, the asserted and palpably false spontaneism of its "market" naturalism, which connects to its anti-democratic hostility to any form of social expression that is not subsumed under already constituted exchange protocols, and which encourages scientisms and reductionisms that impoverish the intellectual reach of culture and which unnecessarily exacerbates, to the cost of us all, the ongoing crisis of incommensurability between pragmatic/scientific vocabularies of reasonable warrant and "humanistic"/normative vocabularies of reasonable warrant (roughly, Snow's famous "Two Cultures").
Beyond all that, there are other tantalizingly technoprogressive glimpses here and there among the priorities I laundry-listed the day before yesterday. I insisted on the subsidization of the research, development, and adoption of decentralizing renewable energy sources, I put quite a bit of stress on the need to mandate fact-based science education in matters of sex and drugs to better ensure the scene of informed, nonduressed consent to desired prosthetic practices, I prioritized aspects of the technoprogressive copyfight and a2k (access-to-knowledge) agendas, and I included concerns about cognitive liberty and access to A(rtificial) R(eproductive) T(echnologie)s and safe abortion among my priorities. And as always, there is my ongoing enthusiasm for emerging p2p (peer-to-peer) formations like the
people-powered politics of the critical, collaborative left blogosphere and the organizational energies of the Netroots.
But it remains true that these sorts of technoprogressive concerns are, for the most part, couched in that post in legible mainstream democratic-left vocabularies and are "subordinated" to (it would be better to say they are articulated primarily through recourse to) legible democratic-left priorities.
What this will mean for the "transhumanists" and "futurists" among my regular readership is that they would likely never guess from a glance at my diagnoses of the contemporary sociocultural terrain that my politics were inspired (as they were) in any measure by the tradition of radical left technoscience writing (including Marx and Bookchin) and by utopian left science fiction (like Kim Stanley Robinson).
This is not because I advocate a "stealth" technoprogressive agenda, as some of my critics like to presume, but because I know (as some of them seem not to do) that technoprogressive concerns are grounded absolutely in democratic left politics as they respond to current threats and as they opportunistically take up current openings for promising change.
Not to put too fine a point on it: There is nothing technoprogressive about abstract commitments to nonviolence or social justice that are indifferent to actually existing violence and injustice, just as there is nothing technoprogressive about a focus on distant futures over actually existing problems. This is so because (for one thing among others) the actual futures we will find our way to will be articulated entirely by and through our actual responses to the actual present rather than by abstract commitments to or identifications with imagined futures.
These are the concerns that lead me to the heart of my topic today. There are many technophiles who seem to me to be entranced by what I call "Superlative Technology Discourse." To get at what this claim means to me let me offer up a rudimentary map.
I would distinguish what are sometimes described as "bioconservative" as against "transhumanist" outlooks as the equally undercritical (and in some cases outright perniciously uncritical), broadly technophobic and technophilic responses to bioethical questions. It seems to me that the "bioconservative" versus "transhumanist" distinction is now coming to settle into a broadly "anti-" versus "pro-" antagonism on questions of what is sometimes described as "enhancement" medicine. These attitudes often depend, though they need not, on a prior assumption of a more general "anti-" versus "pro-" antagonism on questions of "technology" in an even broader construal that is probably better described as straightforward technophobia versus technophilia.
It is useful to pause here for a moment and point out that the things that get called "technology" are far too complex, and their effects on the actually existing diversity of stakeholders to technoscientific change likewise far too complex, to justify the assumption of any generalized attitude of "anti-" or "pro-" if what is wanted is to clarify the issues at hand where questions of technodevelopmental politics are concerned.
Indeed, I would go so far as to say that assuming either a generalized "pro-tech" or "anti-tech" perspective is literally unintelligible, so much so that it is difficult not to suspect that technophobic and technophilic discourses alike benefit in some way from the obfuscations they produce.
My own sense is that whatever their differences, both technophobia and technophilia induce an undercritical and hence anti-democratizing attitude toward the complex interplay of instrumental and normative factors that articulate the vicissitudes of ongoing technoscientific change over the surface of the planet and over the course of history.
More specifically, I would propose that both technophobia and technophilia comport all too well with a politics of the natural that tends to conduce especially to the benefit of incumbent elites:
Technophobia will tend to reject novel interventions it denotes as "technology" into customary lifeways it denotes as "nature" (especially those customs and lifeways that correspond to the interests of incumbent elites); meanwhile,
technophilia will tend to champion novel interventions into customary lifeways, indifferent to the expressed interests of those affected by these interventions, in the name of a progress whose idealized end point will be said to actualize or more consistently express some deeper "nature" (of humanity, rationality, culture, freedom, or what have you) toward which development is always only ever partially obtaining, a "nature" in which, all too typically, once again, one tends to find a reproduction of especially those customs and lifeways that correspond to the interests of incumbent elites.
The distinction of Superlative Technology discourses as against Technoprogressive discourses will resonate with these antagonisms of technophilia as against technophobia, of transhumanisms as against bioconservatisms, but it is not reducible to them: Much Transhumanist rhetoric is Superlative, but not all. Many so-called transhumanists are uncritical technophiles, but not all (the often indispensable socialist-feminist technology writer James Hughes is the farthest thing from an uncritical technophile, for example, despite his unfortunate transhumanist-identification). Nevertheless, I do think it is often immensely clarifying to recognize the tendency (again, it is not an inevitability) of technophilia, superlativity, and transhumanism to accompany one another, and to apply this insight when one is struggling to make sense of particular perplexing claims made by particular perplexing technophiles.
Superlative Technology discourse invests technodevelopmental change with an almost Providential significance, and contemplates the prospect of technoscientific change in the tonalities of
transcendence rather than of ongoing historical transformation.
There are many variations and flavors of Superlative Technology discourse, but they will tend to share certain traits, preoccupations, organizing conceits, and rhetorical gestures in common:
(First) A tendency to overestimate our theoretical grasp of some environmental functionality that will presumably be captured or exceeded by a developmentally proximate human-made technology.
(a) Artificial Intelligence is the obvious example here, an achievement whose predicted imminence has been so insistently and indefatigably reiterated by more than a half century's worth of technophiles that one must begin to suspect that a kind of Artificial Imbecillence seizes those who take up the Faith of the Strong Program. (This Imbecillence observation connects to, but does not reduce to, the important charge by Jaron Lanier that one of the chief real-world impacts of the Faith in AI is never the arrival of AI in fact, but a culture among coders that eventuates in so much software that disrespects the actual intelligence of its users in the name of intelligent "functionality.")
My objection to the endlessly frustrated but never daunted Strong Programmites will be taken by many of the AI Faithful themselves to amount to a claim on my part that intelligence must then be some kind of "supernatural" essence, but this reaction itself symptomizes the deeper derangement imposed by a Superlative Technology Discourse. Just because one easily and even eagerly accepts that intelligence is an evolved, altogether material feature exhibited by actually existing organisms in the actually existing environment one has not arrived thereby at acceptance of the Superlative proposition that, therefore, intelligence can be engineered by humans, that desired traits currently associated with intelligence (and not necessarily rightly so) can be optimized in this human-engineered intelligence, or that any of these hypothesized engineering feats are likely to arrive any time soon, given our current understanding of organismic intelligence and the computational state of the art.
(b) One discerns here the pattern that is oft-repeated in Superlative Technology Discourse more generally. Enthusiasts for "nanotechnology" inspired by the popular technology writings of K. Eric Drexler (whose books I have enjoyed myself, even if I am not particularly impressed by many of his fans) will habitually refer to the fact that biology uses molecular machines like ribosomes that partake of nanoscale structures to do all sorts of constructive business in warm, wet physiological environments as a way of "proving" that human beings know now or will know soon enough how to make programmable machines that partake of nanoscale structures to do fantastically more sorts of constructive business in a fantastically wider range of environments. Like the gap between the recognition that intelligence is probably not supernatural (whatever that is supposed to mean) and the belief that we humans are on the verge of crafting non-biological superintelligence, the gap between the recognition of what marvelous things ribosomes can do and the belief that we humans are on the verge of crafting molecular-scaled self-replicating general-purpose robots is, to say the least, considerably wider than one would think to hear the True Believers tell it (I'll grant in advance that one can quibble endlessly about exactly how best to characterize what Superlative Nanotechnology would essentially look like, since the gap in question is usually wide enough for all such characterizations to support my point).
(c) Technological Immortalists go this handwaving one better still: where others wave away the gap between capacities exhibited by biology and capacities proximately engineerable and improvable by human beings, they wave away the gap between an essentially theological concept exhibited by nothing on earth and a presumably proximately engineerable outcome, an overcoming of organismic aging and death. Since even most "Technological Immortalists" themselves will grant that were we to achieve a postulated "superlongevity" through therapeutic intervention we (and this is a "we," one should add, that can only denote those lucky few likely to have access to such hypothesized techniques in the first place, with all that this implies) would no doubt remain vulnerable to some illnesses, or to violent, accidental death nonetheless, it is clarifying to our understanding of Superlative Technology Discourse more generally to ask what on earth it is that makes it attractive for some to figure the desired therapeutic accomplishment of human longevity gains through the rhetoric of "immortality" in the first place.
I am quite intrigued by and somewhat enthusiastic about some of the work of the current patron saint of the Technological Immortalists, Aubrey de Grey, for example, but must admit that I am completely perplexed by the regular recourse he makes himself to the Superlative Technology Discourse of the Technological Immortalists. It seems to me that the resistance to de Grey's SENS research program and its "engineering" focus on what he calls the Seven Deadly Things in some quarters of biogerontological orthodoxy looks to be pretty well described in classical Kuhnian terms of incumbent resistance to scientific paradigm shifts. What is curious to me, however, is that, at the level of rhetoric, were one to embrace the "bioconservative" Hayflickian ideal of a medical practice conferring on everybody on earth a healthy three-score and ten years (or even the 120 years some lucky few humans may have enjoyed), this would be little distinguishable in the therapeutic effects it would actually likely facilitate (as a spur to funding, publication, and so on) from those facilitated by the "transhumanist" ideal of technological immortality. Either way, one sponsors research and development into therapeutic interventions into the mechanisms and diseases of aging that are likely to transform customary expectations about human life-span and the effects of aging on human capacities, but neither way does one find one's way to anything remotely like immortality, invulnerability, or all the rest of the theological paraphernalia of superlongevity discourse. Certainly, looking at the concrete costs, risks, and benefits of particular therapeutic interventions through an immortalist lens confers no clarity or practical guidance whatsoever here and now in the world of actually mortal and vulnerable human beings seeking health, wellbeing, and an amelioration of suffering. The superlativity that gauges a stem-cell therapy either against a dream of immortality or a nightmare of clone armies or Designer Baby genocide seems to me, once again, to leap a gap between actually possible and merely remotely possible engineering in a way far more likely to activate deep psychic resources of unreasoning dread and wish-fulfillment than to clarify our understanding of the actual stakeholder risks and benefits that confront us now or may soon do so.
I leave to the side here for now as more coo-coo bananas than even all the above the curious digital camp of the Technological Immortalists, who metaphorically "spiritualize" digital information and then pretend not to notice that this poetic leap isn't exactly a scientific move, though clearly it's got a good beat that some people like to dance to, and then conjoin their "Uploaded" poem to the Strong Programmatic faith in AI I discussed above and use this wooly discursive cocktail to overcome what often looks like a plain common or garden variety hysterical denial of death. (And, no, such a denial of death is not at all the same thing as loving life, it is not at all the same thing as championing healthcare, it is not at all the same thing as wanting to live as long and as well as one can manage, so spare me the weird robot-cult accusations that I am a "Deathist" just because I do fully expect to die and yet somehow still think life is worth living and coming to meaningful terms with in a way that registers this expectation. By the way, guys, just because you're not "Deathists" don't make the mistake of imagining you're not going to die yourselves. You are. Deal with it, and then turn your desperately needed attentions to helping ensure research and universal access to life-saving and life-extending healthcare practices -- including informed, nonduressed consensual recourse to desired non-normativizing therapies -- to all, please.)
And so, to this (First) tendency to overestimate our current theoretical grasp of some environmental functionality captured and then exceeded by a developmentally proximate human-made technology, usually in consequence of some glib overgeneralization from basic biology, a tendency I claim to be exhibited in most varieties of Superlative Technology discourse, I can add a few more that you can already glean from the discussion of the first tendency in some of the examples above:
(Second) A tendency to underestimate the extreme bumpiness we should expect along the developmental pathways from which the relevant technologies could arrive.
(Third) A tendency to assume that these technologies, upon arrival, would function more smoothly than technologies almost ever do.
And to these three tendencies of Superlative Technology Discourse (which might be summarized by the recognition that warranted consensus science tends to be caveated in ways that pseudoscientific hype tends not to be) I will add a fourth tendency of a somewhat different character, but one that is especially damning from a
technoprogressive standpoint:
(Fourth) A tendency to exhibit a rather stark obliviousness about the extent to which what we call technological development is articulated in fact not just by the spontaneous accumulation of technical accomplishments but by always actually contentious social, cultural, and political factors as well, with the consequence that Superlative Discourse rarely takes these factors adequately into account. This tendency is obviously connected to what Langdon Winner once described as the rhetoric of "autonomous technology."
Actually, it would be better to say that this sort of obliviousness to the interimplication of technoscientific development and technodevelopmental social struggle inspires a
political discourse masquerading as a
non-political one, provoking as it does all sorts of antidemocratic expressions of hostility about the "ignorance of the masses," or expressions about the "need" of the "truly knowledgeable" to "oh-so-reluctantly circumvent public deliberation in the face of urgent technoscientific expediencies," or simply expressions of exhaustion from or distaste about the "meddling interference of political considerations" over technoscientific "advance" (a concept that itself inevitably and stealthily accords with any number of disavowed political values, typically values accepted uncritically and actively insulated from criticism by the very gesture of apoliticism in which they are couched, and all too often values which turn out, upon actual inspection, to preferentially express and benefit the customs and privileges of incumbent elites). Notice that I am proposing here not only that technocentric apoliticism and antipoliticism are actually a politics, but, more specifically, that this highly political "apoliticism" will tend structurally to conduce always to the benefit of conservative and reactionary politics. This is no surprise, since the essence of democratic politics is the embrace of the ongoing contestation of desired outcomes by the diverse stakeholders of public decisions, while the essence of conservative politics is to remove outcomes from contention whenever this threatens incumbent interests.
Quite apart from the ways in which Superlative Technology Discourse often incubates this kind of reactionary (retro)futurist antipoliticism, it is also, in its worrisome proximity to faithful True Belief, just as apt to incubate outright authoritarian forms of the sub(cult)ural politics of marginal and defensive identity -- for much the same reasons that fundamentalist varieties of religiosity do. In these cases, Superlative Technophiliacs substitute for the vitally necessary politics of the ongoing democratic stakeholder contestation of technodevelopmental outcomes, a "politics" of "movement building" in which they struggle instead to corral together as many precisely like-minded individuals as they can in an effort to generate a consensus reality of shared belief sufficiently wide and deep to validate the "reality" (in the sense of a
feeling more than an
outcome) of the specific preferred futures with which they personally identify. Note that this is nothing like the practical politics of mobilizing educational, agitational, and organizational energies to facilitate developmental outcomes, with which its partisans sometimes equate it, but a politics that ultimately contents itself with the material (but moral, not political) edifications of membership, belonging, and identity.
From all of the above, you will notice that Superlative Technology Discourse likes to focus its attention on a more "distant" than proximate future, but it is crucial that this projected futural focus not be pitched to such a distance as to become the abstract impersonal future of Stapledonian or Vingean speculative opera. Rather, Superlative Technology Discourse fixes its gaze on "futures" just distant enough to fuzz away the historical messiness that will inevitably frustrate their ideal fruition while just proximate enough to nestle plausibly within arm's reach of our own lifespan's grasp, especially should one take up the faith that the storm-churn of ongoing technoscientific development is something we can take for granted. (And on this question it is key to recognize that there is literally no single word, no article of faith, more constantly on the lips of the faithful of the various Churches of superlative technology than that scientific development is "accelerating" -- one even regularly hears the arrant foolishness that "acceleration is accelerating" itself, the
reductio ad absurdum of futurological accelerationalizations.)
All of this conveniently and edifyingly distant-but-not-too-distant focusing out-of-focus constitutes a quite unique form of glazing over of the gaze, since, like all faithfulness, it yields the manifold esthetic pleasures of bland wish-fulfillment and catharsis, but unlike conventional faithfulness, it can, for the technoscientifically underliterate at any rate, get away with billing its blinders as foresight, its unfocus as focus, its faith as superior scientificity. In an era of quarterly-horizoned future-forecasting and hype, this sleight of handwaving futurology is a kind of catnip, and in an era of technoconstituted planetary disruption, danger, and despair it is, for some starry-eyed technophiliacs and some bonfire-eyed luddites, well nigh irresistible.
Superlative Technology Discourse aspires in the direction of the omni-predicates of conventional theology (omnipotence, omniscience, omnibenevolence), and makes especially great play over its histrionic abhorrence of all "limits" (an utterly and straightforwardly incoherent notion, of course, but that's not the sort of trifle that superlative techies are apt to worry their pretty little soopergenius heads about), but this is a worldly theology whose incoherent platitudes are voiced in the harsh high-pressure tonalities of the Bible salesman rather than those of the more modest curate. What superlative technology discourse is selling are the oldest Faustian frauds on the books: quite literally, immortality, fantastically superior knowledge, godlike (or, since we're all nerds here, I should say X-Men-like) superpowers, and wealth beyond the dreams of avarice.
H.P. LaLancette, author of the, in my opinion, always witty, usually right on, occasionally a bit frustrating blog Infeasible ("Refuting Transhumanism (So You Don't Have To)"), has posted any number of incisive critiques (and perhaps a few less than incisive ones) against Superlative Technology Discourse as it is expressed in the public arguments of some transhumanist-identified technophiles. In one post, we are treated to this argument:
The way to attack Transhumanism is to show that it is infeasible, which is a lot different than impossible… The difference between impossible and infeasible is money and time. It is possible to build a 747 in your backyard, but it isn't feasible. Why not? Well for a number of boring reasons like: How could you afford it? Where will you get all the materials? How long will it take you? How will you lift the wing to attach it to the fuselage? Etcetera… No one will ever prove Drexler's and de Grey's ideas to be impossible. But it is possible to show that they are infeasible which means we simply don't need to take them seriously … .
I find a lot to sympathize with in this statement, but I want to dwell instead on where I might disagree a little with LaLancette's emphasis (while remaining very much a sympathetic admirer). For me, the facile absurdities of Superlative Technology Discourse are not, on their own terms, sufficiently interesting to attract my sustained attention (the opportunities to skewer idiocy are rich and wide, after all, if that is the sort of thing that floats your boat). I care about Superlative Technology Discourses precisely because I care about the way they come so widely to substitute for or otherwise derange what look to me to be perfectly reasonable and in fact incomparably urgently needed technoprogressive stakeholder discourses on actual and emerging quandaries of nanoscale toxicity, actual and emerging quandaries of molecular biotechnology, actual and emerging quandaries of network and software security, actual and emerging quandaries of genetic, prosthetic, cognitive, and longevity medicine, actual and emerging quandaries of accountability of elected representatives to warranted scientific consensus, and so on. I think that there are enormously useful contributions to be made by people like Mike Treder and Chris Phoenix at the
Center for Responsible Nanotechnology, so long as they manage to disarticulate their project from the Superlative Technology Discourse of the Nanosantological admirers of Drexler who invest a phantasized imminent nanotechnology with the theological trappings of near-omnipotence or the utopian trappings of an effortless superabundance that will circumvent the political impasse of finite resources confronting the infinite desires of our planetary peers. I think that there are enormously useful contributions to be made by people who take projects like Aubrey de Grey's
SENS program seriously, so long as they manage to disarticulate their work from the hyperbolizing and hystericizing discourses of Technological Immortalism, as, for example, many bioethicists who talk about the proximate benefits and costs of longevity medicine in terms like those of Jay Olshansky's "Longevity Dividend" are beginning to do.
In other words, it seems to me too quick simply to dismiss Drexler or de Grey as merely infeasible, inasmuch as what these figures are up to, or what they symptomize, will differ according to whether one reads them through the lens of Superlative Technology or through the lens of technodevelopmental social struggle. There are two ways technocentric thinkers can help to ensure that Superlative Technology Discourse prevails to the cost of any democratizing politics of technodevelopmental social struggle: either by failing to provide the necessary critiques of these hyperbolizing, depoliticizing, obfuscatory Superlative Technology Discourses, or by relinquishing the field of emerging and proximately upcoming technoscientific change to these Superlative Technology Discourses through a failure to provide legitimately technoprogressive alternatives to them.
There are, to be sure, many variants of Superlative Technology Discourse to be found in the self-appointed Futurological Congress of corporate forecasters, digirati, fanboys, and smug technocrats. The three conspicuous, especially illustrative, and I think particularly damaging variations of Superlative Technology Discourse on which I have lavished my attentions today -- namely, the Technological Immortalists, the Singularitarians, and the Nanosantologists -- are in rich and abundant company. But it must be said that all technocentric discourses (among them, very much my own) seem to be prone to dip into and out of superlativity every now and then, even when they are too sensible to stay there for long. A vulnerability to superlativity seems to be an occupational hazard of technocentricity, however technorealist it tries to be. Given the extent to which technodevelopmental discourse has hitherto been articulated almost entirely in light of the specific urgencies of neoliberal corporate-militarist competitiveness, it is hard to see how it could be otherwise. Grasping this vulnerability, and understanding its special stakes, seems to me to be an insight without which one is little likely to formulate truly useful, truly democratizing technoprogressive analyses or campaigns in the first place.