I describe my politics as technoprogressive, which means quite simply that I am a progressive (that is to say, a person of the democratic left) who focuses quite a large amount of specific attention on the problems and promises of ongoing and upcoming technoscientific change and on the current and emerging state of global technodevelopmental social struggle.
A technoprogressive vantage differs from the usual technocentric vantages (for example, conventional technophilic or technophobic vantages) in its insistence that the instrumental dimension of technoscientific progress (the accumulation of warranted scientific discoveries and useful applications) is inextricable from social, cultural, and political dimensions (which variously facilitate and frustrate the practice of science which eventuates in discovery and warrant through funding, regulation, inducement, and education, and which distribute the costs, risks, and benefits of technodevelopmental change in ways that variously reflect or not the interests of the diversity of stakeholders to that change).
A technoprogressive vantage differs from the usual progressive vantages (for example, conventional varieties of democratic left politics) in its assumption that however desirable and necessary the defense of and fight for greater democracy, rights, social justice and nonviolent alternatives for the resolution of interpersonal and institutional conflicts, these struggles are inadequate by themselves to confront the actually existing quandaries of contemporary technological societies unless and until they are accompanied by further technoscientific discovery and a wider, fairer distribution of its useful applications to support and implement these values. In a phrase, technology needs democracy, democracy needs technology.
Given my avowed technoprogressivity, for whatever that's worth, some of my loyal technocentric readers will have been surprised to see that technoscience questions fail to figure particularly prominently among the urgent political priorities that I catalogued in the blog post just before this one.
To be fair, you can find glimpses of a concretely technoprogressive (as opposed to just conventionally progressive) agenda in some areas of my current priorities list. I do worry more than one usually finds among progressives about what I take to be the neoliberal perversion of the rhetoric of technoscientific progress. That is to say, I discern a strong and terribly worrying tendency in neoliberalism to figure technodevelopment as culturally autonomous, socially indifferent, and apolitical (even anti-political) in a way that is analogous to, and likely codependent with, the asserted and palpably false spontaneism of its "market" naturalism, which connects to its anti-democratic hostility to any form of social expression that is not subsumed under already constituted exchange protocols, and which encourages scientisms and reductionisms that impoverish the intellectual reach of culture and unnecessarily exacerbate, to the cost of us all, the ongoing crisis of incommensurability between pragmatic/scientific vocabularies of reasonable warrant and "humanistic"/normative vocabularies of reasonable warrant (roughly, Snow's famous "Two Cultures").
Beyond all that, there are other tantalizingly technoprogressive glimpses here and there among the priorities I laundry-listed the day before yesterday. I insisted on the subsidization of research and development and adoption of decentralizing renewable energy sources, I put quite a bit of stress on the need to mandate fact-based science education in matters of sex and drug education to better ensure the scene of informed, nonduressed consent to desired prosthetic practices, I prioritized aspects of the technoprogressive copyfight and a2k (access-to-knowledge) agendas, and I included concerns about cognitive liberty and access to A(rtificial) R(eproductive) T(echnologies) and safe abortion among my priorities. And as always, there is my ongoing enthusiasm for emerging p2p (peer-to-peer) formations like the people-powered politics of the critical, collaborative left blogosphere and the organizational energies of the Netroots.
But it remains true that these sorts of technoprogressive concerns are, for the most part, couched in that post in legible mainstream democratic-left vocabularies and are "subordinated" to (it would be better to say they are articulated primarily through recourse to) legible democratic-left priorities.
What this will mean to the "transhumanists" and "futurists" among my regular readership is that you would likely never guess from a glance at my diagnoses of the contemporary sociocultural terrain that my politics were inspired (as they were) in any measure by the tradition of radical left technoscience writing (including Marx and Bookchin), and utopian left science fiction (like Kim Stanley Robinson).
This is not because I advocate a "stealth" technoprogressive agenda, as some of my critics like to presume, but because I know (as some of them seem not to do) that technoprogressive concerns are grounded absolutely in democratic left politics as they respond to current threats and as they opportunistically take up current openings for promising change.
Not to put too fine a point on it: There is nothing technoprogressive about abstract commitments to nonviolence or social justice that are indifferent to actually existing violence and injustice, just as there is nothing technoprogressive about a focus on distant futures over actually existing problems. This is so because (for one thing among others) the actual futures we will find our way to will be articulated entirely by and through our actual responses to the actual present rather than by abstract commitments to or identifications with imagined futures.
These are the concerns that lead me to the heart of my topic today. There are many technophiles who seem to me to be entranced by what I call "Superlative Technology Discourse." To get at what this claim means to me let me offer up a rudimentary map.
I would distinguish what are sometimes described as "bioconservative" as against "transhumanist" outlooks as the equally undercritical (and in some cases outright perniciously uncritical), broadly technophobic and technophilic responses to bioethical questions. It seems to me that the "bioconservative" versus "transhumanist" distinction is coming now to settle into a broadly "anti-" versus "pro-" antagonism on questions of what is sometimes described as "enhancement" medicine. These attitudes often, but need not, depend on a prior assumption of a more general "anti-" versus "pro-" antagonism on questions of "technology" in an even broader construal that is probably better described as straightforward technophobia versus technophilia.
It is useful to pause here for a moment and point out that the things that get called "technology" are far too complex and their effects on the actually existing diversity of stakeholders to technoscientific change likewise far too complex to properly justify the assumption of any generalized attitude of "anti-" or "pro-" if what is wanted is to clarify the issues at hand where questions of technodevelopmental politics are concerned.
Indeed, I would go so far as to say that assuming either a generalized "pro-tech" or "anti-tech" perspective is literally unintelligible, so much so that it is difficult not to suspect that technophobic and technophilic discourses both benefit from the obfuscations they produce in some way.
My own sense is that whatever their differences, both technophobia and technophilia induce an undercritical and hence anti-democratizing attitude toward the complex interplay of instrumental and normative factors that articulate the vicissitudes of ongoing technoscientific change over the surface of the planet and over the course of history.
More specifically, I would propose that both technophobia and technophilia comport all too well with a politics of the natural that tends to conduce especially to the benefit of incumbent elites: Technophobia will tend to reject novel interventions it denotes as "technology" into customary lifeways it denotes as "nature" (especially those customs and lifeways that correspond to the interests of incumbent elites); meanwhile, technophilia will tend to champion novel interventions into customary lifeways, indifferent to the expressed interests of those affected by these interventions, in the name of a progress the idealized end point of which will be said to actualize or more consistently express some deeper "nature" (of humanity, rationality, culture, freedom, or what have you) toward which development is now always only partially obtaining, a "nature" in which, all too typically, once again, one tends to find a reproduction of especially those customs and lifeways that correspond to the interests of incumbent elites.
The distinction of Superlative Technology discourses as against Technoprogressive discourses will resonate with these antagonisms of technophilia as against technophobia, of transhumanisms as against bioconservatisms, but it is not reducible to them: Much Transhumanist rhetoric is Superlative, but not all. Many so-called transhumanists are uncritical technophiles, but not all (the often indispensable socialist-feminist technology writer James Hughes is the farthest thing from an uncritical technophile, for example, despite his unfortunate transhumanist-identification). Nevertheless, I do think it is often immensely clarifying to recognize the tendency (again, it is not an inevitability) of technophilia, superlativity, and transhumanism to enjoin one another, and to apply this insight when one is struggling to make sense of particular perplexing claims made by particular perplexing technophiles.
Superlative Technology discourse invests technodevelopmental change with an almost Providential significance, and contemplates the prospect of technoscientific change in the tonalities of transcendence rather than of ongoing historical transformation.
There are many variations and flavors of Superlative Technology discourse, but they will tend to share certain traits, preoccupations, organizing conceits, and rhetorical gestures in common:
(First) A tendency to overestimate our theoretical grasp of some environmental functionality that will presumably be captured or exceeded by a developmentally proximate human-made technology
(a) Artificial Intelligence is the obvious example here, an achievement whose predicted imminence has been so insistently and indefatigably reiterated by more than a half century's worth of technophiles that one must begin to suspect that a kind of Artificial Imbecillence seizes those who take up the Faith of the Strong Program. (This Imbecillence observation connects to, but does not reduce to, the important charge by Jaron Lanier that one of the chief real world impacts of the Faith in AI is never the arrival of AI in fact, but a culture among coders that eventuates in so much software that disrespects the actual intelligence of its users in the name of intelligent "functionality.")
My objection to the endlessly frustrated but never daunted Strong Programmites will be taken by many of the AI Faithful themselves to amount to a claim on my part that intelligence must then be some kind of "supernatural" essence, but this reaction itself symptomizes the deeper derangement imposed by a Superlative Technology Discourse. Just because one easily and even eagerly accepts that intelligence is an evolved, altogether material feature exhibited by actually existing organisms in the actually existing environment one has not arrived thereby at acceptance of the Superlative proposition that, therefore, intelligence can be engineered by humans, that desired traits currently associated with intelligence (and not necessarily rightly so) can be optimized in this human-engineered intelligence, or that any of these hypothesized engineering feats are likely to arrive any time soon, given our current understanding of organismic intelligence and the computational state of the art.
(b) One discerns here the pattern that is oft-repeated in Superlative Technology Discourse more generally. Enthusiasts for "nanotechnology" inspired by the popular technology writings of K. Eric Drexler (whose books I have enjoyed myself, even if I am not particularly impressed by many of his fans) will habitually refer to the fact that biology uses molecular machines like ribosomes that partake of nanoscale structures to do all sorts of constructive business in warm, wet physiological environments as a way of "proving" that human beings know now or will know soon enough how to make programmable machines that partake of nanoscale structures to do fantastically more sorts of constructive business in a fantastically wider range of environments. Like the gap between the recognition that intelligence is probably not supernatural (whatever that is supposed to mean) and the belief that we humans are on the verge of crafting non-biological superintelligence, the gap between the recognition of what marvelous things ribosomes can do and the belief that we humans are on the verge of crafting molecular-scaled self-replicating general-purpose robots is, to say the least, considerably wider than one would think to hear the True Believers tell it (I'll grant in advance that one can quibble endlessly about exactly how best to essentially characterize what Superlative Nanotechnology would look like, since the width of the gap in question is usually wide enough for all such characterizations to support my point).
(c) Technological Immortalists do this handwaving away of the gap between capacities exhibited by biology and capacities proximately engineerable and improvable by human beings one better still, by handwaving away the gap between an essentially theological concept exhibited by nothing on earth and a presumably proximately engineerable outcome, an overcoming of organismic aging and death. Since even most "Technological Immortalists" themselves will grant that were we to achieve a postulated "superlongevity" through therapeutic intervention we (and this is a "we," one should add, that can only denote those lucky few likely to have access to such hypothesized techniques in the first place, with all that this implies) will no doubt remain vulnerable to some illnesses, or to violent, accidental death nonetheless, it is clarifying to our understanding of Superlative Technology Discourse more generally to think what on earth it is that makes it attractive for some to figure the desired therapeutic accomplishment of human longevity gains through the rhetoric of "immortality" in the first place.
I am quite intrigued and somewhat enthusiastic about some of the work of the current patron saint of the Technological Immortalists, Aubrey de Grey, for example, but must admit that I am completely perplexed by the regular recourse he makes himself to the Superlative Technology Discourse of the Technological Immortalists. It seems to me that the resistance to de Grey's SENS research program and its "engineering" focus on what he calls the Seven Deadly Things in some quarters of biogerontological orthodoxy looks to be pretty well described in classical Kuhnian terms of incumbent resistance to scientific paradigm shifts. What is curious to me, however, is that at the level of rhetoric, were one to embrace the "bioconservative" Hayflickian ideal of a medical practice conferring on everybody on earth a healthy three-score and ten years, or even the 120 years some lucky few humans may have enjoyed, this would be little distinguishable in the therapeutic effects it would actually likely facilitate (as a spur to funding, publication, and so on) from those facilitated by the "transhumanist" ideal of technological immortality. Either way, one sponsors research and development into therapeutic interventions into the mechanisms and diseases of aging that are likely to transform customary expectations about human life-span and the effects of aging on human capacities, but neither way does one find one's way to anything remotely like immortality, invulnerability, or all the rest of the theological paraphernalia of superlongevity discourse. Certainly, looking at the concrete costs, risks, and benefits of particular therapeutic interventions through an immortalist lens confers no clarity or practical guidance whatsoever here and now in the world of actually mortal and vulnerable human beings seeking health, wellbeing, and an amelioration of suffering.
The superlativity that gauges a stem-cell therapy either against a dream of immortality or a nightmare of clone armies or Designer Baby genocide seems to me, once again, to leap a gap between actually possible as against remotely possible engineering, a leap far more likely to activate deep psychic resources of unreasoning dread and wish-fulfillment than to clarify our understanding of the actual stakeholder risks and benefits that confront us now or may soon.
I leave to the side here for now as more coo-coo bananas than even all the above the curious digital camp of the Technological Immortalists, who metaphorically "spiritualize" digital information and then pretend not to notice that this poetic leap isn't exactly a scientific move, though clearly it's got a good beat that some people like to dance to, and then conjoin their "Uploaded" poem to the Strong Programmatic faith in AI I discussed above and use this wooly discursive cocktail to overcome what often looks like a plain common or garden variety hysterical denial of death. (And, no, such a denial of death is not at all the same thing as loving life, it is not at all the same thing as championing healthcare, it is not at all the same thing as wanting to live as long and as well as one can manage, so spare me the weird robot-cult accusations that I am a "Deathist" just because I do fully expect to die and yet somehow still think life is worth living and coming to meaningful terms with in a way that registers this expectation. By the way, guys, just because you're not "Deathists" don't make the mistake of imagining you're not going to die yourselves. You are. Deal with it, and then turn your desperately needed attentions to helping ensure research and universal access to life-saving and life-extending healthcare practices -- including informed, nonduressed consensual recourse to desired non-normativizing therapies -- to all, please.)
And so, to this (First) tendency to overestimate our current theoretical grasp of some environmental functionality captured and then exceeded by a developmentally proximate human-made technology, usually in consequence of some glib overgeneralization from basic biology, a tendency I claim to be exhibited in most varieties of Superlative Technology discourse, I can add a few more that you can glean from the discussion of the first tendency, in some of the examples above:
(Second) A tendency to underestimate the extreme bumpiness we should expect along the developmental pathways from which the relevant technologies could arrive.
(Third) A tendency to assume that these technologies, upon arrival, would function more smoothly than technologies almost ever do.
And to these three tendencies of Superlative Technology Discourse (which might be summarized by the recognition that warranted consensus science tends to be caveated in ways that pseudoscientific hype tends not to be) I will add a fourth tendency of a somewhat different character, but one that is especially damning from a technoprogressive standpoint:
(Fourth) A tendency to exhibit a rather stark obliviousness about the extent to which what we call technological development is articulated in fact not just by the spontaneous accumulation of technical accomplishments but by always actually contentious social, cultural, and political factors as well, with the consequence that Superlative Discourse rarely takes these factors adequately into account at all. This tendency is obviously connected to what Langdon Winner once described as the rhetoric of "autonomous technology."
Actually, it would be better to say that this sort of obliviousness to the interimplication of technoscientific development and technodevelopmental social struggle inspires a political discourse masquerading as a non-political one, provoking as it does all sorts of antidemocratic expressions of hostility about the "ignorance of the masses," or expressions about the "need" of the "truly knowledgeable" to "oh-so reluctantly circumvent public deliberation in the face of urgent technoscientific expediencies," or simply expressions of exhaustion from or distaste about the "meddling interference of political considerations" over technoscientific "advance" (a concept that itself inevitably stealthily accords with any number of disavowed political values, typically values accepted uncritically and actively insulated from criticism by the very gesture of apoliticism in which they are couched, and all too often values which turn out, upon actual inspection, to preferentially express and benefit the customs and privileges of incumbent elites). Notice that I am proposing here not only that technocentric apoliticism and antipoliticism are actually a politics, but more specifically, that this highly political "apoliticism" will tend structurally to conduce always to the benefit of conservative and reactionary politics. This is no surprise since the essence of democratic politics is the embrace of the ongoing contestation of desired outcomes by the diverse stakeholders of public decisions, while the essence of conservative politics is to remove outcomes from contention whenever this threatens incumbent interests.
Quite apart from the ways in which Superlative Technology Discourse often incubates this kind of reactionary (retro)futurist anti-politicism it is also, in its worrisome proximity to faithful True Belief, just as apt to incubate outright authoritarian forms of the sub(cult)ural politics of marginal and defensive identity -- for much the same reasons that fundamentalist varieties of religiosity do. In these cases, Superlative Technophiliacs substitute for the vitally necessary politics of the ongoing democratic stakeholder contestation of technodevelopmental outcomes, a "politics" of "movement building" in which they struggle instead to corral together as many precisely like-minded individuals as they can in an effort to generate a consensus reality of shared belief sufficiently wide and deep to validate the "reality" (in the sense of a feeling more than an outcome) of the specific preferred futures with which they personally identify. Note that this is not anything like the practical politics that seeks to mobilize educational, agitational, and organizational energies to facilitate developmental outcomes with which it is sometimes equated by its partisans, but a politics that ultimately contents itself with the material (but moral, not political) edifications of membership, belonging, and identity.
From all of the above, you will notice that Superlative Technology Discourse likes to focus its attention on a more "distant" than proximate future, but it is crucial that this projected futural focus not be pitched to such a distance as to become the abstract impersonal future of Stapledonian or Vingean speculative opera. Rather, Superlative Technology Discourse fixes its gaze on "futures" just distant enough to fuzz away the historical messiness that will inevitably frustrate their ideal fruition while just proximate enough to nestle plausibly within arm's reach of our own lifespan's grasp, especially should one take up the faith that the storm-churn of ongoing technoscientific development is something we can take for granted. (And on this question it is key to recognize that there is literally no single word, no article of faith, more constantly on the lips of the faithful of the various Churches of superlative technology than that scientific development is "accelerating" -- one even regularly hears the arrant foolishness that "acceleration is accelerating" itself, the reductio ad absurdum of futurological accelerationalizations.)
All of this conveniently and edifyingly distant but not too distant focusing out-of-focus constitutes a quite unique form of glazing over of the gaze, since, like all faithfulness, it yields the manifold esthetic pleasures of bland wish-fulfillment and catharsis, but unlike conventional faithfulness, it can, for the technoscientifically underliterate at any rate, get away with billing its blinders as foresight, its unfocus as focus, its faith as superior scientificity. In an era of quarterly-horizoned future-forecasting and hype, this sleight of handwaving futurology is a kind of catnip, and in an era of technoconstituted planetary disruption, danger, and despair it is, for some starry-eyed technophiliacs and some bonfire-eyed luddites, well nigh irresistible.
Superlative Technology Discourse aspires in the direction of the omni-predicates of conventional theology (omnipotence, omniscience, omnibenevolence), and makes especially great play over its histrionic abhorrence of all "limits" (an utterly and straightforwardly incoherent notion, of course, but that's not the sort of trifle that superlative techies are apt to worry their pretty little soopergenius heads about) but this is a worldly theology whose incoherent platitudes are voiced in the harsh high-pressure tonalities of the Bible-salesman rather than those of the more modest curate. What superlative technology discourse is selling are the oldest Faustian frauds on the books: quite literally, immortality, fantastically superior knowledge, godlike (or, I should say, we're all nerds here, X-Men-like) superpowers, and wealth beyond the dreams of avarice.
H.P. LaLancette, author of the, in my opinion, always witty, usually right on, occasionally a bit frustrating blog Infeasible ("Refuting Transhumanism (So You Don't Have To)"), has posted any number of incisive critiques (and perhaps a few less than incisive ones) against Superlative Technology Discourse as it is expressed in the public arguments of some transhumanist-identified technophiles. In one post, we are treated to this argument:
The way to attack Transhumanism is to show that it is infeasible, which is a lot different than impossible… The difference between impossible and infeasible is money and time. It is possible to build a 747 in your backyard, but it isn't feasible. Why not? Well for a number of boring reasons like: How could you afford it? Where will you get all the materials? How long will it take you? How will you lift the wing to attach it to the fuselage? Etcetera… No one will ever prove Drexler's and de Grey's ideas to be impossible. But it is possible to show that they are infeasible which means we simply don't need to take them seriously … .
I find a lot to sympathize with in this statement, but I want to focus instead on where I might disagree a little with LaLancette's emphasis (while remaining very much a sympathetic admirer). For me, the facile absurdities of Superlative Technology Discourse are not, on their own terms, sufficiently interesting to attract my sustained attention (the opportunities to skewer idiocy are rich and wide, after all, if that is the sort of thing that floats your boat). I care about Superlative Technology Discourses precisely because I care about the way they come so widely to substitute for or otherwise derange what looks to me to be perfectly reasonable and in fact incomparably urgently needed technoprogressive stakeholder discourses on actual and emerging quandaries of nanoscale toxicity, actual and emerging quandaries of molecular biotechnology, actual and emerging quandaries of network and software security, actual and emerging quandaries of genetic, prosthetic, cognitive, and longevity medicine, actual and emerging quandaries of accountability of elected representatives to warranted scientific consensus, and so on. I think that there are enormously useful contributions to be made by people like Mike Treder and Chris Phoenix at the Center for Responsible Nanotechnology: so long as they manage to disarticulate their project from the Superlative Technology Discourse of the Nanosantological admirers of Drexler who invest a phantasized imminent nanotechnology with the theological trappings of near-omnipotence or the utopian trappings of an effortless superabundance that will circumvent the political impasse of finite resources confronting the infinite desires of our planetary peers.
I think that there are enormously useful contributions to be made by people who take projects like Aubrey de Grey's SENS program seriously: so long as they manage to disarticulate their work from the hyperbolizing and hystericizing discourses of Technological Immortalism, as, for example, many bioethicists who talk about the proximate benefits and costs of longevity medicine in terms like those of Jay Olshansky's "Longevity Dividend" are beginning to do.
In other words, it seems to me too quick to simply dismiss Drexler or de Grey as only infeasible, inasmuch as what these figures are up to or what they symptomize will differ according to whether or not one reads them through the lens of Superlative Technology or through the lens of technodevelopmental social struggle. There are two ways technocentric thinkers can help to ensure that Superlative Technology Discourse prevails to the cost of any democratizing politics of technodevelopmental social struggle: either to fail to provide the necessary critiques of these hyperbolizing, depoliticizing, obfuscatory Superlative Technology Discourses, or to relinquish the field of emerging and proximately upcoming technoscientific change to these Superlative Technology Discourses by failing to provide legitimately technoprogressive alternatives to them.
There are, to be sure, many variants of Superlative Technological Discourse to be found in the self-appointed Futurological Congress of corporate forecasters, digirati, fanboys, and smug technocrats. The three conspicuous, especially illustrative, and I think particularly damaging variations of Superlative Technology Discourse on which I have lavished my attentions today -- namely, the Technological Immortalists, the Singularitarians, and the Nanosantologists -- are in rich and abundant company. But it must be said that all technocentric discourses (among them, very much my own) seem to be prone to dip into and out of superlativity, every now and then, even when they are too sensible to stay there for long. A vulnerability to superlativity seems to be an occupational hazard of technocentricity, however technorealist it tries to be. Given the extent to which technodevelopmental discourse has been articulated hitherto almost entirely in light of the specific urgencies of neoliberal corporate-militarist competitiveness, it is hard to see how it could be otherwise. Grasping this vulnerability, and understanding its special stakes, seems to me to be an insight without which one is little likely to formulate truly useful, truly democratizing technoprogressive analyses or campaigns in the first place.
14 comments:
I find myself taken in by superlative discourse fairly often, and I go back and forth about how I feel with regard to this. What you remind me of, however, is that even if superlative discourse is true, it's the technocritical engagement with real technologies and policies which is ultimately actually useful in achieving those goals, and it is the technocritical engagement with social issues that gives emerging technology a chance at being ethical.
Outside of this sort of engagement, the rhetoric of superlative technology often inspires me, gives me hope, but so do technocritical positions. And given a choice, I need to always choose the latter. I can hope for some of the former, but that's ultimately useless in terms of real world action.
even if superlative discourse is true...
But the point is that Superlative Discourse cannot be true. Nor is truth, in the sense you mean, its proper purpose -- any more than the declarations of the faithful are assertions of truth, so much as of enthusiasm.
But, as you say, there is something inspiring in Superlativity, and I'm the last person to deny the value of that, in its right place. It's when Superlativity demands True Belief, or deranges sense where sense is what's wanted most, that the Superlative does its damage and endorses its frauds.
There's a place for enthusiasm, there's a place for utopianism, there's a place for pleasure. But democracy has its demands as well, and the fraught edifications of democracy are not the pleasures of Superlativity.
When we turn, as we must, to the political... then Superlativity is the language of the Priestly mouthpieces of authoritarian order. That's when one has to take care.
I'd add that one of the characteristics of the technologies of superlative discourse is their *imminence*. I don't simply mean the "acceleration" meme, but that the technologies in question are just around the corner, no matter how many theoretical or procedural hurdles may be left. For the hardcore advocates of strong AI/ molecular nanotech/ "immortality"/ the singularity, it's not enough that the futures they predict would be wholly transformative -- the futures must be ready to happen far sooner than anyone might think.
I think that's one of the important parallels between the singularitarians and the "rapture-ready" types, and is an interesting consequence of a short-term-thinking society. If the moment of transcendence wasn't due for another fifty years, it would be hard to get (and keep) people excited about its potential. Conversely, because the signs and portents (whether symbolic or technological) that must happen before the moment arises are ambiguous, the true believers can maintain that excitement about the any-day-now Rapture/Singularity for far longer than one might rationally expect.
Interestingly, in the case of many of the radical longevity folks (and, to a growing degree, the molecular nano community), the "it'll happen any minute" excitement has given way to a grudging "we're now getting a handle on how this might work, and it still looks possible, but not for a while yet."
Well, speaking of "fifty years"...
This is exactly what we find on the IEET about page: "In the next fifty years, artificial intelligence, nanotechnology, genetic engineering and cognitive science will allow human beings to transcend the limitations of the human body."
When I find myself needing to explain what the IEET does, I naturally refer to its own description as a guide. In my humble opinion, though, this particular fifty-year claim is more characteristic of what Paul Saffo calls "mythical seeing" than of responsible forecasting.
Incidentally, I'm not sure I agree, Jamais (if you don't mind my using your first name), that fifty years *seems* distant. Different people will respond differently to different timetables, of course. But the sorts of people I usually interact with would, I think, consider fifty years to be a remarkably short time period for the kind of transcendence the IEET advertises so visibly.
Jamais, I agree with everything you say here, especially including the very last part in which you point to the emergence of people who are seriously interested in longevity and nanoscale technologies in non-Superlative versions. I think that is very hopeful.
Your point about imminence cannot be stressed enough: you're right that it whomps up enthusiasm and money in a perniciously short-term business cycle.
I will add (I'm a broken record on this point, I know) that it tends to provoke what I worry are the profoundly anti-democratic technocratic ideas that the demands of urgency circumvent the desire for stakeholder deliberation when it comes to these technologies.
Jonathan, as one of the non-transhumanist participants at IEET I will say that I would prefer a technoprogressive characterization of our brief, something about addressing the problems and promises of ongoing and upcoming technoscientific change from a perspective that values democracy over incumbency, that values open futures over settled customs mistaken for "nature," and values the scene of informed, nonduressed consent over elite prescriptions. I agree that the whole line on "transcending everything bodily any day now" seems to me rather more attuned to transhumanist subcultural idiosyncrasies than a progressive technodevelopmental policy think tank possibly really needs to be, even if many transhumanist-identified folks number among its guiding lights.
With regards to Jamais' comments on timescales... speaking for myself, the timescales are not a huge issue. Doesn't make much of a difference to me whether these advances happen in 10 years or 100, as long as we continue working towards them safely and ethically in the now. I can also speak for other Singularitarians when I say "as long as it takes" tends to be the time forecast, not any particular short timespan. Even if indefinite life extension is not achieved by around the time I am scheduled to die of old age (~2065, not very likely), there's always cryonics.
It's also interesting that, if you read all of CRN's site, you'll see that Chris Phoenix and Mike Treder are engaging in exactly the sort of "superlative" prognostications that Dale is disgusted by. This is because MNT factories, if invented, would have truly impressive qualities. I've been watching and communicating with CRN since their founding and know that their project is expressly NOT disarticulated from what Dale calls the "Superlative Technology Discourse". Treder and Phoenix are both serious transhumanists who want to live for a very long time as posthuman beings: quite in opposition to the anti-transhumanist diatribe laid out here.
I'm sorry that Michael Anissimov finds in this critique nothing but emotional abuse and denunciation. I daresay many careful readers will find some thoughtful analysis here and there in it, even if they are not persuaded by it. Thankfully (for me, though possibly not for Michael) not all people interested in technodevelopmental social struggle are robot cultists.
As for Mike Treder and Chris Phoenix, I would be the last person to speak for them and certainly I wouldn't want to put either of them on the spot -- but I stick to what I have said here in the piece. I think they have enormously useful things to say even when I do not entirely agree with them; I continue to read their work with profit and pleasure, but I still think they are all the more useful the more carefully they manage to disarticulate their formulations from the Superlative Technology Discourse of their sub(cult)urally transhumanist fans.
For the rest, two quick corrections and I'm done for now. I think it should go without saying that technoscientific developments can be plenty "impressive" without taking on the coloration of Superlative Technology Discourse in the sense in which I have articulated it here. And, finally, everybody should rest assured that I am not so much "disgusted" by Michael's own Superlative Technology Discourse as a little embarrassed for him by it.
Michael Anissimov wrote:
> Doesn't make much of a difference to me whether these advances
> happen in 10 years or 100, as long as we continue working towards
> them safely and ethically in the now.
Of course, that reflects the evolution of the Singularitarians' (read:
Eliezer Yudkowsky's) party line.
A while back, it was something like "2.67 human beings die each
second; look how many people SIAI will be saving by bringing
about the Singularity years sooner than it otherwise would have
happened".
Nowadays it's "One false move, and the Singularity will wipe out
the whole human race. Only SIAI can negotiate that knife
edge safely, however long it takes."
The bottom line in both cases is "The future of humanity,
or even the galaxy, or even the Universe, depends on SIAI
[on its Guru, most particularly -- everybody else is dispensable,
including **you**, Michael]."
One of the reasons your writing comes off as spindoctoring
and PR is that it tracks, so inerrantly, the latest
Yudkowskian Encyclicals. Why don't you try thinking on your
own, for a change? You've got the intelligence for it --
you've simply fallen under the spell of a narcissistic guru.
Don't be embarrassed, happens to the best of 'em, but please
take time out of your busy schedule as publicist to
investigate the situation. It's painful to face, but though
you may be momentarily sadder, you'll end up wiser.
I would fully agree that fifty years isn't a terribly long time, but in terms of broad public discourse, it may as well be forever. It's damnably hard to get people to pay attention to the long slow threat from climate disruption, and that's something that is demonstrably happening, not a likely but not as yet manifest possibility.
To nuance my response a bit, I'm far more concerned about organizations that say "X is coming, and coming far sooner than people realize, so if you're not on the team you'll be roadkill, but if you are on the team, you'll be a god/ a posthuman/ rich beyond avarice/ immortal" than I am about organizations that say "X is likely to happen this century, and may happen faster than we think -- even in the next decade -- so it's incumbent upon us to pay attention and to start thinking about policies and strategies to make sure that X happens in a safe and responsible fashion." In short, I have a great deal more comfort with organizations and people that argue that a transformative future mandates participatory agency than with those that argue that a transformative future is going to happen *to us*, whether we like it or not.
And while I won't speak for Mike & Chris, either, since I do work closely with them these days I can say with a great deal of certainty that their concepts of the implications of the emergence of molecular manufacturing bear little resemblance to what Dale characterizes as Superlative Discourse. Arguing that an ecosystem of inventions can have radically disruptive effects (both positive and negative) is not the same as arguing that said technologies will make us transcend all that has gone before, and make all that is solid melt into air (as it were).
Jamais, Dale,
It's true that CRN is treating the implications of molecular manufacturing (MM) as something that needs to be talked about, rather than something that must simply be submitted to.
On the other hand, our analysis of the *cause* of MM's implications includes several factors that are uncomfortably close to Superlative Discourse. We do assume that if a nanofactory is built, it'll be so powerful that a lot of people will use it. We do assert that it'll happen significantly sooner than most people expect. And we do assume that the relevant science is understood well enough to enable all the necessary engineering Any Day Now.
Dale could argue that this is simply an evolution of the (allegedly) broken Drexlerian message into something more palatable but still wrong. I'd argue that Drexler's basic science was never broken, and his engineering has evolved to the point where it's not broken either. I'd also argue that the observable and solidly-predictable progress in computers, biochemistry, computational chemistry, and microscopy will soon make MM not just plausible, but straightforward. And finally, I'd argue that a powerful general-purpose technology will be eagerly adopted for at least some applications, albeit probably not all the predicted applications.
But I don't think these arguments are enough to allow us to dismiss Dale's points out of hand. It may be that we're fundamentally underestimating the systemic resistance to MM-type things, and that there'll never be an abrupt change in manufacturing capability even though the science and engineering appear to support it. In that case, CRN's more dire projections would lose force.
And I won't deny that CRN's intellectual heritage comes from groups and ideas that are closer to Superlative Discourse than CRN is. Whether this means we've found a sensible middle ground, or that our argument is fundamentally flawed, is left as an exercise for the reader.
Dale, thanks for making me think.
Chris
Chris writes,
I'd argue that Drexler's basic science was never broken… [and] that the observable and solidly-predictable progress in computers, biochemistry, computational chemistry, and microscopy will soon make MM not just plausible, but straightforward.
When these claims are reasonably warranted and reasonably caveated I have no problem with them at all (even if possibly I am a little more skeptical here and there than you might be). I feel much the same about reasonably warranted and reasonably caveated claims by Aubrey de Grey, for example.
And finally, I'd argue that a powerful general-purpose technology will be eagerly adopted for at least some applications, albeit probably not all the predicted applications.
This is where the rubber hits the road in my view. (Of course, I'm a rhetorician and not a scientist, so that may help account for my emphasis!) The applications that actually will be adopted, as well as the actual distribution of costs, risks, and benefits of abilities that emerge along the developmental pathway toward what you would describe as mature molecular manufacturing will all be articulated by political, social, and cultural forces that stand in a (to put it mildly) complex relation to questions of "the science" in the sense you were talking about before.
I have called the variation of Superlative Technology Discourse that I connect with MM the Nanosantalogical Variation. This is a rather arch designation to be sure, but one that stresses my sense that Superlativity in nanotech discourse tends to be a matter of political naivete as much or more than scientific recklessness. It is very specifically the dream that an abrupt near-term arrival of nanotechnological abundance will circumvent the impasse of a diversity of stakeholders in a shared and finite world, or the dream that such an arrival will effortlessly remedy the damage of centuries of extractive industry, and similar dreams that receive the bulk of my Superlativity critique when futurological talk turns to nanotech.
I will admit that I do also think that many MM enthusiasts make rather too sweeping, too imminent, and too uncaveated Providential claims about nanotechnology in consequence of a popular Superlative framing of the discourse.
But it has always seemed to me that CRN has devoted considerable energy to nuancing and frustrating such Superlativity, even if occasionally it may contribute to it a bit in its moments of enthusiasm. Mike Treder has been very good in the CRN blog at insisting on the cultural contexts of technodevelopment and on the political complexities of technology diffusion in a global context. You and Mike together seem to me to provide complementary interventions against too Superlative a construal of MM (even if it does not seem to me that many of your fans can always be counted on to be quite so careful).
At the end of your comment you suggest that perhaps CRN is a "middle ground"… but I think it is important to grasp that reasonableness is not a matter of striking a "balance" between Luddism and Superlativity.
Now, I'm not saying that what you guys say is always to my liking, since obviously I am probably quite a bit more skeptical than you are about developmental timescales, likelihood of smooth function, and expectations of progressive outcomes in technology diffusion (without strong democratic organizing to insist on this), but who cares?
To be fair, even so strong a critic of Superlativity as I am surely contributes to it myself, in occasional flights of enthusiasm. As I say in the piece, a vulnerability to Superlativity is an occupational hazard of a technocentric perspective, one we should always be on the lookout for once we grasp its dangers.
It may be that we're fundamentally underestimating the systemic resistance to MM-type things, and that there'll never be an abrupt change in manufacturing capability even though the science and engineering appear to support it.
I would caution against too monolithic a characterization of resistance to and support for what you are calling "MM-type things." There are many applications of MM I would resist and many I would support, and there are many applications the actual plausibility of which will be determined by political resistance and support more than by the "logical possibilities" available in principle to the state of the art. You will misconstrue the character of both my resistances and my support if you attribute them to too broad-stroked a stance of "pro-" or "anti-" technology of the kind that tends to freight both Luddism and Superlativity.
Forgive the late post (although I'm glad I came back! There's a lot of good conversation here), but this one has been cooking in my mind, as many posts do.
"Notice that I am proposing here not only that technocentric apoliticism and antipoliticism is actually a politics, but more specifically, that this highly political "apoliticism" will tend structurally to conduce always to the benefit of conservative and reactionary politics."
This is a point that's new to me, and I am keen to get a real thorough grasp of the argument (and you thought you couldn't teach me much ;).
I could not help hearing a rather disturbing echo of "Either you're with us, or against us" that I can't say sits very well. How would you respond to that similarity?
I could not help hearing a rather disturbing echo of "Either you're with us, or against us"... How would you respond to that similarity?
Who is the "us" and who is the "them" in this echo?
I was just trying to make the point that literally every technoscientific and developmental outcome is historically specific, arriving through historically specific articulations (via disputation, social struggle, vicissitudes in matters of funding and regulation, serendipities in the lab, eddies in communication, fashion, education, and so on) all of which are in some measure accidental, and any of which could easily have been otherwise. These outcomes settle -- to the extent that they manage the feat -- into institutional, factual, normative, customary formations that are quite likely, however natural(ized) they seem for now, to become otherwise in the future through the same articulative forces as these incessantly sweep a world shared by free, intelligent, creative, expressive, provocative, problem-solving peers.
Given this historical specificity and given this contingency, it stands to reason that when technocrats or Superlative Technology enthusiasts, or even, sometimes, common or garden variety scientists claim to rise "above the political fray," or propose that they have assumed an "apolitical" vantage from which to assess some desired technodevelopmental outcome (actual, historical, or conjectural), the apoliticism or even anti-politicism associated with this gesture (usually loudly, often triumphally, poor dears) is an entirely rhetorical production.
That "apoliticism" is very much a politics, not to put too fine a point on it.
Now, the gesture of disavowing political considerations in the name of a "neutrality" (interestingly enough, "urgency" provides the pretext for the obverse move in this vein, structurally connected with the first) that goes on to do conspicuously political work is a gesture that can serve literally any political end. (This may be the force of the intervention you're making?)
However, I do think we can note that even if anyone, of any political persuasion, any "us" to any "them," can opportunistically take up this falsifying political protestation to apoliticism, it is also true that this is a gesture that will conduce especially to the benefit of conservative-elitist over progressive-democratic politics.
The reason I say this is found in the sentences that follow the one you quoted: "[T]he essence of democratic politics is the embrace of the ongoing contestation of desired outcomes by the diverse stakeholders of public decisions, while the essence of conservative politics is to remove outcomes from contention whenever this threatens incumbent interests."
This tells you quite a lot about my perspective on politics. I am moved to advocate for social justice and the amelioration of unwanted suffering on the basis of moral and esthetic (a word I use where many others would use the word "religious") considerations, and these considerations provide a personal moral and esthetic rationale for my democratic politics.
But from a political perspective, fairness, security, satisfaction, and the rest are means to the end of ensuring to the widest, deepest extent possible that people have a say in the public decisions that affect them. Fairness, security, satisfaction inculcate the stake and amplify the say of democratic peers; they bolster the scene of informed, nonduressed consent on which democratic contestation relies for its continued play through history.
The gesture of apoliticism seems to me to conduce to conservatism as a basic structural matter, since it tends to function to withdraw from contestation some settled outcome or what are taken to be a desired outcome's constitutive supports. Although this is a tactic that can easily be taken up opportunistically by particular "progressive" campaigns as easily by "conservative" ones in principle, the politics of the gesture itself are structurally conservative, in that they express a politics of depoliticization. And depoliticization always props up the status quo in some measure, as repoliticization always threatens to undermine it by exposing and expressing its contingency.
Democratic-Progressives would be foolish indeed to make recourse to such a strategy of depoliticization with any regularity (if ever), else they will find themselves duped by incumbent interests in no time at all. Indeed, if we broaden the terms of this gesture, we find ourselves soon enough on the familiar territory of the mechanism of "selling out."
Once we ascend to a kind of meta-political level, though, it no longer is clear to me how "us" and "them" are meant to be functioning in your worries about my claim that apoliticism is structurally more conducive to conservative politics. If the "us" versus "them" is supposed to translate to something like "democrats" versus "anti-democrats," it seems to me the trouble is that most people will tend to be more democratic in some aspects of their lives than others, depending on what privileges they benefit from, what satisfactions they have come to imagine indispensable to the coherence of their narrative selfhood, and their awareness of the conditions under which these privileges and satisfactions are secured and distributed. I daresay we all of us have democrats and anti-democrats within our own souls. Indeed, a recognition of what we ourselves are capable of when we feel insecure or ill-treated is probably a precondition for any reliable avowal of democratic over anti-democratic values. My point is, once we ascend to this level, I personally find it hard to figure out who "us" and "them" ultimately amount to.
It's not that I don't recognize that anti-democratic forces in the Republican Party in the USA, or among neoliberal and neoconservative and corporate-militarist partisans around the globe, have faces and names. Obviously I call them out here on Amor Mundi all the time.
My point is just to insist that the deep question of democratizing as against authoritarian politics is not properly captured in a tribalist evocation of "us" vs. "them" (Carl Schmitt famously -- and in my view absolutely falsely -- grounded his political philosophy in just such a friend-foe distinction). The indispensable satisfactions of membership (recognition, support, and so on) depend on the practices of identification and, crucially, of disidentification at the heart of moral life -- moral from mores, what Wilfrid Sellars called "we-intentions." But with democratic politics we shift into the normative sphere of ethics, of formal judgments that solicit (even if they never achieve) universal assent. Rights discourse, properly speaking, is an ethical rather than a moral discourse on my terms, for example. Democracy is the idea that people should have a say in the public decisions that affect them. This "should" is a destination at which you can arrive through many routes: a conception of human dignity connected to consent, pragmatic insights about ways to guard against social instability, to undermine corruption in authoritative institutions, to provide nonviolent alternatives for the legitimate settlement of disputes, to facilitate problem-solving collaboration, whatever.
Once one arrives at the democratic vantage, though, one has found one's way to an ethical point of view, not a moral one. Even the "theys" who contingently oppose some democratizing campaign can be construed as participants in the contestation of which democracy literally consists in the here and now. Again, this does not diminish the democrat's capacity to distinguish allies from foes, nor her capacity to assess tendencies, attitudes, and outcomes as democratizing or anti-democratizing. But the pleasures and powers and knowledges of political and ethical judgment and action are not, properly so-called, moral pleasures, "us" versus "them" pleasures.
Does this answer your question, Nato, or have I only managed to make things more confused? For me, this is a question that takes us into the white-hot center of my own philosophical preoccupations.