What follows is the (somewhat edited) text of my reply to friend of blog Michael Anissimov in an ongoing exchange prompted by the discussion a few days ago of Superlative Technology Discourse. Michael posted a very considered reply to my post, and I have been meaning to recommend that folks read his critique and the exchanges it inspired. Whatever our differences, I do want to say that I find the conversation usually enjoyable and occasionally illuminating, and to thank Michael for soldiering so gallantly on.
Michael says that “Many people identify with the vision put forth by Eric Drexler [who clarified and popularized the idea of molecular manufacturing].” And at the end of his original response to me he claimed also to “strongly identify as what Dale would call a Superlative Technologist.”
Finding a vision appealing or provisionally agreeing with an argument is not identification with it, in my view. What, politically, then, is the force of insistent claims of strong identification with particular futurist scenarios in the context of technodevelopmental social struggle?
Let me put it this way: There should be in my view no such thing as “the future” with which some “we” identifies the better to fight to unilaterally implement it on an identity politics model. Mine would be a politics of open futurity and technodevelopmental advocacy for provisionally preferred outcomes in the context of more planetary democracy, rather than any politics of an idealized Future with which I identify in the present and mean to bulldoze my way through to with a few like-minded tribe-members.
Futures, like the present, properly belong to everybody, and will reflect the struggles of everybody, including the “theys” outside our various “we’s.” This is not a facile denial that one properly has goals and organizes to facilitate them, but an insistence that a democratizing technodevelopmental politics will fight above all for open futures, not “optimal futures” with which a parochial subculture presently identifies.
Before I get accused of quibbling here, let me just say that politically what I want is what Jamais Cascio describes as an Open Future, above and beyond any particular technodevelopmental outcomes I might presently be advocating for as safer, more fair, more emancipatory, and so on. Sub(cult)ural futurologies, like Superlative Technocentrisms, tend to substitute highly linear and monological technodevelopmental trajectories and targets for what more technoprogressive perspectives recognize instead as the ongoing and finally unpredictable technodevelopmental social struggle of ineradicably diverse stakeholders to distribute the costs, risks, and benefits of technoscientific change as fairly as they can by their lights.
Enormous numbers of practical implications flow from sub(cult)ural futurisms, by the way: special vulnerabilities to hype, tendencies to naive technological determinism, reductionisms and other oversimplifications of developmental dynamisms, disdain for developmental aspirations alien to one's own, and so on.
And there is also, of course, the unfortunate tendency of those who identify with particular futures rather than provisionally advocating for technodevelopmental outcomes to confuse disagreement with defamation.
In his initial response Michael exhorts me to attack ideas and not people -- without realizing that his strong expressed identification with particular futurological scenarios makes it well nigh inevitable that attacking certain ideas will always be perceived as attacks on the people who claim a sub(cult)ural identity organized by their shared investment in such futurological idealizations.
Now, look, almost everybody gets a bit defensive in argumentative contexts, me included, and my obvious personal enjoyment of acerbic back-and-forth is more than usually provocative of this -- but I do think Sub(cult)ural Futurisms structurally exacerbate this problem beyond all bounds.
Michael continues: The superlativity aspect isn’t symptomatic of whatever crazy pathology you try to project upon the advocates of these technologies.
Well, first, in some partisans for Superlativity I am quite sure that personal pathology is in play. Sorry, but I do think this is so and I do think it matters. More to the point, I think the Sub(cult)ural modes of political organizing to achieve ends within the complex dynamism of technodevelopmental social struggle will attract certain pathologies and exacerbate them. Superlative rhetoric risks doing the same, not only symptomizing but organizing and facilitating undercritical, hyperbolized, linearized formulations of our technodevelopmental terrain. Ongoing and upcoming technodevelopmental transformation is and looks to continue to be deep, sweeping, intense, radical, unpredictable, deranging of custom and assumption, and easily as threatening as it is promising. Superlative Technology Discourses and the organizations that traffic in them are saturated with apocalyptic and transcendental imagery, unearned certainties and unexaminable pieties, littered with gurus and would-be priests, exactly as one would expect and exactly as endless critics have and will continue to point out. You can pout and stamp your feet about it or you can try instead to better understand why radical technodevelopmental discourse is likely to activate irrational passions and actually seek to address these problems (hint: identity politics may well be the worst imaginable mode of political organization possible under such circumstances in my view).
Second, I still don’t think Michael and many others completely grasp the nature of discourse-analysis and rhetorical critique. No doubt you all understand well how propositions (stated and implied) exhibit relations of logical entailment that can be analyzed to better understand the workings, strengths, and limitations of an argument.
But there are many other dimensions along which discourse is susceptible of useful analysis. The overabundant majority of discursive formulations make constant recourse to figures (metaphors and the like) to render their abstractness more concrete, and there is a kind of entailment that obtains from the way we know the furniture of the world operates when it becomes part of the figurative picture an argument paints. When MLK calls justice a river, for example, it is because we know how rivers behave that we know he thinks justice is a powerful force, possibly irresistible, that it is natural, that it may seem violent, that it can be dammed, diverted, and obstructed, and so on -- but not all of these analogies may obtain, and we often need to disentangle the conclusions one’s clarifying figures have earned from the unearned ones. Sometimes we realize that claims or even different metaphors appear that contradict this picture, and recognizing these differences often connects us to deeper perplexities or problems in the argument itself.
There are also implications that can be disinterred from the etymological examination of a discourse’s definitive or recurring terms, or from an argument’s citation of conventional topoi (these are genres of topical debate the give-and-take within which can sometimes seem as ritualized as a minuet if you know what to look for), or from familiar framings of certain ideas that might bear a family resemblance to others, or from habits of association one can discern through acquainting oneself with idiolectal and dialectal idiosyncrasies, jargon, habitual citations, customary associations, ritualized subcultural signaling, and so on.
The intuitive force of certain ideas, concepts, formulations will derive as much from these figurative and citational practices as from logic, and when I try to discern these sorts of relations it is because I am trying to understand what makes a discourse tick, what makes it compelling to some (even if it remains, for precisely the same reasons, alienating to most), trying to locate its vulnerabilities the better to attack it when it is mobilized in the service of outcomes I disapprove of or the better to shore it up when it is mobilized in the service of outcomes I approve of. It’s as simple as that.
Third, it is simply the case that what we say or do means more in the world than we intend it to (just as it is also true that we rarely fully grasp the scope of our own intentions anyway -- as evidenced by the fact that we will often retroactively assign to past conduct or utterances a different intention than the one we would have honestly reported at the time we initiated it), since actions always have unintended consequences and since utterances depend for their force on contexts that are never completely understood. Constantly taking offense at my so-called attribution to them of malign or pathological intentions whenever I try to understand the structural, logical, figurative, etymological, topological, tropological, citational, performative entailments of their discourse could not be more beside the point in most cases.
Michael: For instance, nanofactories would benefit greatly from scaling laws, allowing them to have greater throughput than macroscale factories. I didn’t make up the fact that scaling laws are extremely beneficial to manufacturing because of “worries about finitude, mortality, control, the force of chance in human lives, the demands of diversity, and so on.” And you’re hinting that I and others do, which is ludicrous.
Michael didn’t make up scaling laws, but the significance he attributes to his application of them to the very particular idealized outcome with which he identifies (by his own admission) when he talks about “nanotechnology” is a choice that is articulated by factors scarcely given in the scaling laws themselves. Rest assured, I am glad to hear that Michael personally is not anxious about human finitude or mortality, developmental disruptions of self-control, the demands of chance and diversity (even if such anxieties are rather extraordinarily widespread in my view), and that the irrational passions, the fears and fantasies of agency, that are often so activated by technology discourse in light of such anxieties and which seem almost embarrassingly conspicuous in Superlative Technological variations are not a factor in play in his own life or rhetoric. He will no doubt be much the happier for it. If one can believe his protestations to such Olympian detachment, that is to say.
But to protest as he does that it is ludicrous for me to discern the trace of these anxieties in the discourse more generally is, to put it lightly, a hard sell. However, to reduce my discourse analyses to ad hominem attacks or armchair psychologizing, as he also does, is probably a more fruitful rhetorical avenue -- even if it is an utterly wrongheaded and superficial move, it is likely to seem plausible, especially to True Believers already sympathetic to his outlook.
Michael: If molecular manufacturing is possible, it could be used to make a solid block of diamond 10 meters tall out of nothing but acetylene feedstock and solar energy.
Now, the above statement can be considered a technological statement, or it can be considered a science fictional BS statement based on some special psychological obsession I have with extremely large diamonds. A smart person looks at it as a technological claim, not a psychological obsession.
This is an awfully disheartening response from Michael, I must say. First of all, there are surely indefinitely many projected outcomes that, should they come to pass according to some particular scenario, would have practical implications compatible with our current understanding of the laws of physics. But it will not be these laws of physics that cause just that one projected outcome to be the one that captures one's attention, or the popular imagination, or becomes a focus of collective dread or desire. Those factors will indeed be psychological, cultural, social, and political as much as anything else. I try to talk about these questions in a way that is actually sensitive to that reality.
Michael: But you seem to look at many technological claims about the feasibility or projected capabilities of future technologies as science fictional BS statements when they are actually about certain physical properties in the projection.
Not all science fiction is BS. I am an avid reader of it, and an avid reader of much of the specific sf that preoccupies the transhumanist, singularitarian, and Superlative imaginaries in fact -- a temperamental contiguity that makes the Superlative and Sub(cult)ural Technocentrics a long-abiding source of fascination to me, from a psychological and ethnographic standpoint at the very least.
It pays to remember that while sf sometimes likes to bill itself as an extrapolative or projective literary genre, there is a widely held alternate view that the force and meaning of much of the greatest sf literature derives from the way it functions as a kind of allegorical commentary on contemporary problems.
I think an enormous amount of presumably non-fictional futurological scenario making solicits the same kinds of identificatory and disidentificatory energies, functions as a kind of surrogate critique of the present in the form of a futural projection. I think contemporary cultural anxieties and political quandaries are regularly displaced onto the safer ground of projected futures.
This will be palpable to transhumanist-types in the example of fear-mongering bioconservative discourse about clone armies, designer super babies, chimeras, and the like, which is often as much about scarcely disguised reactionary hysteria over the political demands of younger generations, over the threat to their position and comfortable attitudes posed by racial and sexual diversity and so on, as it is about sensible regulatory quandaries. Superlative discourses play out similar anxieties, certainly, but these will be harder to see clearly for those who are on the inside.
Michael castigates: If you want to argue against the feasibility of a technology, discuss the technology.
But as I have said over and over again, the feasibility of particular technodevelopmental scenarios isn’t the only or even always the primary thing that interests me. Nor do I honestly think that feasibility is really always at the center of their attention when technocentrics engage in what they themselves would think of as discussions of such “feasibility.” Stealthed beneath the surface of discussions of engineering and feasibility, all sorts of parochial preoccupations typically get aired symptomatically, and all sorts of normative assumptions get sedimented. What I am interested in above all else is what makes certain logically possible technodevelopmental outcomes personally and collectively compelling, and the connection of these compelling and provocative discourses to current progressive and technoprogressive politics and policy.
Again, my first aspiration is to facilitate open futures and the democratization of technodevelopmental social struggle. I do not identify even with those scenarios and outcomes that seem to me at present the most emancipatory ones, nor do I have any interest in or pin practical hopes on tribal identification with others who might happen to agree with my assessments of which scenarios and outcomes are the most emancipatory ones for now. I think such identification incubates technocratic elitism in people of the left and the right, and that it endorses assumptions conducive to the corporate-militarist status quo (a paradoxical entailment that leads me sometimes to speak of retro-futurism).
The fact is that consensual modification medical techniques, nanoscale manipulation, sophisticated malware, decentralizing and renewable energy and service provision, p2p networked formations are all part of the technodevelopmental terrain that preoccupies my own attention and in which I invest many of my own provisional emancipatory hopes. These preoccupations are very close to many of the ones that get pointlessly (and yes sometimes pathologically) transcendentalized, hyperbolized, oversimplified in Superlative Technocentrisms in my view. This frustrating but tantalizing proximity is, no doubt, the source of much of my fascination with these discourses and their advocates’ ongoing interest in mine. I do think that the Superlative derangement of otherwise mainstreamable technoprogressive formulations is more than interestingly symptomatic or wrongheaded, though, but actively pernicious inasmuch as it substitutes less-democratizing for more-democratizing formulations and -- again, I’ll say it, even knowing how you dislike this part of my critique -- activates irrational passions at precisely the worst possible time, a time when democracy without technology will fail, and technology without democracy will destroy the world.
Michael continues: Don’t put its advocates down on the couch and try (poorly) to psychoanalyze them. I can imagine you in the days before nuclear-tipped ballistic missiles, saying that those who believe such a technology is possible are suffering from phallic fantasies derived from a hyper-masculine military-aggressive complex.
I am hoping Michael can see by now why this is a flabbergastingly facile and inapt understanding of my critique.
Michael: By looking at everything through the lens of cultural theory and rhetoric, you’re missing huge pieces of the puzzle.
I agree with that, of course, which is why I try not to do what Michael is accusing me of here. But it remains the case that my interests are my interests, my training is my training, and I make no apologies for the fact that my analyses focus on different dimensions of these issues in a different language than is customary for your readers. I am quite sure that I have a contribution to make on my own terms -- a contribution in areas that are enormously neglected among you -- but I am also content that many who read me will disagree with that, and some even decide that my different style and focus betokens my idiocy, insanity, superficiality, or fraudulence. This sort of thing goes with the territory, surely?
Michael: I do think the cultural theory and rhetoric lenses have a place, but to apply them indiscriminately to every piece of text that enters your field of vision leads inevitably to false inferences.
Well, I don’t do this, and my contribution is my contribution. But the general point is a worthy one, and it might not hurt for all of us, me included, to take Michael’s wise words here to heart.
Michael writes: I am willing and able to look at technological changes through a social and cultural lens, but also an economics lens, an ethics lens, a scientific lens, a military lens, and more.
Michael, to be blunt, I don’t think you are exhibiting much self-awareness here.
He asserts: I am not a technological determinist
Even though this is something he said earlier in this very response I am currently addressing: The superlativity aspect isn’t symptomatic of whatever crazy pathology you try to project upon the advocates of these technologies. They fall out of the specs of the technologies themselves.
I’m sorry, but there isn’t a dime’s worth of difference between this attitude and technological determinism, which provides the context in which one must read his own continuation of his protestation: I am not a technological determinist, although I do believe that inherent characteristics of a technology can strongly influence the way it is treated.
Can? Sure, who’s denying that?
The social, cultural, and political forces that articulate technoscientific change (funding, invention, testing, publication, regulation, education, marketing, appropriation, distribution and so on) don’t enable technique to trump physics. But technological determinism is defined by the confusion of physics with, or even the insistent foregrounding of physics over, these social, cultural, and political forces in one’s accounting of the vicissitudes of technoscientific change.
Michael continues, with panache: Since I look at these changes through many different lenses, I am multi-faceted where you are being narrow.
Whatever gets you through the night, guy.
Michael: You incorrectly identify discussions of highly advanced technologies with libertarian egoism, which to me, is the silliest thing in the world.
I correlate Sub(cult)ural and Superlative variations of Technology Discourse with the rhetorical idiosyncrasies of neoliberal and American market libertarian rhetoric, and I correlate the justifications for elitism, the naturalization of market and corporatist assumptions, the shared preoccupations with security, terror, and disaster, the shared disdain for popular input, and any number of other features of Superlativity with the rhetoric through which the neoliberal project continues to market and justify itself.
Sorry if some folks think that is silly. I’m quite sure I am right.
Michael: Hundreds of scientists and futurists of all political orientations have discussed MNT and human-level AI.
No shit, Sherlock. But irrelevant to my point.
On a different note, Michael says: Thanks for clarifying your views on uploading. If you are a functionalist, then you must believe intelligence can be characterized as a series of data flows.
I said I was a materialist about mind and intelligence, and I daresay there are some characterizations of functionalism that I might subscribe to, but his definition here isn’t one I would agree to at all.
Information is always instantiated on a material carrier, the contents of “data-flows” are non-negligibly constituted by their actual materialization, and so one can easily grant that mind is material and not supernatural, and that intelligence may well be incarnated on a plurality of possible material substrates, while remaining skeptical that this materialized intelligence is translatable to a different one. I have no investment in being called a “functionalist” or not on whatever construal Michael has seized on, and I don’t think I need a position on that to explain my skepticism about the notion of mind uploading. Human intelligence is embodied, and is likely to be radically impoverished or utterly distorted by disembodiment -- and my point is a materialist and naturalist one, no souls or miracles required.
Michael: As for the nanotechnology issue, only a minority of advocates want to see it as being used to sweep away politics and democracy.
It is commonplace to invest the projected arrival of nanotechnology in something like its particular Drexlerian variation with the arrival of an abundance that will circumvent the impasse of stakeholder diversity and provide a technical fix for problems that look to me ineradicably social, cultural, and political. The point isn’t to call people closeted totalitarians, but to delineate structural entailments. Perfectly nice, civic-minded, concerned people can nonetheless feel the appealing tug of anti-democratizing discourses without becoming conscious anti-democrats. If only technodevelopmental abjection were so simple as Michael seems to think I think it is.
Michael continues: This may have been a more predominant view in the mid-90s, where libertarianism was more in vogue due to the dot com boom, but now I think 90% of nanotechnology advocates are way more level-headed.
The ethnographic point about 90s technophilia is certainly true, although I think Michael wildly overestimates the percentage of explicitly libertopian technophiles from that era who have learned any lessons at all from the last disastrous near-decade of corporate-militarist privatizations, deregulations, and "free trade" pieties. Again, technocratic elitism, reductionism, naturalization of market and corporatist assumptions, and so on yield anti-democratizing effects and are especially helpful to the neoliberal project. That obviously doesn’t mean that all the people who maintain the one discourse explicitly intend these effects (although I think more of them do than Michael may feel comfortable admitting), since I think too few people who maintain these discourses have given much thought to the sorts of connections I am talking about.
Michael wags a finger: But I believe you will continue to describe the situation as if the majority sees nanotech in the way that you fear, because it makes for interesting writing.
Thanks, Doc, I hope you’ll forgive me if I don’t feel ready to send you a check to compensate your diagnostic efforts just yet.
Michael: If you’re skeptical about AI, I understand. But because you are skeptical, your criticisms of discussions among people who accept the premise that AI is possible in the near term are necessarily biased.
It’s certainly convenient if you’re lucky enough not to be an AI skeptic, then.
Michael: But if you shared their premise, would what they are talking about, hard takeoffs from human-equivalent AI to superintelligence in days or weeks, for instance, be all that implausible?
I think if this is something you lose sleep over you have wildly skewed priorities.
Michael: You probably can’t answer that question, because you don’t, in fact, believe human-equivalent AI is possible in less than a century or something.
Intelligence is a short-hand for a constellation of capacities. There are obviously devices that already surpass normative human performance in some of the dimensions regularly subsumed under the term intelligence. Collective intelligence, and especially peer-to-peer networked organizing, expressivity, and problem-solving, may well constitute a mode of artificial superintelligence on some understandings, and it is one of my own preoccupations. If one is talking about dangerous malware, including replicative and recursive malware, I agree this is enormously important -- already, here and now, and not in a way that is particularly illuminated by projections or idealizations. If you are talking about entitative post-biological (super)intelligence with intelligible intentions, malign or not, and deserving of rights and such, well, I think that is not even on the radar screen, and I think that a fixation on it as a presumably urgent political matter is, to put it as kindly as possible, a skewed preoccupation.
Michael: Maybe these technologies are inherently anti-democratizing themselves, because they make possible the consolidation of godlike power into single entities?
I don’t think technologies "themselves" are ever inherently anti-democratizing or emancipatory. It is the way we organize through and in the face of them, it is in our distribution of the costs, risks, and benefits associated with them that they become anti-democratizing or emancipatory forces.
Michael's point about instrumental asymmetry is important, though -- rather in the way that it is important to address the ways that mediation can facilitate killing through its apparent abstraction -- but I still think the focus should not be “technological” per se; it should instead inspire an awareness that we must lessen the authority of elites and flatten the hierarchical authoritarian structures we have tolerated too long, lest they avail themselves opportunistically of such unprecedented technical capacities in the service of domination, exploitation, and confiscatory wealth concentration.
Michael: If these technologies are inherently anti-democratizing, then it makes it all the more difficult to keep the world as democratic as possible in spite of that.
Things are looking mighty bleak for democracy, then. Don’t everybody cry all at once, I guess. Look, I simply disagree. Democracy is never easy but it is certainly possible and desirable, and emancipation is possible and desirable, and I simply won’t eagerly or “reluctantly” concede its demise in the face of technodevelopmental “inevitabilities.”
Michael: It’s also possible, in some circumstances, that the majority opinion is wrong, i.e., democracy breaks down and makes things worse.
Democracy doesn’t mean mob rule, and it doesn’t require one believe that majorities are always right (which is one of the reasons most notionally democratic societies also have rights and guarantees that are institutionally safeguarded from easy contestation). Democracy is just the idea that people should have a say in the public decisions that affect them. The stakeholders with whom we share the world testify to a diversity of aspirations and histories and perspectives and one needs to persuade them one is right where we differ disputatiously from them or compensate them in ways that they are reasonably happy with on their terms where one manages provisionally to prevail over them.
You can go on about how envious, or irrational, or unserious you think everybody is if that’s what happens next in this particular conversational dance-floor turn, but I have to say I think so-called elites are no less susceptible to irrationality and evil than popular majorities are, and I’ll take my chances with democratic contestation, thank you very much. Whatever my annoyance with some popular attitudes, I do trust people to testify to their actual interests better than profit-mongers or priests will (whether of the conventional or scientistic varieties -- and I say this as a staunch defender of consensus science), and I trust popular deliberation over elite imposition to facilitate the most robust, most sensible, most fair, most representative public decisions.
In something of a surprise move, Michael ends thus: For instance, the book “Silent Spring” made a huge deal of DDT, which led to its being banned in hundreds of countries worldwide, subsequently leading to millions of deaths from malaria due to the lack of cheap mosquito repellent. To say that democracy (majority opinion) is correct at all times and under all circumstances is naive.
Uh, okay. Well, Rachel Carson is a hero in my book, and she isn’t responsible for the deaths of millions of people due to malaria (as if only the delirious application of toxic DDT can save the interminably overexploited “underdeveloped” regions of the world), and Michael needs to understand better that he simply can’t say idiotic things like this on some occasions and then whine to me later (as he has regularly done, check the archive) about what a progressive democrat he actually is and how his feelings have been hurt by some impersonal discursive analysis I've posted here and there showing that quite a lot of public technophilia gets spouted by reactionaries in the service of reactionary causes. This is an awful note to end an otherwise congenial exchange, but, honestly, what odd things these Superlative and Sub(cult)ural Technocentrics do sometimes say.