Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Sunday, November 11, 2007
Is Rationality Always Instrumental?
Subcultures notoriously like to cast themselves in the role of exemplars of rationality and outsiders as cast out by their "irrationality" -- as the murderous machineries of racism daily attest -- and technocentric subcultures (engineers, coders, geeks, whatever) are surely not much less prone to this sort of thing than anybody else is, as perhaps Snow's "Two Cultures" reminds us best.
Given the amount of guff I receive from some of my commenters about my fashionable nonsensicality, my effeteness, eliteness, and aestheteness, and so on, it is weird to admit that I actually think of myself as something of a hokey defender of rationality. And so, when I teach critical thinking, close reading, and argument to undergraduates I actually have in mind that I am helping my students become better democratic citizens, helping them to protect themselves against the marketing misinformation of corporations and politicians, providing them with tools to help them adjudicate difficult disputes, and so on.
But it does seem to me that people practice rationality in multiple dimensions or modes in their lives -- in my own account instrumental, moral, aesthetic, ethical, and political modes (and I'm sure even more multivalent accounts are available) -- and that no one mode is more dispensable than the others, no one mode more supreme than the others, except on a case by case basis. On this account, rationality consists not only of affirming beliefs only when the conditions for warranted assertability for the relevant mode are met, but also recognizing just which mode of rationality is the apt one given one's context.
When some technocentrics seem to decide in advance that science is the one and only paradigmatic practice of rationality, and that a truly rational person will manage to shoehorn every proper belief-ascription into something that at any rate superficially passes for proper scientific form, it becomes enormously difficult to direct their attention to any detail at all that can't be reduced to conventionally instrumental terms (as the sacrifices we make for the legibility of belonging often cannot be, as an idiosyncratic assertion that a thing is beautiful often cannot be, as the faith that we will risk disadvantage in an effort at reconciliation with those to whom we seem irreconcilable for now often cannot be, and so on), and one finds oneself accused of emotionalism, irrationality, relativism, and who knows what else when one makes the effort at all.
Of course, from my own perspective, it is exactly as irrational and exactly as destructive to the proper practice and status of science to try to tear and stretch it to accommodate dimensions of human experience to which it is not well suited, as it would be to deny its indispensability in matters of prediction and control. What is curious to me is that those who would make of science a kind of godly summit, end-all, be-all (with themselves as its Priestly mouthpieces more often than not) are precisely the ones who claim their clumsy hyperbole amounts to a Championing of Science, while even technoscientifically literate advocates for a more modest accounting of science's role in the practical fabric of rationality and sociability are often pilloried by such Champions for their irrationality.
What is key to understand here is that this does not look to me like an equivalent exchange: It is not just that the Scientist decries the Humanist's irrationality (think of logical positivism pooh-poohing the lack of philosophical "progress" -- as if this category necessarily applies to the project of philosophy as a valuable enterprise -- or distinguishing the content of fact from the "emotivism" of value), and then the Humanist turns about and decries the Scientist's irrationality for good measure (pointing out that science lacks the conceptual resources to answer the questions should an experiment be done? should an outcome be pursued among others? and so on).
The reason this is not so equivalent an exchange of charges as it might initially seem is that scientific rationality is easily affirmed and championed by those who might nonetheless affirm and champion other available modes of rationality as more apt to our circumstances. If I am right to say that we rationally affirm instrumental, moral, aesthetic, ethical, and political beliefs; and if I am right to say that these beliefs are warranted according to different practices and yield different edifications; then there is a great difference between the position of one who would deny the existence of all but one of these modes, or denigrate all but one of these modes, or subsume all these modes under just one of them, in the name of rationality, and the position of one who would affirm the different value and dignity of them all in their proper measure in the name of rationality, including the mode valorized by the reductionist position.
The one who demands exclusion and reduction in the name of purity and optimality is making a radically different sort of argument than the one who pleads for inclusion and expansion in the name of diversity and consent. It is profoundly misleading to equate these two positions, whatever their superficial symmetry.
Let me be clear: I am not just claiming that there is a place for morals, aesthetics, ethics, and politics in a world that properly respects scientific rationality; I am saying that all of these are modes of rationality -- if the warrants that differently govern the assertability of moral or aesthetic beliefs are not matters of rationality (if they are matters of, say, propriety, instead), then exactly the same thing applies in my view to the warrants that govern assertability in matters of instrumental belief. And let me be even clearer still: I do absolutely agree that the criteria for warranted assertability hacked out over centuries of scientific practice -- falsifiability, testing, publication, coherence, saving the phenomena, elegance, and so on -- do indeed provide a marvelous, incomparable institutional recourse for acquiring good beliefs concerning matters of prediction and control.
Looking to the Scientist to provide guidance in matters for which she is no more qualified than anybody else, one citizen among citizens, one peer among peers, one organism among organisms, has nothing to do with science. Going from there to invest the idealized figure of the Scientist or of his Works with hyperbolic or even transcendental significances has nothing to do with science either.
Saturday, November 10, 2007
On the Posthuman
Few can have failed to notice that, historically speaking, the so-called universal accomplishments celebrated under the banner of humanism from the Renaissance to the present day have rarely been enjoyed by more than a privileged group of men, and occasionally a few women, within strictly limited socioeconomic positions.
And even at its most capacious and inclusive, it is hard to shake the worry that any purely humanist, and hence anthropocentric, and hence human-racist grounding of ethics will likely stand perplexed in the face of the demand of Great Apes, dolphins, and other nonhuman animals (let alone trees, or for that matter biospheres) for some measure of standing and respect.
Honestly, the celebrated category of "humanity" seems rarely to have provided much protective cover for even fully sane, mature, "exemplary" human beings caught up in the sometimes genocidal technoscientific dislocations of the modern era.
A number of "post-humanist" discourses have emerged to register these dissatisfactions with the limitations of the traditional humanist project.
It is important to recognize that the "post-human" does not have to conjure up the frightening or tragic spectacle of a posthumous humanity, an end to the best aspirations of human civilization, or even a repudiation of humanism itself, so much as a new effort emerging out of humanism, a moving on from humanism as a point of departure, a demanding of something new from humanism, perhaps a demand that humanism actually live up to its ethical and democratizing self-image for once.
To be sure, the “post-human” is not one kind of imaginary or idealized prostheticized person of the future, soliciting our identification in the present and facilitating our dis-identification with our peers. Nor is “post-humanism” a singular response to a particular current of prostheticized personhood -- whether involving digital network immersion, peer-to-peer Netroots democracy, post-Pill feminism, transsexual queerness, non-normalized post-"disabled" prosthetic different-enablements, open source biopunks and copyfighters, or what have you -- nor certainly is it a matter properly of the more fantastic identifications with robots, or eugenicized superhumans, or artificial intelligences, or aliens that seem to come up so often when “post-humanism” is discussed as a topic in hyperbolic popular futurism or sub(cult)ural technophilic discourses.
"Post-humanism," properly so-called, names the ethical encounters of humanism with itself, the confrontations of a universalism with its historical and practical limits and contradictions. And the ethical visions that emerge either out of ("post" in the sense of "after") or in resistance to ("post" in the sense of "over") that confrontation are themselves ethical terms.
This post was adapted from material excerpted from two longer pieces, one of them my Technoprogressivisms essay, the other Posthuman Terrains, in answer to a request from Vladimir de Thezier for a brief statement on Posthumanism as a keyword in contemporary critical theory.
Wednesday, November 07, 2007
Depoliticized Technology, Repoliticizing Technology
"Technology" is a verb masquerading as a noun.
Behind every conjuration of a technological thing, always there are the vicissitudes of complex, fraught, unpredictable processes: of invention, of investment, of research, of testing, of publication, of education, of marketing, of application, of distribution, of appropriation.
"Technology" is always a shorthand, and it is crucial to translate that shorthand back into longhand before we affirm or resist particular claims or aspirations that depend on this term for their force. "Technology" as an idea has come to be radically depoliticized, so much so that even when people sometimes speak of politics and technology they will speak in terms of "the politics surrounding technology" or of a dangerous "politicizing of technology" as if politics were an invasive alien organism impinging on something that is inherently non-political.
When we use the word "technology" we need to speak of it and to mean by it something like the collaborative recourse to technique in the effort to solve shared problems and facilitate shared aspirations.
Too often people use the word "technology" to mean instead something like the use of implements by some (sometimes a small minority) to disregard, control, marginalize, oppress, exploit others (sometimes a large majority). This is what people often really mean when they claim to be "anti-technology" in a general way.
Too often people use the word "technology" to mean instead something like the use of technique to circumvent the difficult, contentious, time-consuming process of doing justice to the diversity of needs, perspectives, and aspirations of the diversity of stakeholders to shared concerns (through elite decision making by nonaccountable professionals, or through the mass-mediated manufacture of consent, and so on). People who "oppose technology" in a general way often mean by this opposition to decry this kind of elitism or conservatism. But it is interesting to note that this is also what many people often really seem to mean when they claim to be "pro-technology" in a general way, usually because they think certain questions of general concern are too complicated or happening too quickly to be addressed by all of their actual stakeholders, or sometimes simply because they are temperamentally averse to stakeholder politics and seek out what they imagine to be less contentious spheres governed by "facts" rather than "values."
In both of these cases, "technology" marks an effort at depoliticization: whether outright anti-political or assertively apolitical, "technology" discussions have come to function too often to direct our attention to the particularity of technique while removing the complexity of dissent from consideration, function too often to focus on the generality of promises while distracting us from the specificity of consequences, wider, longer-term, unintended impacts, or from actual distributions of cost, risk, and benefit.
We need to confront this depoliticization through the discourse of "technology" with an insistent repoliticization of "technology."
While simply repoliticizing "technology" is not enough to achieve its desired democratization, you can be sure that a depoliticized "technology" will never be a democratic one. Democracy is the idea that people should have a say in the public decisions that affect them, and depoliticization always functions to remove decisions made by some from contestation by the many who have a stake in them.
Behind every conjuration of a technological thing, remember, are the vicissitudes of complex, fraught, unpredictable processes: And always decisions, always decisions are being made. Who are the decision makers? What considerations preoccupy them? Who is impacted by these decisions?
Never permit the discourse of "technology" to lodge itself in the salesman's pitch, in the fetishized delineation of technical capacities or the promissory evocation of desired outcomes. Citizens do not settle for the status of customers when there are decisions being made on matters of concern that affect them.
Once again, when we use the word "technology" we really need to mean by the term something like collaborative recourse to technique in the effort to solve shared problems and facilitate shared aspirations.
Sunday, November 04, 2007
"Technological Immortalism" As Superlativity Discourse
The two strands of Superlative Technology Discourse that have preoccupied my attention on Amor Mundi over the last few weeks have been connected primarily with claims about the Drexlerian vision of nanotechnology and the Singularitarian vision of Strong AI. These two strands amount in more sociocultural terms to visions of the reductively "technological" accomplishment of, on the one hand, a post-political superabundance and, on the other hand, a post-historical superintelligence.
The third strand of Superlativity Discourse that repays such analysis in my view is connected with the claims of so-called Technological Immortalism, which involve the vision of the reductively "technological" accomplishment of post-human superlongevity -- via unprecedented, as yet imaginary, radically efficacious genetic, prosthetic, and cognitive therapies or, more "radically" still, via the conceptually confused notion and even more imaginary "technique" of a "translation" of embodied selves into presumably eternal informational forms.
In a recent post over on Existence Is Wonderful, Friend of Blog Anne Corwin makes the following observations:
These observations are, of course, completely banal, and Corwin means for them to be. These are all completely mainstream attitudes that derive from a basic everyday commitment to the notion that healthcare is desirable in general, that longer, healthier lives are desirable in general, that relieving unnecessary suffering and supporting capacity where possible are desirable in general.
Corwin is restating these commonplace intuitions because she is making a political argument here for a less conventional aspiration, and wants to rely on the mainstream force of these familiar attitudes to lend comparable force to an unfamiliar one: "we already have longevity medicine to some extent." She expands the point here: "[W]hile some people squirm and balk at the notion of 'radical life extension,' practically nobody thinks that it would be a bad thing to have effective treatments for heart disease, Alzheimer's, etc."
It seems to me that Corwin's point that most healthcare is already a matter of "life extension" (if one really wants to apply a neologism where none is needed) functions as the key intervention that punctures the Technological Immortalist variation of Superlative Technology Discourse. It is, in fact, precisely analogous to the sort of intervention that punctures the pretensions of Superlativity's other variations as well:
The third strand of Superlativity Discourse that repays such analysis in my view is connected with the claims of so-called Technological Immortalism, which involve the vision of the reductively "technological" accomplishment of post-human superlongevity -- via unprecedented, as yet imaginary, radically efficacious genetic, prosthetic, and cognitive therapies or, more "radically" still, via the conceptually confused notion and even more imaginary "technique" of a "translation" of embodied selves into presumably eternal informational forms.
In a recent post over on Existence Is Wonderful, Friend of Blog Anne Corwin makes the following observations:
If a person has hypertension and manages to get it properly treated, it is quite likely that he or she will remain in better health longer than otherwise, because his or her body will not be experiencing as much in the way of accumulated damage.
If testing for (and treating) hypertension is basic health care for people in middle-age and beyond, there should be nothing too difficult about imagining eventually testing for (and treating) issues pertaining to cancer vulnerability, critical cell loss and atrophy, mitochondrial mutation, etc.
While the ongoing quest to achieve better health care for all persons is, and will ever remain, broadly applied and global in scope, it is well worth acknowledging that people get old everywhere in the world. This makes the drive to develop effective basic health care for older people of universal importance. Hypertension, cancer, atherosclerosis, etc., do not discriminate on the basis of race, creed, national origin, or economic status. And if we're going to consider hypertension treatment part of basic health care, why not other forms of maintenance care?
These observations are, of course, completely banal, and Corwin means for them to be. These are all completely mainstream attitudes that derive from a basic everyday commitment to the notion that healthcare is desirable in general, that longer, healthier lives are desirable in general, that relieving unnecessary suffering and supporting capacity where possible are desirable in general.
Corwin is restating these commonplace intuitions because she is making a political argument here for a less conventional aspiration, and wants to rely on the mainstream force of these familiar attitudes to lend comparable force to an unfamiliar one: "we already have longevity medicine to some extent." She expands the point here: "[W]hile some people squirm and balk at the notion of 'radical life extension,' practically nobody thinks that it would be a bad thing to have effective treatments for heart disease, Alzheimer's, etc."
It seems to me that Corwin's point that most healthcare is already a matter of "life extension" -- if one really wants to apply a neologism where none is needed -- functions as the key intervention that punctures the Technological Immortalist variation of Superlative Technology Discourse. It is, in fact, precisely analogous to the sort of intervention that punctures the pretensions of Superlativity's other variations as well:
For example, Nanosantalogical "advocates" for an idealized technical "Drexlerian" accomplishment of superabundance will bemoan the failure of vision of "luddites" like me who would focus instead on the struggle for universal rights, international labor and welfare standards, subsidizing peer-to-peer formations, opening access to the archive of knowledge to all, and implementing steeply progressive income and property taxes to distribute technodevelopmental benefits, costs, and risks more fairly the better to facilitate the actually possible this-worldly abundance of commonwealth (a focus that is perfectly compatible with a concern with questions of funding useful research and regulating harmful impacts of technological interventions at the nanoscale). What matters to me here is that it is the latter focus that reveals the practical substance that the super-predicated notion of superabundance at once depends on and disavows for its force, a disavowal that in turn enables the super-predicated term to connect up with the far older omni-predicated term of transcendental discourse -- in this case, omnibenevolence, which has always strived to reconcile the fact of (God's) agency with the persistence of evil, a problem that translates under Superlativity into the anti-politics of a desired technical circumvention of the actual diversity of stakeholder aspirations in a finite world -- and so do the deeper work of psychic reassurance and sub(cult)ural cohesion that has always been the task of such pre-democratic discourse.
For another example, Singularitarian "advocates" for an idealized technical accomplishment of superintelligence via Strong Artificial Intelligence or self-"optimizing" software or human cognitive "enhancement" or what have you will likewise bemoan the failure of vision of "luddites" like me who would focus instead on providing lifelong education and desired retraining for all, encouraging a free and truly independent diverse media and press landscape, securing universal access to information via shortened copyright terms, liberalization of fair use provisions, limiting the propertization of public-funded research, demanding state, corporate, and academic transparency in matters of budgets and research results, subsidizing peer-to-peer formations and practices of peer production, the better to facilitate the actually possible this-worldly collaboration and contestation of multicultural commons (a focus that is perfectly compatible with a concern with questions of funding useful research and regulating harmful impacts of therapeutic modifications of mood and memory, monitoring and regulating automated weapons systems, asymmetrical surveillance and panoptic sorts, networked malware, infowar utilities, and so on). 
What matters to me here is that it is the latter focus that reveals the practical substance that the super-predicated notion of superintelligence at once depends on and disavows for its force, a disavowal that in turn enables the super-predicated term to connect up with the far older omni-predicated term of transcendental discourse -- in this case, omniscience, duly domesticated into an instrumental rationality sufficiently comparable and precedented to seem familiar but in fact invested with sufficient scope, speed, and efficacy to promise and threaten the incomparable, the unprecedented, the unspecifiable in the cadences of Priestly authority -- and so do the deeper work of psychic reassurance and sub(cult)ural cohesion that has always been the task of such pre-democratic discourse.
As I commented to Corwin in the Comments section of her post (a comment with which Anne seemed sympathetic, quite as I expected her to be):
It is important to stress to your readership that the conclusion one draws from this insight is not that somehow it is clarifying to redescribe the treatment of hypertension and such as part of an effort to "defeat aging," but that advocates for research and funding for longevity and rejuvenation medicine (or, heaven help us, "technological immortality") should instead be redescribing most of the things they presently associate with "defeating aging" as, simply, "healthcare" -- very much including the Seven Deadly Things at the heart of the SENS research program, and comparable formulations from research programs to come.
The lesson one should draw from the banal realization that most healthcare is describable as longevity medicine and that, hence, almost everybody on earth supports a kind of longevity medicine is not that everybody therefore is some kind of confused or closeted Technological Immortalist, but that the discourse of Technological Immortalism has commandeered and deranged conventional intuitions about the desirability of healthcare providing longer, healthier lives. Superlativity opportunistically depends on (and as usual disavows) this substantial content in an effort to turn these intuitions to the service of more conventionally transcendental tasks to which they are finally ill-suited: mostly magical thinking and wish-fulfillment fantasies involving the individual acquisition of superhuman capacities and the denial of the fact of human mortality.
These Superlative derangements of healthcare discourse contribute to the pernicious pathologization of urgent technodevelopmental discourse concerning the obscene inequities in the treatment and neglect of already treatable diseases, the provision of basic services, the maintenance of basic infrastructure, sanitation, and nutrition, budgetary priorities in matters of research and development and distribution of promising emerging genetic, prosthetic, and cognitive therapies, sound scientific information about such emerging therapies, questions concerning consensual recourse to medical modification to facilitate or maintain non-normative morphologies and capacities (and the emerging tension between equally progressive intuitions about universal basic healthcare provision devoted to an "optimality" standard that both imposes and protects general standards in the service of the democratic value of equity and a "consensual" standard that risks becoming an alibi for exploitation and neglect but in the service of the democratic value of diversity), and so on.
Now, let me be as clear as possible about my sense of the terrain with which this technodevelopmental discourse is coping in fact. Already, today, the unprecedented susceptibility of organisms to medical intervention has transformed the status of "viability," "therapy," "normality," as stable measures of just when lives can properly be said to begin or to end, or as measures of the proper scope of healthcare practice. Meanwhile, neuroceutical interventions into memory, mood, and motivation (not to mention research into the impact of mass mediation, marketing, propaganda, and surveillance) deeply trouble our received intuitions about what enables and constitutes proper consent in the first place.
Consider the most conservatively therapeutic understanding of the "ultimate goals" or "regulative ideals" of medical science and treatment: Let's say that these would involve a kind of Hayflickian utopia in which everybody on earth enjoys the robust health and intellectual capacity of the healthiest among us today as we presently perceive them, as well as lifespans prolonged for all to the extent of the century or so available only to the luckiest among us so far. It is crucial to grasp that the accomplishment of this still intelligibly "conservative" therapeutic ideal would almost certainly set in motion a trajectory of scientific and technological development that would provoke, at one and the same time, unimaginable perplexities concerning the status of profound biological experiences such as pregnancy, sexual maturation, illness, aging, and death.
In other words, even the most modest provision of basic and decent health care according to the terms and capacities of emerging and proximately upcoming genetic, prosthetic, and cognitive techniques -- and ever more so according to just how universally this basic healthcare is provided -- will transform, quite possibly beyond recognition, what will count as "basic," "decent," and "normal" in the way of our expectations about what bodies properly are and what they are capable of. It is this sort of profound quandary that activates the irrational transcendentalizing passions of Superlativity, the ancient (as old as recorded civilization) hankering after immortality, invulnerability, superpowers, and so on.
I have repeatedly pointed to the pernicious pathologization of technodevelopmental deliberation perpetrated by bioconservative discourses that incessantly and hyperbolically conjure up spectacles of monstrous chimeras, clone armies, commodified super-babies, "perverse" sexual indulgences, imposed prostheticized monoculture, and so on, all as a way of combating modest, very widely desired therapeutic research and access meant to cure diseases, treat avoidable suffering, end unwanted pregnancies, facilitate wanted pregnancies, and support diverse non-normative lifeways. Usually they do so out of a reactionary politics of social conservatism that recognizes the threat to incumbent interests of appropriate and appropriated technologies in the hands of the people. (For examples: read this and this and this and this and this.)
But it seems to me that Superlative discourses offer up precisely analogous hyperbolic spectacles (indeed, sometimes they offer up exactly the same spectacles as the bioconservatives, but in tonalities of desire rather than dread), producing precisely the same pernicious derangements of deliberation, often -- curiously enough given the militant atheism prevalent among the partisans of Superlative Technology Discourse, especially in its Sub(cult)ural Formations -- to satisfy precisely the same sorts of religious aspirations: the consolations of faith in an often tragic universe, the quest for connection with a meaning greater than oneself, reassurance in the face of life's mortality and many betrayals, the ritual bonds of shared identification and dis-identification in moral and interpretative communities of affiliation, and so on.
Friday, November 02, 2007
"I Am Fact Guy"
Oh, those wacky Singularitarians! A fellow named Brian Wang has taken special umbrage at some of the comments I have directed at Superlative and Sub(cult)ural Technocentrics in a long ongoing discussion taking place over at Michael Anissimov's place "Accelerating Future." For the substance of the actual critique that has so exercised poor Brian I recommend people nibble at the texts available in my Superlative Summary. For those who already have the substance down, but find themselves still craving something more, I give you, ladies and gentlemen, Brian Wang:
I know many things are incomprehensible to you. You are not very smart.
Why don’t you jettison your focus on high progressive taxes, guaranteed income and socialized medicine Dale? It is too long term and even more unlikely to happen in the United States than transhumanism.
Just so we're all on the same page now, by "transhumanism" here, I assume Brian means to indicate his faith in the looming arrival within the lifetimes of many now living of superlative technologies delivering techno-utopian free market superabundance, medical or even "digitized" Immortality, and a Singularitarian "End of History" via the appearance of an artificial superintelligent Robot God. I am not joking, I think he probably really literally does believe something along these lines. And it is this set of Superlative outcomes that he is assessing as more "likely" than the re-instatement of more progressive taxes and the implementation of universal healthcare (or "taxes as slavery" and "socialized medicine" for those among you of the reactionary wingnut set who like to rub elbows among the transhumanists).
Your relabeled extreme socialism makes you sound crazy.
Did you enjoy it when I freaked you out ? I know you did not. You told me yourself. Yet you did not learn anything from it. You are not a villain, but you are a pathetic immoral worm. A moral person who knows that it is bad for them to do something would stop doing it even if they enjoyed doing it. I enjoy rubbing your face in it. The reason it it is moral for me to do so is because you are not a moral person as you have shown and stated. I get to bully the bully. I own you.
I am fact guy who is immune to your manipulation. So if you want to have a survey of your explicitly offered reasons then you can dig them up and present them yourself.
I like the part where Brian suggests my advocacy of progressive taxes makes me sound crazy and then follows with a long deep dose of sane, Brian Wang style.
In a past exchange in which Brian was smugly explaining to me why nuclear proliferation was an issue overblown by lefty literary types who lack his own command of the relevant facts of the matter, I found myself so disgusted that I told him he really ought to be posting elsewhere than here. Given my description of Amor Mundi as "scattered technoprogressive speculations from a social democratic secular feminist vegetarian post-natural green anti-militarist cyborg queer academic" it is rather difficult to imagine that such a reaction would seem surprising to Brian.
But I admit it would be unfair of me to spotlight Brian's intemperate remarks in this way and forbid him space to respond…
So I have changed my mind: Do feel welcome to go right ahead and comment to your heart's content, Brian Wang, should you want to do so.
Honestly. Let it out, guy.
I might even upgrade especially choice bits for the front page.
Wednesday, October 31, 2007
Richard Jones on "The Uses and Abuses of Speculative Futurism"
Over at his blog Soft Machines, Richard Jones has expanded and clarified his own discussion of "Superlativity" and "Speculative Futurism" in response to some criticisms he has received (and some he has observed me receiving lately).
He expresses a perplexity that I have to admit feels very much like my own when he declares: "I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible."
Later in the comments section he makes a comparable claim about the faith in an Artificial General Intelligence that organizes the Singularitarian sub(cult)ure: "[D]iscussions of likely futures… go well beyond making lists of plausible technologies to consider the socio-economic realities that determine whether technologies will actually be adopted. One also needs to recognise that some advances are going to need conceptual breakthroughs whose nature or timing simply cannot be predicted, not just technology development (I believe AGI to be in this category)."
Needless to say, I think that the claims of so-called "Technological Immortalists" that the personal lives of people now living might plausibly be immortalized by recourse to emerging genetic therapies, "uploading" selves into informational forms, or cryonic suspension also belong to this category.
Taken together, these three basic technodevelopmental derangements constitute what I have described elsewhere as the key super-predicated idealized "outcomes" that drive much contemporary Superlative Technology Discourse: Superintelligence, Superlongevity, Superabundance. Thus schematized, it isn't difficult to grasp that Superlative Technology Discourse will depend for much of its intuitive force on its citation and translation of the omni-predicated terms -- omniscience, omnipotence, omnibenevolence -- that have long "characterized" godhood for those who aspire to "know" it, but now updated into hyperbolic pseudo-scientific "predictions" for those who would prosthetically aspire to "achieve" it in their own persons.
It is interesting to note that these idealizations also organize the Sub(cult)ural Futurist formations to which I also direct much of my own Critique: In place of open futures arising out of the unpredictable contestation and collective effort of a diversity of stakeholders with whom we share and are building the world in the present that becomes future presents, Sub(cult)ural Futurists substitute idealized outcomes with which they have identified and which they seek to implement unilaterally in the world. This identificatory gesture in Sub(cult)ural Futurists tends to be founded on an active dis-identification with the present (and with futurity as future presents open in the way the political present is open) and identification with idealized futures in which they make their imaginative home, as well as on an active correlated dis-identification with that portion of the diversity of stakeholders with whom they share the world in fact, and any actual futures available to that world, whom they take to oppose their implementation of that idealized future.
And so, Richard Jones writes: "The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking." And it isn't surprising that one can reel off with utter documentary ease a host of curious marginal sub(cult)ural self-identifications affirmed by most of the prominent participants in Superlative Technology Discourse -- Extropians, Transhumanists, Singularitarians, Immortalists (it seems to me there are further connections to Randian Objectivism, and illuminating resonances one discerns in comparing them to Scientology, Raelians, and even Mormonism) -- nor surprising to stumble on conjurations of tribal "outsiders" against which Sub(cult)ural Futurists imagine themselves arrayed -- "Luddites," "Deathists," "Postmodernists," and so on.
To those who charge that his critique (and by extension, my own) amounts to a straitjacketing of speculative imagination, Jones offers up this nice response with which I have quite a lot of sympathy:
And as I have pointed out elsewhere myself, these Superlative and especially Sub(cult)ural Futurisms tend to have
Jones goes on to remind us, crucially, just how much "futurism is not, in fact, about the future at all -- it's about the present and the hopes and fears that people have about the direction society seems to be taking now." This is why it can be so illuminating to treat futurological discourse generally as symptomatic rather than predictive, and it also explains, when we make the mistake of taking it at "face value" as a straightforwardly predictive exercise, "precisely why futurism ages so badly, giving us the opportunity for all those cheap laughs about the non-arrival of flying cars and silvery jump-suits." When tech talk turns Superlative, I fear, we are relieved of the necessity to wait: the cheap laughs and groaners are abundantly available already in the present.
He expresses a perplexity that I have to admit feels very much like my own when he declares: "I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible."
Later in the comments section he makes a comparable claim about the faith in an Artificial General Intelligence that organizes the Singularitarian sub(cult)ure: "[D]iscussions of likely futures… go well beyond making lists of plausible technologies to consider the socio-economic realities that determine whether technologies will actually be adopted. One also needs to recognise that some advances are going to need conceptual breakthroughs whose nature or timing simply cannot be predicted, not just technology development (I believe AGI to be in this category)."
Needless to say, I think that the claims of so-called "Technological Immortalists" that the personal lives of people now living might plausibly be immortalized by recourse to emerging genetic therapies, "uploading" selves into informational forms, or cryonic suspension also belong to this category.
Taken together, these three basic technodevelopmental derangements constitute what I have described elsewhere as the key super-predicated idealized "outcomes" that drive much contemporary Superlative Technology Discourse: Superintelligence, Superlongevity, Superabundance. Thus schematized, it isn't difficult to grasp that Superlative Technology Discourse will depend for much of its intuitive force on its citation and translation of the terms of the omni-predicated terms omniscience, omnipotence, omnibenevolence that have long "characterized" godhood for those who aspire to "know" it, but now updated into hyperbolic pseudo-scientific "predictions" for those who would prosthetically aspire to "achieve" it in their own persons.
It is interesting to note that these idealizations also organize the Sub(cult)ural Futurist formations to which I also direct much of my own Critique: In place of open futures arising out of the unpredictable contestation and collective effort of a diversity of stakeholders with whom we share and are building the world in the present that becomes future presents, Sub(cult)ural Futurists substitute idealized outcomes with which they have identified and which they seek to implement unilaterally in the world. This identificatory gesture in Sub(cult)ural Futurists tends to be founded on an active dis-identification with the present (and with futurity as future presents open in the way the political present is open) and identification with idealized futures in which they make their imaginative home, as well as on an active correlated dis-identification with that portion of the diversity of stakeholders -- with whom they share the world in fact, and any actual futures available to that world -- whom they take to oppose their implementation of that idealized future.
And so, Richard Jones writes: "The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking." And it isn't surprising that one can reel off with utter documentary ease a host of curious marginal sub(cult)ural self-identifications affirmed by most of the prominent participants in Superlative Technology Discourse -- Extropians, Transhumanists, Singularitarians, Immortalists (it seems to me there are further connections to Randian Objectivism, and illuminating resonances one discerns in comparing them to Scientology, Raelians, and even Mormonism) -- nor surprising to stumble on conjurations of tribal "outsiders" against which Sub(cult)ural Futurists imagine themselves arrayed -- "Luddites," "Deathists," "Postmodernists," and so on.
To those who charge that his critique (and by extension, my own) amounts to a straitjacketing of speculative imagination, Jones offers up this nice response, with which I have quite a lot of sympathy:
[M]y problem is not that I think that transhumanists have let their imaginations run wild. Precisely the opposite, in fact; I worry that transhumanists have just one fixed vision of the future, which is now beginning to show its age somewhat, and are demonstrating a failure of imagination in their inability to conceive of the many different futures that have the potential to unfold.
And as I have pointed out elsewhere myself, these Superlative and especially Sub(cult)ural Futurisms tend to have
a highly particular vision of what the future will look like, and [are] driven by an evangelical zeal to implement just that future. It is a future with a highly specific set of characteristics, involving particular construals of robotics, artificial intelligence, nanotechnology, and technological immortality (involving first genetic therapies but culminating in a techno-spiritualized "transcendence" of the body through digitality). These characteristics, furthermore, are described as likely to arrive within the lifetimes of lucky people now living and are described as inter-implicated or even converging outcomes, crystallizing in a singular momentous Event, the Singularity, an Event in which people can believe, about which they can claim superior knowledge as believers, which they go on to invest with conspicuously transcendental significance, and which they declare to be unimaginable in key details but at once perfectly understood in its essentials. [The] highly specific vision in Stiegler's story ["The Gentle Seduction"] is one and the same with the vision humorously documented in Ed Regis's rather ethnographic Great Mambo Chicken and the Transhuman Condition, published… in 1990, and in Damien Broderick's The Spike, published twelve years later, and, although the stresses shift here and there… sometimes emphasizing robotics and consciousness uploading (as in Hans Moravec's Mind Children…), sometimes emphasizing Drexlerian nanotechnology (as in Chris Peterson's Unbounding the Future), or longevity (as in Brian Alexander's Rapture), it is fairly astonishing to realize just how unchanging this vision is in its specificity, in its ethos, in its cocksure predictions, even in its cast of characters. Surely a vision of looming incomprehensible transformation should manage to be a bit less… static than transhumanism seems to be?
Saturday, October 27, 2007
My Failures of Imagination
Upgraded and adapted from Comments. I fear my response to long-time Friend of Blog Giulio Prisco is a bit testy, but I've been told that my writing is clearer when I am in this temper, and I've also been told that my Superlative Critique is presented in terms that are too abstruse in general whatever its usefulness, so here goes:
I criticize your intolerance for those who, while basically agreeing with you on the points above, have ideas different from yours on other, unrelated things, and affirm their right to think with their own head.
I distinguish instrumental, moral, esthetic, ethical, and political modes of belief. (I spell out this point at greater length here, among other places.) Rationality, for me, consists not only in asserting beliefs that comport with the criteria of warrant appropriate to each mode, but also in applying to each of our different ends the mode actually appropriate to it.
I'm perfectly tolerant of estheticized or moralized expressions of religiosity, for example, but I keep making the point that religiosity (even in its scientistic variations) when misapplied to domains, ends, and situations for which it is categorically unsuited creates endless mischief.
Superlativity as a discourse consists of a complex of essentially moral and esthetic beliefs that mistake themselves for, or overambitiously seek to encompass, other modes of belief.
This sort of thing is quite commonplace in fundamentalist formations, as it happens, and one of the reasons religiosity comes up so often in discussions of Superlativity is because many people already have a grasp of what happens when fundamentalist faiths are politicized or pseudo-scientized and so the analogy (while imperfect in some respects) can be a useful way to get at part of the point of the critique of Superlativity.
Because, my friend, you will never persuade me that one who finds intellectual or spiritual pleasure in contemplating nanosanta-robot god-superlative technology-etc. cannot be a worthy political, social and cultural activists.
This line is total bullshit, and I'm growing quite impatient with it. I don't know how else to say this, I feel like throwing up my hands. Look, I'm a big promiscuous fag, a theoryhead aesthete, and an experimentalist in matters of, well, experiences available at the, uh, extremes as these things are timidly reckoned among the charming bourgeoisie. Take your pleasures where you will, I say, and always have done. Laissez les bons temps rouler. I'm a champion of multiculture, experimentalism, and visionary imagination, and that isn't exactly a secret given what I write about endlessly here and elsewhere.
But -- now read this carefully, think about what I am saying before you reply -- if you pretend your religious ritual makes you a policy wonk expect me to call bullshit; if you demand that people mistake your aesthetic preferences and preoccupations for scientific truths expect me to call bullshit; if you go from taking pleasure in your cultural and subcultural enthusiasms to proselytizing for them expect me to call bullshit; if you seek legitimacy for authoritarian circumventions of democracy in a marginal defensive hierarchical sub(cult)ural organization or as a way to address risks you think your cronies see more clearly than the other people in the world who share those risks and would be impacted by your decisions, all in the name of "tolerance," expect me to call bullshit.
"I can believe in Santa Claus and Eastern Bunny if I like, and still agree with you on political issues."
No shit, Sherlock. I've never said otherwise.
But -- if you form a Santa cult and claim Santa Science needs to be taught in schools instead of Darwin, or if you become a Santa True Believer who wants to impose his Santa worldview across the globe as the solution to all the world's problems, or if you try to legitimize the Santalogy Cult by offering up "serious" policy papers on elf toymaking as the real solution to global poverty and then complain that those who expose this as made up bullshit are denying the vital role of visionaries and imagination and so on, well, then that's a problem. (Please don't lose yourself in the details of this off-the-cuff analogy drawn from your own comment, by the way, I'm sure there are plenty of nit-picky disanalogies here, I'm just making a broad point that anybody with a brain can understand.)
Unless, of course, you persuade me that the two things are really incompatible.
I despair of the possibility of ever managing such a feat with you. (Irony impaired readership insert smiley here.)
I will gladly take the Robot God and Easter Bunny then.
Take Thor for all I care. None of them exist, and any priesthood that tries to shore up political authority by claiming to "represent" them in the world I will fight as a democrat opposed to elites -- whether aristocratic, priestly, technocratic, oligarchic, military, "meritocratic" or what have you. I can appreciate the pleasures and provocations of a path of private perfection organized through the gesture of affirming faith in a Robot God, Thor, or the Easter Bunny. I guess.
I have no "trouble" with spirituality, faith, aestheticism, moralism in their proper place, even where their expressions take forms that aren't my own cup of tea in the least. I've said this so many times by now that your stubborn obliviousness to the point is starting to look like the kind of conceptual impasse no amount of argument can circumvent between us.
Perhaps you guys are so scared of "superlative technology discourse" because you are afraid of falling back into the old religious patterns of thought, that perhaps you found difficult to shed.
I've been a cheerful nonjudgmental atheist for twenty-four years. It wasn't a particularly "difficult" transition for me, as it happens. Giving up pepperoni when I became a vegetarian was incomparably more difficult for me than doing without God ever was. And I'm not exactly sure what frame of mind you imagine I'm in when I delineate my Superlative Discourse Critiques when you say I'm "so scared." I think Superlativity is wrong, I think it is reckless, I think it comports well with a politics of incumbency I abhor, I think it produces frames and formulations that derange technodevelopmental discourse at an historical moment when public deliberation on technoscientific questions urgently needs to be clear. It gives me cause for concern, it attracts my ethnographic and critical interest. But "so scared"? Don't flatter yourself.
Some of us, yours truly included, never gave much importance to religion. So we feel free to consider interesting ideas for their own sake, regardless of possible religious analogies.
You are constantly claiming to have a level of mastery over your conscious intentions and expression that seems to me almost flabbergastingly naïve or even deluded. It's very nice that you feel you have attained a level of enlightenment that places you in a position to consider ideas "for their own sake," unencumbered one presumes by the context of unconscious motives, unintended consequences, historical complexities, etymological sedimentations, figural entailments, and so on. I would propose, oh so modestly, that no one deserves to imagine themselves enlightened in any useful construal of the term who can't see the implausibility of the very idea of the state you seem so sure you have attained.
Friday, October 26, 2007
A Superlative Schema
In the first piece I wrote critiquing Superlative Technology Discourse a few years ago, "Transformation Not Transcendence," I argued that
Of course, there is no question that no technology, however superlative, could deliver literally omni-predicated capacities, nor is it immediately clear even how these omni-predicates might function as regulative ideals given their basic incoherence (although this sort of incoherence hasn't seemed to keep "realists" from claiming interminably that vacuous word-world correspondences function as regulative ideals governing warranted assertions concerning instrumental truth, so who knows?). Rather like the facile faith of a child who seeks to reconcile belief with sense by imagining an unimaginable God as an old man with a long beard in a stone chair, Superlativity would reconcile the impossible omni-predicated ends to which it aspires with the terms of actual possibility through a comparable domestication: of Omniscience into "Superintelligence," of Omnipotence into "Supercapacitation" (especially in its "super-longevity" or techno-immortalizing variations), of Omnibenevolence into "Superabundance."
In such Superlative Technology Discourses, it will always be the disavowed discourse of the omni-predicated term that mobilizes the passion of Superlative Techno-fixations and Techno-transcendentalisms and organizes the shared identifications at the heart of Sub(cult)ural Futurisms and Futurists. Meanwhile, it will be the disavowed terms of worldly and practical discourses that provide all the substance on which these Superlative discourses finally depend for their actual sense: Superintelligence will have no actual substance apart from Consensus Science and other forms of warranted knowledge and belief, Supercapacitation (especially the superlongevity that is the eventual focus of so much "enhancement" talk) will have no actual substance apart from Consensual Healthcare and other forms of public policy administered by harm-reduction norms, Superabundance will have no actual substance apart from Commonwealth and other forms of public investment and private entrepreneurship in the context of general welfare. In each case a worldly substantial reality -- and a reality substantiated consensually, peer-to-peer, at that -- is instrumentalized, hyper-individualized, de-politicized via Superlativity in the service of a transcendental project re-activating the omni-predicates of the theological imaginary.
As with most fundamentalisms -- that is to say, as with all transcendental projects that redirect their energies to political ends to which they are categorically unsuited -- whenever Superlativity shows the world its Sub(cult)ural "organizational" face, it will be the face of moralizing it shows, driven by the confusion of the work of morals/mores with that of ethics/politics, a misbegotten effort to conflate the terms of private-parochial moral or aesthetic perfection with the terms of public ethics (which formally solicits universal assent to normative prescriptions), politics (which seeks to reconcile the incompatible aspirations of a diversity of peers who share the world), and science (which provisionally attracts consensus to instrumental descriptions).
Very schematically, I am proposing these correlations:
OMNI-PREDICATED THEOLOGICAL / TRANSCENDENTAL DISCOURSE
Omniscience
Omnipotence
Omnibenevolence
SUPER-PREDICATED SUPERLATIVE DISCOURSE
Superintelligence
Supercapacitation (often amounting to Superlongevity)
Superabundance
WORLDLY SUBSTANTIAL (democratizing/p2p) DISCOURSE
Reasonableness -- that is to say, the work and accomplishments of Warranted Beliefs applied in their proper plural precincts, scientific, moral, aesthetic, ethical, political, legal, commercial, etc.
Civitas -- that is to say, the work and accomplishments of Consensual Culture, where culture is presumed to be co-extensive with the prosthetic, and health and harm-reduction policy are construed as artifice.
Commonwealth -- that is to say, the work and accomplishments of collaborative problem-solving, public investment, and private entrepreneurship in the context of consensual civitas.
On one hand, the Super-Predicated term in a Superlative Technology Discourse always deranges and usually disavows altogether -- while, crucially, nonetheless depending on -- the collaboratively substantiated term in a Worldly Discourse correlated with it, while on the other hand activating the archive of figures, frames, irrational passions, and idealizations of the Omni-Predicated term in a Transcendental Discourse (usually religious or pan-ideological) correlated with it. The pernicious effects of these shifts are instrumental, ethical, and political in the main, but quite various in their specificities.
That complexity accounts for all the ramifying dimensions of the Superlativity Critique one finds in the texts collected in my Superlative Summary at this point. I would like to think one discerns in my own formulations some sense of what more technoscientifically literate and democratically invested worldly alternatives to Superlativity might look like. In these writings, I try to delineate a perspective organized by a belief in technoethical pluralism, by an insistence on a substantiated rather than vacuous scene of informed, nonduressed consent, by the consensualization of non-normative experimental medicine (as an elaboration of the commitment to a politics of Choice) and the diversity of lifeways arising from these consensual practices, by the ongoing implementation of sustainable, resilient, experimentalist, open, multicultural, cosmopolitan models of civilization, by the celebration and subsidization of peer-to-peer formations of expressivity, criticism, credentialization, and the collaborative solution of shared problems, and, through these values and for them, by a deep commitment to the ongoing democratization of technodevelopmental social struggle -- using technology (including techniques of education, agitation, organization, legislation) to deepen democracy, while using democracy (the nonviolent adjudication of disputes, good accountable representative governance, legible consent to the terms of everyday commerce, collective problem-solving, peer-to-peer, ongoing criticism and creative expressivity) to ensure that technology benefits us all, as Amor Mundi's signature slogan more pithily puts the point.
It should go without saying that there simply is no need to join a marginal Robot Cult as either a True Believer or would-be guru to participate in technodevelopmental social struggle peer-to-peer, nor to indulge in the popular consumer fandoms, digital plutocratic financial and developmental frauds, or pseudo-scientific pop-tech infomercial entertainment of more mainstream futurology. There is no need to assume the perspective of a would-be technocratic elite. There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history." There is nothing gained in claiming to be "pro-technology" or "anti-technology" at a level of generality at which no technologies actually exist. There is nothing gained in forswearing the urgencies of today for an idealized and foreclosed "The Future," nor in dis-identifying with your human peers so as to better identify with imaginary post-human or transhuman ones. There is nothing gained in the consolations of faith when there is so much valuable, actual work to do, when there are so many basic needs to fulfill, when there is so much pleasure and danger in the world of our peers at hand. There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents are exposed now as having no power left but the power to destroy the world and open futurity altogether.
The Superlative Technology Critique is not finally a critique about technology, after all, because it recognizes that "technology" is functioning as a surrogate term in the discourses it critiques: the evocation of "technology" functions symptomatically in these discourses and sub(cult)ures. The critique of Superlativity is driven first of all by commitments to democracy, diversity, equity, sustainability, and substantiated consent. I venture to add, it is driven by a commitment to basic sanity, sanity understood as a collectively substantiated worldly and present concern itself. The criticisms I seem to be getting are largely from people who would either deny the relevance of my own political, social, and cultural emphasis altogether (a denial that likely marks them as unserious as far as I'm concerned) or who disapprove of my political commitment to democracy, my social commitment to commons, and my cultural commitment to planetary polyculture (a disapproval that likely marks them as reactionaries as far as I'm concerned). There is much more for me to say in this vein, and of course I will continue to do so as best I can, and everyone is certainly free and welcome to contribute to or to disdain my project as they will, but I am quite content with the focus my Critique has assumed so far and especially with the enormously revealing responses it seems to generate.
It pays to recall that theologians never have been able comfortably to manage the reconciliation of the so-called omnipredicates of an infinite God. Just when they got a handle on the notion of omnipotence, they would find it impinging on omniscience. If nothing else, the capacity to do anything would seem to preclude the knowledge of everything in advance. And of course omnibenevolence never played well with the other predicates. How to reconcile the awful with the knowledge of it and the power to make things otherwise is far from an easy thing, after all… As with God, so too with a humanity become Godlike. Any “posthuman” conditions we should find ourselves in will certainly be, no less than the human ones we find ourselves in now, defined by their finitude. This matters, if for no other reason, because it reminds us that we will never transcend our need of one another.

My point in saying this was to highlight the incoherence in principle of the superlative imaginary, to spotlight what looks to me like the deep fear of finitude and contingency (exacerbated, no doubt, by the general sense that we are all of us caught up in an especially unsettling and unpredictable technoscientific storm-churn) that drives this sort of hysterical transcendental turn, and to propose in its stead a deeper awareness and celebration of our social, political, and cultural inter-dependence with one another to cope with and find meaning in the midst of this change.
Of course, there is no question that no technology, however superlative, could deliver literally omni-predicated capacities, nor is it immediately clear even how these omni-predicates might function as regulative ideals given their basic incoherence (although this sort of incoherence hasn't seemed to keep "realists" from claiming interminably that vacuous word-world correspondences function as regulative ideals governing warranted assertions concerning instrumental truth, so who knows?). Rather like the facile faith of a child who seeks to reconcile belief with sense by imagining an unimaginable God as an old man with a long beard in a stone chair, Superlativity would reconcile the impossible omnipredicated ends at which it aspires with the terms of actual possibility through a comparable domestication: of Omniscience into "Superintelligence," of Omnipotence into "Supercapacitation" (especially in its "super-longevity" or techno-immortalizing variations), of Omnibenevolence into "Superabundance."
In such Superlative Technology Discourses, it will always be the disavowed discourse of the omni-predicated term that mobilizes the passion of Superlative Techno-fixations and Techno-transcendentalisms and organizes the shared identifications at the heart of Sub(cult)ural Futurisms and Futurists. Meanwhile, it will be the disavowed terms of worldly and practical discourses that provide all the substance on which these Superlative discourses finally depend for their actual sense: Superintelligence will have no actual substance apart from Consensus Science and other forms of warranted knowledge and belief, Supercapacitation (especially the superlongevity that is the eventual focus of so much "enhancement" talk) will have no actual substance apart from Consensual Healthcare and other forms of public policy administered by harm-reduction norms, Superabundance will have no actual substance apart from Commonwealth and other forms of public investment and private entrepreneurship in the context of general welfare. In each case a worldly substantial reality -- and a reality substantiated consensually, peer-to-peer, at that -- is instrumentalized, hyper-individualized, de-politicized via Superlativity in the service of a transcendental project re-activating the omni-predicates of the theological imaginary.
As with most fundamentalisms -- that is to say, as with all transcendental projects that redirect their energies to political ends to which they are categorically unsuited -- whenever Superlativity shows the world its Sub(cult)ural "organizational" face, it will be the face of moralizing it shows, driven by the confusion of the work of morals/mores with that of ethics/politics, a misbegotten effort to impose the terms of private-parochial moral or aesthetic perfection onto the terms of public ethics (which formally solicits universal assent to normative prescriptions), politics (which seeks to reconcile the incompatible aspirations of a diversity of peers who share the world), and science (which provisionally attracts consensus to instrumental descriptions).
Very schematically, I am proposing these correlations:
OMNI-PREDICATED THEOLOGICAL / TRANSCENDENTAL DISCOURSE
Omniscience
Omnipotence
Omnibenevolence
SUPER-PREDICATED SUPERLATIVE DISCOURSE
Superintelligence
Supercapacitation (often amounting to Superlongevity)
Superabundance
WORLDLY SUBSTANTIAL (democratizing/p2p) DISCOURSE
Reasonableness -- that is to say, the work and accomplishments of Warranted Beliefs applied in their proper plural precincts, scientific, moral, aesthetic, ethical, political, legal, commercial, etc.
Civitas -- that is to say, the work and accomplishments of Consensual Culture, where culture is presumed to be coextensive with the prosthetic, and health and harm-reduction policy are construed as artifice.
Commonwealth -- that is to say, the work and accomplishments of collaborative problem-solving, public investment, and private entrepreneurship in the context of consensual civitas.
On the one hand, the Super-Predicated term in a Superlative Technology Discourse always deranges and usually disavows altogether -- while, crucially, nonetheless depending on -- the collaboratively substantiated term in the Worldly Discourse correlated with it; on the other hand, it activates the archive of figures, frames, irrational passions, and idealizations of the Omni-Predicated term in the Transcendental Discourse (usually religious or pan-ideological) correlated with it. The pernicious effects of these shifts are instrumental, ethical, and political in the main, but quite various in their specificities.
That complexity accounts for all the ramifying dimensions of the Superlativity Critique one finds in the texts collected in my Superlative Summary at this point. I would like to think one discerns in my own formulations some sense of what more technoscientifically literate and democratically invested worldly alternatives to Superlativity might look like. In these writings, I try to delineate a perspective organized by a belief in technoethical pluralism, by an insistence on a substantiated rather than vacuous scene of informed, nonduressed consent, by the consensualization of non-normative experimental medicine (as an elaboration of the commitment to a politics of Choice) and the diversity of lifeways arising from these consensual practices, by the ongoing implementation of sustainable, resilient, experimentalist, open, multicultural, cosmopolitan models of civilization, by the celebration and subsidization of peer-to-peer formations of expressivity, criticism, credentialization, and the collaborative solution of shared problems, and, through these values and for them, by a deep commitment to the ongoing democratization of technodevelopmental social struggle -- using technology (including techniques of education, agitation, organization, legislation) to deepen democracy, while using democracy (the nonviolent adjudication of disputes, good accountable representative governance, legible consent to the terms of everyday commerce, collective problem-solving, peer-to-peer, ongoing criticism and creative expressivity) to ensure that technology benefits us all, as Amor Mundi's signature slogan more pithily puts the point.
It should go without saying that there simply is no need to join a marginal Robot Cult as either a True Believer or would-be guru to participate in technodevelopmental social struggle peer-to-peer, nor to indulge in the popular consumer fandoms, digital plutocratic financial and developmental frauds, or pseudo-scientific pop-tech infomercial entertainment of more mainstream futurology. There is no need to assume the perspective of a would-be technocratic elite. There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history." There is nothing gained in claiming to be "pro-technology" or "anti-technology" at a level of generality at which no technologies actually exist. There is nothing gained in foreswearing the urgencies of today for an idealized and foreclosed "The Future" nor in dis-identifying with your human peers so as to better identify with imaginary post-human or transhuman ones. There is nothing gained in the consolations of faith when there is so much valuable, actual work to do, when there are so many basic needs to fulfill, when there is so much pleasure and danger in the world of our peers at hand. There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents are exposed now as having no power left but the power to destroy the world and the open futurity altogether.
The Superlative Technology Critique is not finally a critique about technology, after all, because it recognizes that "technology" is functioning as a surrogate term in the discourses it critiques; the evocation of "technology" functions symptomatically in these discourses and sub(cult)ures. The critique of Superlativity is driven first of all by commitments to democracy, diversity, equity, sustainability, and substantiated consent. I venture to add, it is driven by a commitment to basic sanity, sanity understood as a collectively substantiated worldly and present concern itself. The criticisms I seem to be getting are largely from people who would either deny the relevance of my own political, social, and cultural emphasis altogether (a denial that likely marks them as unserious as far as I'm concerned) or who disapprove of my political commitment to democracy, my social commitment to commons, and my cultural commitment to planetary polyculture (a disapproval that likely marks them as reactionaries as far as I'm concerned). There is much more for me to say in this vein, and of course I will continue to do so as best I can, and everyone is certainly free and welcome to contribute to or to disdain my project as you will, but I am quite content with the focus my Critique has assumed so far and especially with the enormously revealing responses it seems to generate.
Sanewashing Superlativity (For a More Gentle Seduction)
In his latest deft response to my "so-called [?] Superlative Technology Critique," Michael Anissimov reassures his readers that "[Richard] Jones and Carrico are both wrong [to say] that transhumanists have a 'strong, pre-existing attachment to a particular desired outcome.' A minority of transhumanists maybe, but not a majority." Since Michael isn't a muzzy literary type like me (Is this critique postmodernism or something? wonders one of Michael's readers anxiously in his Comments section; No, no: It's Marxism, another reader grimly corrects her), we can be sure that when Michael insists that "a majority" of "transhumanists" have no strong attachments to particular desired outcomes, well, no doubt he has crunched all the relevant numbers before saying so. Who am I to doubt him?
"What transhumanists want is for humanity to enjoy healthier, longer lives and higher standards of living provided by safe, cheap, personalized products," Anissimov patiently explains. Since there are hundreds of millions of people who would surely cheerfully affirm such vacuities (among them, me) and yet after over twenty years of organizational effort the archipelago of technophilic cult organizations that trumpet their "transhumanism" -- so-called! -- has never managed yet to corral together more than a few thousand mostly North Atlantic white middle-class male enthusiasts from among these teeming millions to their Cause, one suspects that there may be some more problematic transhumanistical content that is holding them back. Contrary to the rants about a dire default "Deathism" and "Luddism" in the general populace one hears from some transhumanists exasperated that their awesome faith, er, "movement," has not yet swept the world, I will venture to suggest that it isn't actually a rampaging general desire for short unhealthy unsafe unfree lives of poverty or feudalism that keeps all these people from joining their fabulous Robot Cult.
Back in 1989 Marc Stiegler wrote a short story entitled "The Gentle Seduction" that has assumed a special place in the transhumanist sub(cult)ural imaginary. In the opening passage one of the main characters, Jack, asks the other main character -- who never gets a name, interestingly enough, and is referred to merely pronominally as "her" and "she" -- the following portentous question: "Have you ever heard of Singularity?" "She" hasn't, of course, and Jack explains the notion with relish:
"Singularity is a time in the future as envisioned by Vernor Vinge. It'll occur when the rate of change of technology is very great -- so great that the effort to keep up with the change will overwhelm us. People will face a whole new set of problems that we can't even imagine." A look of great tranquility smoothed the ridges around his eyes.
It is very curious that after the conjuration of such a looming, unimaginably transformative and overwhelming change Jack would become tranquil rather than concerned, as any sensible person, however optimistic, would be at such a prospect, but of course the reason for this is that he is lying. Already we have been told that when he speaks "of the future… [it was as if] he could see it all very clearly. He spoke as if he were describing something as real and obvious as the veins of a leaf..." Of course, in Superlative discourses, especially in the Singularitarian variations that would trump history through a technodevelopmental secularization of the Rapture, the term "unimaginable" is deployed rather selectively: to invest pronouncements with an appropriately prophetic cadence or promissory transcendentalizing significance, or to finesse the annoying fact that while Godlike outcomes are presumably certain, the ways through all those pesky intermediary technical steps and political impasses that stand between the way we live now and all those marvelous Godlike eventualities remain conspicuously uncertain.
The future that Jack "sees" so clearly, as it happens, is not one he characterizes in Anissimov's reassuringly mainstream terms; that is to say, as a future in which people "enjoy healthier, longer lives and higher standards of living provided by safe, cheap, personalized products." No, Jack insists, in his future "you'll be immortal." But, wait, there's more. "You'll have a headband… It'll allow you to talk right to your computer." He continues: "[W]ith nanotechnology they'll build these tiny little machines -- machines the size of a molecule… They'll put a billion of them in a spaceship the size of a Coke can, and shoot it off to an asteroid. The Coke can will rebuild the asteroid into mansions and palaces. You'll have an asteroid all to yourself, if you want one." Gosh, immortality alone on an asteroid stuffed with mansions and jewels and a smart AI to keep you company. How seductive (see story title)! Even better is this rather gnomic addendum, a favorite of would-be gurus everywhere: "'I won't tell you all the things I expect to happen,' he smiled mischievously, 'I'm afraid I'd really scare you!'" Father Knows Best, eh? And it's hard not to like the boyish oracularity of that "mischievous smile." As the story unfolds, we discover that Jack likely refers here to the fact that "she" will eventually download her consciousness into a series of increasingly exotic, and eventually networked, robot "bodies" and then into utterly disembodied informational forms.
The story is a truly odd and symptomatic little number -- definitely an enjoyable and enlightening read for all that -- juxtaposing emancipatory rhetoric in a curious way with the sort of reactionary details one has come to expect from especially American technophilic discourse. (For some of the reasons why, Richard Barbrook and Andy Cameron's The Californian Ideology always repays re-reading, as does Jedediah Purdy's The God of the Digirati, and Paulina Borsook's excellent book Cyberselfish, which entertainingly provides a wealth of supplementary detail.) The very first sentence mobilizes archetypes so bruisingly old-fashioned (but, you know, it's the future!) as to make you blush even if you have never heard of eco-feminism: "He worked with computers; she worked with trees." By sentence two we are squirming with discomfort: "She was surprised that he was interested in her. He was so smart; she was so… normal." ("Normal" people aren't "smart"? "Normal" people should feel privileged when our smart betters deign to notice us?) Later in the story, progress and emancipation and even revolution are drained of social struggle and political content altogether and reduced to a matter of shopping for ever more powerful gizmos offered for sale in catalogues -- elaborate robots, rejuvenation pills, genius pills, brain-computer interfaces, robot bodies, the promised asteroid mansions, and so on. Politics as consumption, how enormously visionary.
One also detects in the story a discomforting insinuation of body-loathing, rather like the hostility to the "meat body" one encounters in some Cyberpunk fiction: first in the curious fact that Jack and the unnamed protagonist sleep together but never have sex (an odd detail in a story that so clearly means to invoke the conventions of romantic love), and then in the fact that the emancipatory sequence of technological empowerments undergone by the protagonist is always phrased as a series of relinquishments -- of her morphology, of her body, of embodiment altogether, of narrative selfhood by the end -- each relinquishment signaled by the repeated refrain, "it just didn't seem to matter," where it is a loss of matter that fails to matter.
Be all that as it may, the specific point I would want to stress here is that "The Gentle Seduction" has a highly particular vision of what the future will look like, and is driven by an evangelical zeal to implement just that future. It is a future with a highly specific set of characteristics, involving particular construals of robotics, artificial intelligence, nanotechnology, and technological immortality (involving first genetic therapies but culminating in a techno-spiritualized "transcendence" of the body through digitality). These characteristics, furthermore, are described as likely to arrive within the lifetimes of lucky people now living and as inter-implicated or even converging outcomes, crystallizing in a singular momentous Event, the Singularity, an Event in which people can believe, about which they can claim superior knowledge as believers, which they go on to invest with conspicuously transcendental significance, and which they declare to be unimaginable in key details but at once perfectly understood in its essentials. This highly specific vision in Stiegler's story is one and the same as the vision humorously documented in Ed Regis's rather ethnographic Great Mambo Chicken and the Transhuman Condition, published the following year, in 1990, and in Damien Broderick's The Spike, published twelve years later, and, although the stresses shift here and there, sometimes emphasizing connections between cybernetics and psychedelia (as in early Douglas Rushkoff), sometimes emphasizing robotics and consciousness uploading (as in Hans Moravec's Mind Children -- whose work is critiqued exquisitely in N. Katherine Hayles's How We Became Posthuman), sometimes emphasizing Drexlerian nanotechnology (as in Chris Peterson's Unbounding the Future), or longevity (as in Brian Alexander's Rapture), it is fairly astonishing to realize just how unchanging this vision is in its specificity, in its ethos, in its cocksure predictions, even in its cast of characters. Surely a vision of looming incomprehensible transformation should manage to be a bit less… static than transhumanism seems to be?
Although Anissimov wants to reassure the world that transhumanists have no peculiar commitments to particular superlative outcomes, one need only read any of them for any amount of time to see the truth of the matter. Far more amusing than his denials and efforts at organizational sanewashing, however, is his concluding admonishment of those -- oh, so few! -- transhumanists or Singularitarians who might be vulnerable to accusations of Superlativity: "If any transhumanists do have specific attachments to particular desired outcome," Anissimov warns, "I suggest they drop them — now." Well, then, that should do it. "The transhumanist identity," he continues, "should not be defined by a yearning for such outcomes. It is defined by a desire to use technology to open up a much wider space of morphological diversity than experienced today." It is very difficult to see how a transhumanist "identity" would long survive being evacuated of its actual content apart from a commitment to something that looks rather like mainstream secular multicultural pro-choice attitudes that seem to thrive quite well, thank you very much, without demanding people join Robot Cults. The truth is, of course, that this is all public relations spin on the part of a Director of the Singularity Institute for Artificial Intelligence (Robot Cult Ground Zero) and co-founder of The Immortality Institute (a Technological Immortalist outfit), and all-around muckety-muck and bottle-washer in the World Transhumanist Association (Sub(cult)ural Superlativity Grand Central Station), and so on.
Although one can be sure that none of the sub(cult)ural futurists among his readership will really take Michael up on his suggestion to icksnay on the azycray obotray ultcay stuff in public places, at least he has posted something to which he can regularly refer whenever sensible people gently suggest that he and his friends are sounding a little bit nuts on this or that burning issue concerning Robot Gods Among Us, the Pleasures of Spending Eternity Uploaded into a Computer, or coping with the Urgent Risks of a World Turned into Nano-Goo.
I will remind my own readers that Extropians, Dynamists, Raelians, Singularitarians, Transhumanists, Technological Immortalists, and so on have formed a number of curious subcultures and advocacy organizations, which I regularly castigate for their deranging impact on technodevelopmental policy discourse and for the cult-like attributes they seem to me to exhibit. Since these organizations and identity movements are really quite marginal as far as their actual memberships go, it is important to stress that, apart from some practical concerns I have about the damaging and rather disproportionate voice these Superlative Sub(cult)ural formulations have in popular technology discourse and on public technoscientific deliberation, it is really the way these extreme sub(cult)ures represent and symptomize especially clearly the more prevailing general attitudes toward, and broader tendencies exhibited in, technodevelopmental change that makes them interesting to me and worthy of this kind of attention.
Wednesday, October 24, 2007
Richard Jones Critiques Superlativity
Over on the blog Soft Machines yesterday, Richard Jones -- a professor of physics, science writer, and currently Senior Advisor for Nanotechnology for the UK's Engineering and Physical Sciences Research Council -- offered up an excellent (and far more readable than I tend to manage to be) critique of Superlative Technology Discourses, in a nicely portentously titled post, “We will have the power of the gods”. Follow the link to read the whole piece; here are some choice bits:
More like this, please.
"Superlative technology discourse… starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress combating the diseases of aging. The discussion leaps ahead of the issues that such technologies might give rise to at the present and in the near future, and goes straight on to a discussion of the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening….
[T]his renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups -- such as transhumanists and singularitarians -- with a strong, pre-existing attachment to a particular desired outcome - an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking….
The difficulty that this situation leaves us in is made clear in [an] article by Alfred Nordmann -- “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor…
Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science -- areas directly relevant to the technologies under discussion -- in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.
More like this, please.
Superlativity as Anti-Democratizing
Upgraded and adapted from Comments:
Friend of Blog Michael Anissimov said: Maybe "superlative" technologies have a media megaphone because many educated people find these arguments persuasive.
There is no question at all that many educated people fall for Superlative Technology Discourses. It is very much a discourse of reasonably educated, privileged people (and also, for that matter, mostly white guys in North Atlantic societies). One of the reasons Superlativity comports so well with incumbent interests is that many of its partisans either are or identify with such incumbents themselves.
However, again, as I have taken pains to explain, even people who actively dis-identify with the politics of incumbency might well support such politics inadvertently through their conventional recourse to Superlative formulations, inasmuch as these lend themselves so forcefully to anti-pluralistic reductionism; to elite technocratic solutions and policies; to the naturalization of neoliberal corporate-military "competitiveness," "innovation," and the like as the key terms through which technoscientific "development" can be discussed; to special vulnerability to hype, groupthink, and True Belief; and so on, all of which tend to conduce to incumbent interests and reactionary politics in general.
If a majority
Whoa, now, just to be clear: The "many" of your prior sentence, Michael, represents neither a "majority" of "educated" people (on any construal of the term "educated" I know of), nor a "majority" in general.
If a majority decides to allocate research funds towards Yudkowskian AGI and Drexlerian MNT, who would you be to question the democratic outcome?
Who would I be to question a democratic outcome? Why, a democratic citizen with an independent mind and a right to free speech, that's who.
I abide by democratic outcomes even where I disapprove of them from time to time, and then I make my disapproval known and understood as best I can in the hopes that the democratic outcome will change for the better -- or if I fervently disapprove of such an outcome, I might engage in civil disobedience and accept the criminal penalty involved to affirm the law while disapproving the concrete outcome. All that is democracy, too, in my understanding of it.
In the past, Michael, you have often claimed to be personally insulted by my suggestions that Superlative discourses have anti-democratizing tendencies -- you have wrongly taken such claims as equivalent to the accusation that Superlative Technocentrics are consciously Anti-Democratic, which is not logically implied in the claim at all (although I will admit that the evidence suggests that Superlativity is something of a strange attractor for libertopians, technocrats, Randroids, Bell Curve racists and other such anti-democratic dead-enders). For me, structural tendencies to anti-democratization are easily as or more troubling than explicit programmatic commitment to anti-democracy (which are usually marginalized into impotence in reasonably healthy democratic societies soon enough, after all). When you have assured me that you are an ardent democrat in your politics yourself, whatever your attraction to Superlative technodevelopmental formulations, I have tended to take your word for it.
But when you seem to suggest that "democracy" requires that one "not question" democratic outcomes I find myself wondering why on earth you would advocate democracy on such terms? It's usually only reactionaries, after all, who falsely characterize democracy as "mob rule" -- and they do so precisely because they hate democracy and denigrate common people (with whom they dis-identify). Actual democratically-minded folks tend not to characterize their own views in such terms. Democracy is just the idea that people should have a say in the public decisions that affect them -- for me, democracy is a dynamic, experimental, peer-to-peer formation.
Because that [AGI/MNT funding] is what is likely going to happen in the next couple decades.
Be honest: if you, just as you are now, had been asked twenty years ago, would you have said the same then? What could happen in twenty years' time to make you say otherwise?
I personally think it is an arrant absurdity to think that majorities will affirm specifically Yudkowskian or Drexlerian Superlative outcomes by name in two decades. Of the two, only Drexler seems to me likely to be remembered at all, on my reckoning. (Don't misunderstand me: I certainly don't expect to be "remembered" myself; I don't think that is an indispensable measure of a life well-lived, particularly.)
On the flip side, it seems to me that once one has dropped the Superlative-tinted glasses, one can say that funding decisions by representatives democratically accountable to majorities are already funding research and development into nanoscale interventions and sophisticated software. I tend to be well pleased by that sort of thing, thank you very much. If one is looking for Robot Gods or Utility Fogs, however, I suspect that in twenty years' time one will find them on the same sf bookshelves where one properly looks for them today, or looked for them twenty years ago.
Saturday, October 20, 2007
Superlative Church
Upgraded and Adapted from Comments:
ONE

Me: "Do I have to remind you that you have responded in the past to some of my characterizations of Superlative outcomes as implausible by arguing not that they were wrong but that they constituted defamation against transhumanists like you?"

Giulio Prisco: Do I have to remind you that what I told you was that the TONE and LANGUAGE you used were unnecessarily conflictive and insulting -- not an argument about your ideas, just a remark about your lack of manners.

I must say that this seems a bit disingenuous to me. Of course you have castigated me for my tone and language and so on in the past many times, but that's hardly the substance of the discussion we've been having here.
Look, I am offering up rhetorical, cultural, and political analyses delineating general tendencies (oversimplification of technodevelopmental complexities, fixations on particular idealized outcomes, vulnerabilities to technological determinism and technocratic elitism, and so on) that seem to me to arise from certain Superlative and Sub(cult)ural ways of framing technodevelopmental problems.
Individual people who read such analyses and then complain that I am insulting them are, for the most part, finding in what I say a shoe that fits on their own and wearing it themselves. And that is simply not the same thing as me insulting them. It is basic incomprehension or cynical distortion to say otherwise (sometimes my critiques are of particular authors or particular texts, and the charge of personal insult could conceivably make sense in such contexts, but not when my analyses are general and when the categories I deploy name general tendencies and general social and cultural formations).
The fact is that you have actually compared your personal "transhumanist" identity -- and, in earlier exchanges with me, your "Technological Immortalist" identity -- to identity categories like being gay, or gypsy, and so on. Clearly these comparisons are designed to mobilize proper progressive intuitions about lifeway diversity in multiculture by analogy to persecuted minorities. I think this sort of analogy is wildly inappropriate, and perhaps your response here suggests that, upon further consideration, you have come to agree. Maybe you weren't entirely conscious of your rhetoric here?
As you know I am immensely interested in the politics and policy of ongoing technodevelopmental social struggle, and one of the things that troubles me enormously is that any shift from a deliberative/open into a subcultural/identity mode of technodevelopmental politics is going to be extremely vulnerable to mistaking critique for persecution, disagreement for defamation.
But how can one debate about a changing diversity of best technodevelopmental outcomes when some will feel threatened in their very identities by the prospect of a failure to arrive at their own conception of best outcomes? How can such subcultural identifications with particular outcomes comport with democratic intuitions that we must keep the space of deliberation radically open -- even as we struggle together to find our way to our provisional sense of best, fairest, safest, emancipatory outcomes -- so as always to remain responsive to the inevitable existing diversity of stakeholders to any technodevelopmental state of affairs, in the present now as well as in future presents to come?
This is why I stress that the anti-democratizing effects of Superlative and Sub(cult)ural Technocentrisms are often more structural than intentional: one can affirm democratic ideals and yet contribute to discursive subversions of democracy against the grain of one's affirmed beliefs in these matters. It completely misses the force of my point and the nature of my deepest worries to imagine that I am calling people anti-democratic names when I try to delineate these tendencies. If only it were so simple as a few anti-democratic bad apples! Such personalizations of the problem utterly trivialize the issues and stakes on my own terms, quite apart from the complaints of some of my conversational partners that their feelings have been hurt by what they see as unfair accusations or what have you.
None of this is to deny, by the way, that there are indeed explicit reactionaries and authoritarian personalities -- both gurus and followers -- to be found aplenty in Superlative Technocentric social formations. It is well documented that there are unusual numbers of both to be seen in these curious marginal spaces. And I have exposed and ridiculed these manifestations among the Extropians, Singularitarians, transhumanists, and so on many times before, and I will continue to spotlight and to ridicule them as they well deserve.
But my own sense is that it is the larger structural tendencies that preoccupy my own attention that make these formations strange attractors for some reactionaries, occasional authoritarians, legions of True Believers, and so on, rather than vice versa. And it is also true that these structural tendencies can yield their anti-democratizing effects just as well when Superlative and Sub(cult)ural Technocentrics have no explicit anti-democratizing intentions in the least.
Since you probably read all of those claims about general tendencies as personal insults in any case it isn't entirely clear to me that you will have quite grasped the force of my critique by my lights, but such are the risks of interpersonal communication.
TWO

Me: "So you really think Superlative frames have no impact on your assessments of the significance and stakes of emerging genetic and prosthetic healthcare, nanoscale toxicity and sensors or current biotechnologies, security issues connected with networked malware today, cybernetic totalist ideology in contemporary coding cultures, and so on?"

Giulio Prisco: Yes. Why should I have written it otherwise? The timescales involved are quite different aren't they? The Robot God, the Eschaton or whatever you like have nothing to do with health care and network security (and civil rights, and world peace, and...), so why should I let the RG have any impact on my assessments here and now?

Well, needless to say, not all Superlative Technocentrics would agree with you that the timescales are that different, inasmuch as they finesse this problem through the expedient recourse to accelerationalism, whereby extreme or distant outcomes are rendered "proximate" by way of accelerating change, and even accelerating acceleration, to really confuse matters and make the hype more plausible.
But setting all that aside, you simply can't have thought about this issue very clearly. Of course becoming wedded to Superlative outcomes influences your sense of the stakes and significance of technoscience quandaries in the present.
Much of the force of Jaron Lanier's cybernetic totalism critique, for example, derives from the way he shows that faith in the Superlative outcome of Strong disembodied AI becomes a lens distorting the decisions of coders here and now. Word processing programs will "correct" authorial "errors" that aren't errors in fact, substituting the program's "judgment" for the author's, in part because too many coders see this crappy feature through the starry-eyed anticipation of an AI that will actually have judgments, properly speaking.
The fears and fantasies of medicalized immortality crazily distort contemporary bioethical framings of genetic and prosthetic medicine here and now, all the time, and almost always to the cost of sense. Surely you agree with that, at least when the distortions involve bioconservative intimations of apocalypse and, as they like to handwave, "Playing God," arising from research and development into new genetic and prosthetic medical techniques to relieve people from suffering unnecessarily from now treatable diseases.
There are also incredibly energetic debates about whether the definition of "nanotechnology" will refer to current and proximately upcoming interventions at the nanoscale (and all their problems) or to more Superlative understandings of the term when public funds are disbursed or regulations contemplated.
So, of course your Superlative framing of technodevelopmental outcomes impacts your present perception of technodevelopmental stakes. I suspect that you are now going to walk back your claim yet again and try another tack altogether while claiming I have misunderstood you all along, correct?
THREE

Giulio Prisco: Note to Nick: I agree with "rooting for the Transhumanist Team is different from and secondary to actually trying to make the world better". This is not the issue here.

It seems to me that this IS INDEED no small part of the issue here.
I connect Sub(cult)ural Futurism to Superlative Technocentricity, inasmuch as a shared enthusiasm for particular, usually Superlative technodevelopmental outcomes is the bond that actually serves to maintain these subcultures. But the politics of subcultural maintenance in turn impose restrictions on the openness, experimentalism, and flexibility of the technoscientific deliberation you can engage in without risk to the solidarity of the identity formation itself.
This is why so many Superlative and Sub(cult)ural Technocentrics can constantly pretend that the future is going to be wildly different from the present and wildly soon, and yet the Superlative Sub(cult)ural vision of the future itself, from its depiction in Regis's Great Mambo Chicken, to Stiegler's "The Gentle Seduction," to Peterson and Drexler's Unbounding the Future, to Alexander's Rapture, to the latest pop futurological favorites in the Superlative mode, simply reproduces virtually the same static vision, over and over again, calling attention to the same "visionaries"; the same promissory technologies (strong AI, ubiquitous automation, virtual reality, cryonics, nanotechnology, genetic medicine, and often the "uploading" of personality into information after first discursively reducing it to information already); the same appeals to "superhuman" capacities, technological immortality, personal wealth beyond the dreams of avarice, and post-political abundance in general (usually framed in a way that appeals to neoliberal/libertarian anti-political intuitions); the same seductive conjuration of the conventional omni-predicates of theology, but this time personalized and prostheticized: omnipotence, omniscience, omnibenevolence; the same scientistic championing of reductive totalizing explanations coupled with anti-intellectual pillorying of every other kind of thought, poetic, cultural, political, and on and on and on.
For two decades and counting the vision has remained unchanged in its essentials, including the insistence on utter, totalizing, accelerating, transcendentalizing change, usually uttered in the tonalities of ecstasy or of dread, a prophetic declaration of transformation that never seems to transform itself, of change that never seems to change, a static, dwindling, tired repetition of platitudes in the midst of a planetary technodevelopmental disruption (corporate precarization, catastrophic climate change, rampant militarization, emerging peer-to-peer network formations, emerging Green movements demanding decentralized nontoxic sustainable appropriate techs, emerging non-normativizing anti-eugenicist movements to democratize medical research, development, and provision, etc.) to which static Sub(cult)ural Superlativities seem to respond less and less in substance.
Superlative Sub(cult)ural Technocentrisms are too much like straightforward faiths, with a particular heaven in mind and a few Churches on hand with marginal memberships. And, as I keep saying, as esthetic and moral formations faithful lifeways seem to me perfectly unobjectionable even when they are not my own cup of tea. What is troubling about Superlativity is that its faithful seem curiously cocksure that they are champions of science rather than True Believers in the first place, which makes them ill-content to confine themselves to their proper sphere, that is, offering up to their memberships the moral satisfactions of intimate legibility and belonging as well as esthetic pathways for personal projects of perfection. They fancy themselves instead, via reductionism, to be making instrumental claims that solicit scientific consensus, or, via moralizing pan-ideology, to be making ethical claims that solicit universal assent. (For a sketch of my sense of the different modes of rationality in play here see my Technoethical Pluralism.)
This sort of delusion is common enough in variously faithful people (especially in the fundamentalist modes of belief that Sub(cult)ural Futurisms seem so often to typify) and would hardly qualify as particularly harmful or worthy of extended attention given the incredible marginality of these formations -- an abiding marginality that has remained unchanged for decades, after all. But Superlative Technology Discourses seem to me to have an enormous and disproportionately influential media megaphone deriving -- on the one hand -- from their symptomatic relation to much broader fears/fantasies of agency occasioned by contemporary technodevelopmental churn and -- on the other hand -- from their rhetorical congeniality to neoliberal assumptions that serve the interests of incumbent corporate-military interests. That is why I devote so much of my own attention to their exposure and analysis.
ONE
Me: "Do I have to remind you that you have responded in the past to some of my characterizations of Superlative outcomes as implausible by arguing not that they were wrong but that they constituted defamation against transhumanists like you?"
Giulio Prisco: Do I have to remind you that what I told you was that the TONE and LANGUAGE you used were unnecessarily conflictive and insulting -- not an argument about your ideas, just a remark about your lack of manners.
I must say that this seems a bit disingenuous to me. Of course you have castigated me for my tone and language and so on in the past many times, but that's hardly the substance of the discussion we've been having here.
Look, I am offering up rhetorical, cultural, and political analyses delineating general tendencies (oversimplification of technodevelopmental complexities, fixations on particular idealized outcomes, vulnerabilities to technological determinism and technocratic elitism, and so on) that seem to me to arise from certain Superlative and Sub(cult)ural ways of framing technodevelopmental problems.
Individual people who read such analyses and then complain that I am insulting them are, for the most part, finding in what I say a shoe that fits on their own and wearing it themselves. And that is simply not the same thing as me insulting them. It is basic incomprehension or cynical distortion to say otherwise (sometimes my critiques are of particular authors or particular texts, and the charge of personal insult could conceivably make sense in such contexts, but not when my analyses are general and when the categories I deploy name general tendencies and general social and cultural formations).
The fact is, that you have actually compared your personal "transhumanist" identity, and in earlier exchanges with me your "Technological Immortalist" identity to identity categories like being gay, or gypsy, and so on. Clearly these comparisons are designed to mobilize proper progressive intuitions about lifeway diversity in multiculture by analogy to persecuted minorities. I think this sort of analogy is wildly inappropriate, and perhaps your response here suggests that you have come to agree that it is upon further consideration. Maybe you weren't entirely conscious of your rhetoric here?
As you know I am immensely interested in the politics and policy of ongoing technodevelopmental social struggle, and one of the things that troubles me enormously is that any shift from a deliberative/open into a subcultural/identity mode of technodevelopmental politics is going to be extremely vulnerable to mistake critique for persecution, disagreement for defamation.
But how can one debate about a changing diversity of best technodevelopmental outcomes when some will feel threatened in their very identities by the prospect of a failure to arrive at their own conception of best outcomes? How can such subcultural identifications with particular outcomes comport with democratic intuitions that we must keep the space of deliberation radically open -- even as we struggle together to find our way to our provisional sense of best, fairest, safest, emancipatory outcomes -- so as always to remain responsive to the inevitable existing diversity of stakeholders to any technodevelopmental state of affairs, in the present now as well as in future presents to come?
This is why I stress that the anti-democratizing effects of Superlative and Sub(cult)ural Technocentrisms are often more structural than intentional: one can affirm democratic ideals and yet contribute to discursive subversions of democracy against the grain of one's affirmed beliefs in these matters. It completely misses the force of my point and the nature of my deepest worries to imagine that I am calling people anti-democratic names when I try to delineate these tendencies. If only it were so simple as a few anti-democratic bad apples! Such personalizations of the problem utterly trivialize the issues and stakes on my own terms, quite apart from these issues some of my conversational partners complain of that their feelings have been hurt by what they see as unfair accusations or what have you.
None of this is to deny, by the way, that there are indeed explicit reactionaries and authoritarian personalities -- both gurus and followers -- to be found aplenty in Superlative Technocentric social formations. It is well documented that there are unusual numbers of both to be seen in these curious marginal spaces. And I have exposed and ridiculed these manifestions among the Extropians, Singularitarians, transhumanists, and so on many times before, and I will continue to spotlight and to ridicule them as they well deserve.
But my own sense is that it is the larger structural tendencies that preoccupy my own attention that make these formations strange attractors for some reactionaries, occasional authoritarians, legions of True Believers, and so on, rather than vice versa. And it is also true that these structural tendencies can yield their anti-democratizing effects just as well when Superlative and Sub(cult)ural Technocentrics have no explicit anti-democratizing intentions in the least.
Since you probably read all of those claims about general tendencies as personal insults in any case it isn't entirely clear to me that you will have quite grasped the force of my critique by my lights, but such are the risks of interpersonal communication.
TWO
Me: "So you really think Superlative frames have no impact on your assessments of the significance and stakes of emerging genetic and prosthetic healthcare, nanoscale toxicity and sensors or current biotechnologies, security issues connected with networked malware today, cybernetic totalist ideology in contemporary coding cultures, and so on?
Giulio Prisco: Yes. Why should I have written it otherwise? The timescales involved are quite different aren't they? The Robot God, the Eschaton or whatever you like have nothing to do with health care and network security (and civil rights, and world peace, and...), so why should I let the RG have any impact on my assessments here and now?
Well, needless to say, not all Superlative Technocentrics would agree with you that the timescales are that different, inasmuch as they finesse this problem through the expedient recourse to accelerationalism, whereby extreme or distant outcomes are rendered "proximate" by way of accelerating change, and even accelerating acceleration to really confuse matters and make the hype more plausible.
But setting all that aside, you simply can't have thought about this issue very clearly. Of course becoming wedded to Superlative outcomes influences your sense of the stakes and significance of technoscience quandaries in the present.
Much of the force of Jeron Lanier's cybernetic totalism critique, for example, derives from the way he shows faith in the Superlative outcome of Strong disembodied AI becomes a lens distorting the decisions of coders here and now. Word processing programs will correct authorial "errors" that aren't errors in fact, substituting the program's "judgement" for the author's in part because too many coders see this crappy feature through the starry-eyed anticipation of an AI that will actually have judgments, properly speaking.
The fears and fantasies of medicalized immortality crazily distort contemporary bioethical framings of genetic and prosthetic medicine here and now, all the time, and almost always to the cost of sense. Surely you agree with that, at least when the distortions involve bioconservative intimations of apocalypse and, as they like to handwave, "Playing God," arising from research and development into new genetic and prosthetic medical techniques to relieve people from suffering unnecessarily from now treatable diseases.
There are also incredibly energetic debates about whether the definition of "nanotechnology" will refer to current and proximately upcoming interventions at the nanoscale (and all their problems) or to more Superlative understandings of the term when public funds are disbursed or regulations contemplated.
So, of course your Superlative framing of technodevelopmental outcomes impacts your present perception of technodevelopmental stakes. I suspect that you are now going to walk back your claim yet again and try another tack altogether while claiming I have misunderstood you all along, correct?
THREE
Giulio Prisco: Note to Nick: I agree with "rooting for the Transhumanist Team is different from and secondary to actually trying to make the world better". This is not the issue here.
It seems to me that this IS INDEED no small part of the issue here.
I connect Sub(cult)ural Futurism to Superlative Technocentricity, inasmuch as a shared enthusiasm for particular, usually Superlative technodevelopmental outcomes is the bond that actually serves to maintain these subcultures. But the politics of subcultural maintenance in turn impose restrictions on the openness, experimentalism, and flexibility of the technoscientific deliberation you can engage in without risk to the solidarity of the identity formation itself.
This is why so many Superlative and Sub(cult)ural Technocentrics can constantly pretend that the future is going to be wildly different from the present and wildly soon, and yet the Superlative Sub(cult)ural vision of the future itself -- from its depiction in Regis's Great Mambo Chicken, to Stiegler's "The Gentle Seduction," to Peterson and Drexler's Unbounding the Future, to Alexander's Rapture, to the latest pop futurological favorites in the Superlative mode -- simply reproduces virtually the same static vision, over and over again, calling attention to the same "visionaries," the same promissory technologies (strong AI, ubiquitous automation, virtual reality, cryonics, nanotechnology, genetic medicine, and often "uploading" personality into information after first discursively reducing it to information already), the same appeals to "superhuman" capacities, technological immortality, personal wealth beyond the dreams of avarice, and post-political abundance in general (usually framed in a way that appeals to neoliberal/libertarian anti-political intuitions), the same seductive conjuration of the conventional omni-predicates of theology -- but this time personalized and prostheticized: omnipotence, omniscience, omnibenevolence -- the same scientistic championing of reductive totalizing explanations coupled with anti-intellectual pillorying of every other kind of thought -- poetic, cultural, political -- and on and on and on.
For two decades and counting the vision has remained unchanged in its essentials, including the insistence on utter, totalizing, accelerating, transcendentalizing change, usually uttered in the tonalities of ecstasy or of dread, a prophetic declaration of transformation that never seems to transform itself, of change that never seems to change, a static, dwindling, tired repetition of platitudes in the midst of a planetary technodevelopmental disruption (corporate precarization, catastrophic climate change, rampant militarization, emerging peer-to-peer network formations, emerging Green movements demanding decentralized nontoxic sustainable appropriate techs, emerging non-normativizing anti-eugenicist movements to democratize medical research, development, and provision, etc.) to which static Sub(cult)ural Superlativities seem to respond less and less in substance.
Superlative Sub(cult)ural Technocentrisms are too much like straightforward faiths, with a particular heaven in mind and a few Churches on hand with marginal memberships. And, as I keep saying, as esthetic and moral formations faithful lifeways seem to me perfectly unobjectionable even when they are not my own cup of tea. What is troubling about Superlativity is that its faithful seem curiously cocksure that they are champions of science rather than True Believers in the first place, which makes them ill-content to confine themselves to their proper sphere, that is, offering up to their memberships the moral satisfactions of intimate legibility and belonging as well as esthetic pathways for personal projects of perfection. They fancy themselves instead, via reductionism, to be making instrumental claims that solicit scientific consensus, or, via moralizing pan-ideology, to be making ethical claims that solicit universal assent. (For a sketch of my sense of the different modes of rationality in play here see my Technoethical Pluralism.)
This sort of delusion is common enough in variously faithful people (especially in the fundamentalist modes of belief that Sub(cult)ural Futurisms seem so often to typify) and would hardly qualify as particularly harmful or worthy of extended attention given the incredible marginality of these formations -- an abiding marginality that has remained unchanged for decades, after all. But Superlative Technology Discourses seem to me to have an enormous and disproportionately influential media megaphone deriving -- on the one hand -- from their symptomatic relation to much broader fears/fantasies of agency occasioned by contemporary technodevelopmental churn and -- on the other hand -- from their rhetorical congeniality to neoliberal assumptions that serve incumbent corporate-military interests. That is why I devote so much of my own attention to their exposure and analysis.
Friday, October 19, 2007
Superlativity Is an Affront to Common Sense
"To be natural is such a difficult pose to keep up."--Oscar Wilde
Upgraded and adapted from Comments:
Giulio wrote: I wish to... focus on concrete things... [Y]ou are... focusing on abstract issues characterized by endless questioning of others' hidden motivations and "identity". As I see things, I am focused on outcomes, and you are focused on identity.
This comment would be, frankly, flabbergasting if I weren't so completely used to hearing it at this point from my Superlatively Technocentric critics.
Let me ask you a few questions: Are you denying you have unconscious motives that can sometimes be more intelligible to others than to yourself? Are you denying that the normative pressures exerted by social formations impact conduct and perception and can look different depending on whether one is inside or outside that formation? Are you denying that all progress, including technoscientific progress, is shaped by social, cultural, and political factors -- or do you believe that progress is simply a matter of the socially indifferent accumulation of useful techniques and technical devices? Are you denying that the predominant vocabularies through which people talk about "progress" and "development" are shaped by the demands of corporate and military competitiveness, and that this might matter to people concerned with matters of sustainability, democracy, and social justice? Are you denying that argument actually consists of more than simply the delineation of logical propositions and the relations of entailment that obtain between them -- and that the force of argument will depend on metaphor, framing, schematism, and on the citation of customary topics, tropes, generic conventions, and so on? Do you deny that cults exist, do you deny that they can be studied in ways that generate useful observations, and that these observations can illuminate authoritarian organizational structures beyond the marginal cult form: fundamentalist religiosity, for example, charismatic art movements, marginal subcultural identity politics, populist politics with mass-mediated celebrity leaders, and so on? Do you deny that some people exhibit what is popularly described as True Belief, largely insulated from criticism and driven by defensive forms of identification? Do you deny that there is a difference between forms of identity politics and other modes of politics, or do you simply deny that these differences make a difference?
Do you deny that stress and fear of technodevelopmental change can activate irrational passions at the level of personal and mass psychology and that it actually pays to attend to these effects?
These are all perfectly concrete questions as far as I can tell, to all of which I devote considerable attention here at Amor Mundi, and they are none of them more "abstract" than most of the scientific and "technical" claims that exercise the attention of Superlatively Technocentric people -- they are certainly no more abstract than the sociological claims that preoccupy some who would pooh-pooh cultural readings like mine as "literary" or "abstruse," and -- honestly -- these concerns are easily quite as "concrete," you can be sure, as "hardboiled" predictions about the arrival any time soon of Robot Overlords or digital immortality for embodied human people.
These concerns I've listed aren't the only things in the world that repay our attention, certainly, but to dismiss these sorts of issues as some of my interlocutors sometimes seem to do is really just too obtuse for words. It's hard even to know how to respond to such attitudes sometimes. There is something painfully insufferable about the smug dismissals of "abstractness" in favor of "concreteness" one hears from facile reductionists. And there is something painfully self-defeatingly anti-intellectual about the incessant attribution to whole modalities of intelligent expressivity of a disqualifying "effeteness," "eliteness," "muddle-mindedness," "abstruseness," all in a self-promotional effort to market one's own reductionism as paragon. (To the inevitable idiotic response that I am doing the same thing to the scientifically-minded: I actually deeply respect and affirm scientific rationality, but know that to force it to apply everywhere is as distortive of its dignity and worth as to deny it application everywhere.) I just can't tell you how incessant testaments to these attitudes get things really off on the wrong foot for somebody like me.
Michael Anissimov, for example, offers up this helpful statement in the opening gambit of a response to me elsewhere in the Moot: "A stumbling point is the sometimes unnecessary verbosity of your writing."
What am I supposed to say to that? Hey, fuck you? Honestly!
"Unnecessary verbosity?" Tell me, Michael, are all your words precisely the necessary ones? Necessary to whom, for what purpose? What if I chose some of my words because they delight me? Because they strike me as funny, because I like the sound of them? You got a problem with that?
What kind of self-image drives the choice to frame a discussion with moves like that in the first place? If I may offer up one of my questionable "armchair psychologizing" readings, I'll admit that there are times when I find myself getting the eerie feeling that advocates for AI try to write like Spock or Colossus would (or think they are so doing, since such projects inevitably fail: conceptual language is always metaphorical, argumentative moves are always as figurative as literal) as some kind of amateur performative tribute to the post-biological AI they believe in but which never seems to arrive on the scene as such. Be the change you want to see in the world... only now with robots!
This isn't a clinical diagnosis, of course, this isn't even an accusation. I know this seems to be hard for you guys to grasp, but I don't really believe it's "true" that you are performatively compensating for the interminably failed arrival of AI by acting out in this way. I obviously don't have remotely enough in the way of data to affirm this "theory" as a warranted belief or anything. Treat it as you would surely treat comparable glib conjectures offered up in actual social conversation, as a kind of placeholder for the real perplexity I often find myself feeling when confronted with the curious claims and moves of technocentric folks. Clearly, something rather odd is afoot that makes you guys talk and act this way. Who knows what finally it's all about? It certainly seems to have a measure of defensiveness and projection in it. Treat my proposal of one kind of provisional explanation that seems to hit the pitch and scale of the phenomenon at least as an utterance of the form: What is up with you guys?
Be that as it may, just to be clear: I do like writing words that are "unnecessary." There are plenty of things I like doing that are unnecessary. I am unafraid of the dimensions of experience and expression that are not governed only by necessity. I can cope quite well with such necessities as I must, but I don't want to live in the (to me) gray world where everything gets framed in those terms. That's a dull, ugly world, a robot world, as far as I can tell. Stop crowing about how not verbose, not abstract, not esthetic you guys are if you want to impress me or the people who likely come here for pleasure or provocation. It's not at all a winning strategy in a place like this.
Giulio continues: I am telling you... forget that I am a God Robot Cultist who engages in Superlative Technology Discourse and believes that the Eschaton will upload him to a Techno-Heaven, and let's join forces to achieve the common objectives.
Are you shitting me? I am quite happy to ignore the Robot Cult thing if we are in conversation about whether it's better to wipe your ass with two-ply or three-ply, but if we are talking about documenting and shaping the ongoing discursive formations through which technodevelopmental social struggle is articulated, then I'm not going to ignore the Robot Cult thing. It's relevant. It's WAY relevant. It's epically WAY relevant.
Giulio continues: "I think different identities should not matter much as long as there is agreement on outcomes."
Agreement on outcomes? I want to democratize technodevelopmental social struggle, I don't aspire to prostheticized Omni-Predicated transcendence of finitude, incarnation, and contested plurality. Dude, five thousand mostly white North Atlantic sf/popular futurology enthusiasts joined a social club they enjoy (which is perfectly fine and possibly charming) and decided they were a "movement" (which is rather silly but still mostly fine).
Now, when you sometimes seem to want to compare your own discomfort as a "transhumanist-identified person" because people are inquiring into the politics of your organizational structure, the leading metaphors in your canonical literature, and the peculiar entailments of your arguments with the suffering of persecuted ethnic, religious, gender minorities (minorities consisting of millions of people with long histories of documented abuses leaving palpable legacies generating irrational stigmas with which we all must cope as people who share a diverse world) it is, to be blunt, fairly flabbergasting.
When, on the other hand, you try to pretend that your status as a "transhumanist-identified person" is a triviality like eye color or preferring boxers to briefs, this is nearly as flabbergasting: inasmuch as if it were true you wouldn't be going on about it so endlessly, even in the face of my ongoing critique, but also because it's so palpably like saying a person's actually-affirmed identification as a Scientologist, a Mormon, a Freemason, or a Republican is something that one should pay no attention to in matters of urgent concern directly shaped by that affiliation and in ways that tend to subvert one's own ends. Who in the hell would ever think like that? What are you talking about?
"[B]eing or not being a cuckold has nothing to do with politics. It is something else, that belongs to a separate and non overlapping part of life. Same with sexual, religious, and football team preferences: these things have NOTHING to do with politics."
Well, actually, all of these things are enormously imbricated in politics. But bracketing all that, just sticking to the specifically wrong thing you are saying instead of all the many more generally wrong things you are saying here: If you happen to be a cuckold I agree that I couldn't care less about that if we are arguing about a current technoscientific issue (unless attitudes toward infidelity come into that issue in some way, obviously). If you see current technoscientific issues as stepping stones along a path that eventuates in the transcendentalizing arrival at Superlative Omni-predicated outcomes in which you have invested deep personal faith, then it would be almost unthinkably stupid for me not to care about how these curious attitudes of yours might derange your sense of the stakes at hand, the significance of the issues, your estimation of likely developments in the near to middle term, the terms, frames, and formulations that will appeal to you and have intuitive force, and so on.
Giulio continues: "Even worse tactically, because you wish to disqualify many people who would be in _your_ camp when concrete and practical matters are discussed."
I don't have the power to "disqualify" people. I say what I think is right, what I think is wrong, what I think is possible, what I think is important, and what I think is ridiculous. And I won't stop.
In general, I am not looking for a "camp" to find my way to in any case, and while I am a big believer in political organizing I understand p2p politics well enough to know how foolish pan-movements, party machines, and camp mentalities should seem to anybody who claims "tactical" wisdom in contemporary politics.