Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, October 01, 2007

Superlativity as Proxy

There have been a number of really rich engagements with the recent Superlativity posts (much of the credit goes to Michael Anissimov for bringing the critique to the attention of his readers, few of whom can have been too sympathetic to my argumentative line after all, and taking it seriously himself), and I wanted to bring this one from Richard Jones up to the surface, since it occurs rather deeply into comments:
For some more discussion of the technical issues that, to say the least, make the success of the MNT project far from a certainty, take a look at:
http://www.softmachines.org/wordpress/?p=175

I might say that, although as a physicist I have engaged at a technical level with the proponents of the MNT vision for a few years, I'm increasingly concluding that the kind of cultural and ideological perspective that people like Dale are bringing to this discussion is perhaps more valuable and pertinent than the technical discussion. Or perhaps, to state this in a stronger way, discussions that at first seem to be technical in nature in actuality are proxies for profound ideological disagreements.

What I would highlight here is the concluding statement: "[D]iscussions that at first seem to be technical in nature in actuality are proxies for profound ideological disagreements." I think this is a crucial insight, one that most Superlative Technocentrics easily grasp when it is applied to, say, the rhetorical work of "hard sf" literature, and which most Superlative Technocentrics will see the sense of when it is applied to the presumably "technical" risk-discourse of bioconservatives (as I discuss here and here, for example), but seem to have considerable difficulty with once it gets applied to their own formulations.

My Superlative Technology Discourse Critique has been preoccupied with three discursive transcendentalizations of technodevelopmental quandaries of the emerging and proximately upcoming technoscientific terrain: First, the transcendentalization of healthcare discourse in an emerging era of consensual non-normative modification medicine into so-called "Technological Immortalism." Second, the transcendentalization of security discourse in an emerging era of networked malware into so-called "Singularitarian" preoccupations with post-biological superintelligent Robot Overlords. Third, the transcendentalization of political discourse, in an era already suspicious that scarcity is largely a political artifact, into the circumvention of the impasse of stakeholder politics through a superabundance to be enabled by cheap programmable molecular manufacturing, a superabundance I jokingly summarize as "Nanosanta."

This picture is complicated somewhat by the fact that the Singularitarian mode of Superlativity is picking up the torch from one of the classic Superlative Discourses of the twentieth century: the Strong Program of AI, from which it differs in important respects but on which it nonetheless depends, not only conceptually, but culturally, figuratively, and so on.

Another way of phrasing these connections is to survey the modes of Superlative Technocentricity as relying on questionable inter-implicated notions of post-embodied consciousness, post-embodied and "hence" post-mortal life, post-historical singularity, post-political superabundance, post-democratic technocracy.

I am concerned quite urgently with the emerging and proximately upcoming technodevelopmental quandaries that seem to me to be irrationally hyperbolized and transcendentalized in their Superlative formulations in this way, and it is this derangement of sense where sense is needed (not to mention what I have discussed as the special appeal of these Superlative formulations from the perspective of incumbent interests), this activation of irrational passions where democratic deliberation is demanded, that provokes me to return to this topic so insistently.

It's also true that I am quite simply skeptical of the more hyperbolic predictive claims made by Superlative Technocentrics, especially claims about the developmental timescales that drive their expectations. I am suspicious of how uncaveated so many of their claims tend to be, contrary to those of most scientists with whom I am familiar. I am struck by what seems to be a widespread tendency to overestimate our contemporary theoretical grasp of basic concepts like intelligence in general, when they go on to make glib predictions about post-biological superintelligence. I discern a worrying tendency among them to underestimate the extreme bumpiness we should all know to expect by now along the developmental pathways from which the relevant technical capacities could arrive, not to mention a correlated tendency, just as worrying, to assume that these technologies, upon arrival, would function more smoothly than technologies almost ever do. I am puzzled by the amount of argumentative weight that tends to be placed in Superlative Technology Discourses on loose analogies the aptness of which often seems questionable indeed: to find in the embodied human mind a "reality-referent" for a post-biological mind, let alone a superintelligent one, to find in biological molecules a "reality-referent" for dry programmable general purpose nanofactories, to find in a mansion repaired and maintained for centuries a "reality-referent" for rejuvenation medicine, and so on. Finally, I am obviously struck by the exhibition in these discourses of what looks to me like a rather stark obliviousness about the extent to which what we tend to call "technological development" is articulated in fact not just by the autonomous accumulation of technical accomplishments but by social, cultural, and political factors as well, in consequence of which they rarely take these factors adequately into account at all.
In light of all this it can certainly be a bit trying to hear the barrage of self-congratulatory protestations of superior scientificity that can sometimes greet even informed, respectful, seriously engaged critiques.

All that said, however skeptical I may be of the strongest technical claims arriving out of these discourses, I must admit that of these three modes of Superlative Technocentricity, the mode I describe as "Nanosanta" receives the most political and least technical of my critiques. That's one of the reasons Richard's technical intervention is so welcome and useful (and this would remain true even if, ultimately, I were to be unconvinced by it).

Whereas the claims of Singularitarians seem to me probably too incoherent conceptually and too compromised politically for redemption in anything like their present form, and the same goes, I'm afraid, for the essentially religious claims of the Technological Immortalists (claims that should, in my view, you will recall, be carefully disarticulated from claims about healthcare in an emerging scene of non-normative genetic, prosthetic, and cognitive modification medicine, including medicine with likely effects on healthy human lifespan), it has seemed to me that many of the claims made about molecular nanotechnology were on less shaky ground, and many of its advocates less prone (though not immune) to the worst pathologies of Superlativity.

I do think that sometimes "nanotechnology" comes to name always only a ferocious identification with the arrival of a particular idealized outcome -- like the appearance on the scene of a programmable general-purpose nanofactory -- rather than encompassing the wider range of nanoscale interventions that are sure to be taken up long before any such outcome, much of which is likely to be called "biotechnology," in fact, much of which is likely to concern nanoscale sensors and material toxicities that fail to pass muster as nanotechnology in a more restrictive characterization, and so on. This seems to me to be properly understood as a Superlative derangement of policy discourse in a way that is analogous to my critiques of the derangement of quandaries about networked malware into preoccupations with Robot Gods or the derangement of quandaries about consensual nonnormative healthcare provision into preoccupations with Technological Immortalism.

But the focus of my own Superlative Critique where molecular manufacturing is concerned has always had less to do with these sorts of technical and predictive worries than with concerns about the anti-democratizing or post-politicizing gesture that seems to me to invigorate quite a bit of Superlativity at the Nanoscale.

One cannot point out too many times, for example, that neither "nanotechnology" nor "automation" will one day magically cut or circumvent the basic impasse that inaugurates politics: namely, that we share a finite world with an ineradicable diversity of peers with different stakes, different aspirations, different capacities on whom we depend for our flourishing, from whom we can count on betrayal, misunderstanding, and endless frustration, and with whom we want to be reconciled.

The simple truth is that abundance is already here, already within our grasp (just like war is over... if you want it), and so, it is the defense of injustice in the name of championing parochial prosperity that is the threat to the arrival of the available abundance worth having.

If new cheap robust sustainable materials modified at the nanoscale or new cheap robust sustainable products manufactured via nanoscale replication in whatever construal actually were to arrive on the scene, these would contribute to general welfare and prosperity only if that is the value that defines the societies in which these technodevelopmental outcomes made their appearance. Otherwise, they absolutely would not.

If one wants to arrive at something like the Superlative outcome of "Nanosantalogical" superabundance, what one should be fighting for is to protect and extend democracy, to implement steeply progressive taxation, to broaden welfare as widely as possible, and to make software instructions available for free (else they certainly won't be and then Nanosanta will be sure to open his bag only for the rich). If one wants to arrive at something like the Old School Superlative outcome of universal automation or Robo-Abundance, what one should be fighting for is to implement a basic income guarantee, otherwise automation (including much that gets called "outsourcing" and "crowdsourcing" in contemporary parlance) will simply function as further wealth concentration for incumbents. Needless to say, I worry that no small amount of the post-political handwaving of the Nanosantalogical mode of Superlativity derives from a prior commitment to neoliberal assumptions, and functions as a proxy (to return to this post's initial topic) precisely for a worldview that would not in fact be displeased at all with the prospect of such wealth concentration for incumbents or with a stingy Nanosanta with a bag full of toys only for already rich girls and boys. These are the discursive derangements that attract my primary interest when talk of MNT goes Superlative.

7 comments:

Anonymous said...

"All that said, however skeptical I may be of the strongest technical claims arriving out of these discourses, I must admit that of these three modes of Superlative Technocentricity, the mode I describe as "Nanosanta" receives the most political and least technical of my critiques."

They also seem to be your best arguments. The cumulative effects of many institutional failures and problems of education can prevent the use of even quite simple and effective technologies, e.g. the limited use of oral rehydration therapy to prevent deaths from diarrhea in many African countries. It's true that without some level of political will, resources will not be redistributed.

I agree that as rapidly advancing automation (including automated manufacturing) reduces the value of human labor for many individuals measures such as a universal basic income will be essential to ensure broad prosperity.

I would also say that Drexlerian nanotechnology is not radically discontinuous from other means of production. For instance, highly automated machinery for constructing solar cells and components for construction machines could provide very large amounts of energy cheaply without any need for 'assemblers.' The computer industry has technologies in the pipeline to provide great increases in computing power per dollar for a number of years.

On the other hand, in your rhetoric you seem very resistant to the idea that developing a technology can change the amount of political will required to implement particular policies. Brian Wang frequently makes the point that just maintaining current levels of welfare spending or foreign aid as a percentage of GDP will produce enormous benefits if economies grow more rapidly. Implementing tough carbon restrictions would be much easier politically (note the failure of European countries to follow through on their Kyoto commitments) if the economic costs of compliance were lower.

I understand that in rhetoric you wish to focus attention on the role of political action so as to encourage more of it, but it often seems that you exaggerate the likely impact of your audience 'caring more.' Some religious conservatives say that state welfare spending is inappropriate because we should meet the basic needs of the poor through private charity. Many progressives would say that it is unrealistic to expect that level of enthusiasm for altruism, and suggest using the instrument of the state to convert limited altruism into a vote for tax-financed redistribution (a vote that probably won't affect the outcome, so that selfish motivations are diluted more than altruistic ones). At the next level, sometimes techniques to leverage limited political will are important, including devoting advocacy efforts to the development of technology that reduces the cost of good policies.

With regard to life extension advocacy, I would view this as analogous to institutional failures with respect to public health. While there are strong incentives for the development of new drugs to treat particular diseases (many of them created by identity groups for those who have suffered a particular disease or had a family member do so), investments in researching and reducing obesity or smoking (major determinants of health) faced substantial barriers from entrenched interests (the food and cigarette industries) and from much of the population (smokers, Americans concerned with autonomy and control over one's body, fast food fans, etc). Action to mobilize societal resources to investigate and eventually apply methods of dealing with aging can thus be quite beneficial. The low quantity of resources historically invested in studying the mechanisms of caloric restriction (which has now produced several actually-existing biotechnology firms seeking to influence related biological pathways, such as Elixir and Sirtris) shows the value of such activity.

As a political project, encouraging the allocation of resources to aging research commensurate with the expected benefits (which are very large for interventions with even a fraction of the efficacy of the numerous techniques that dramatically extend animal lifespans) seems worthwhile. What would you criticize about 'technological immortalists' and their enthusiasms? The M-Prize? The Longevity Dividend? Private or public funding for Aubrey's SENS research agenda? Discussion of 'longevity escape velocity' or curing aging as an ultimate goal? The idea that a baby born today may live to 1000, given likely intermediate advances over this century? Arguments that people over 40 may do so?

The argument that we should explicitly aim to eliminate aging as a cause of death, as we aim to eliminate cancer or HIV, despite the substantial intractability of those problems, seems strong.

I won't go into AI in any detail, as nothing new or useful seems to be emerging from that discussion. However, I would like to know whether some of the objections you raise to extreme technologies are indeed "ideological differences" related to the ethical treatment of high-impact improbable events.

Michael Anissimov said...

Dale, by the way, what do you think of Nick Bostrom? He must be one of the most high-profile Singularitarians. Is he susceptible to the charges you levy against other Singularitarians?

I agree that some of the analogies used to argue the feasibility of extreme life extension etc. are not complete. They are just meant to get people's minds working. In the case of SENS, for instance, Ending Aging recently came out, which goes into much greater detail than any simple analogy.

I find it amazing you are skeptical about the prospect of any smarter-than-human mind! This is implied in this post. Arguing that it could take us hundreds of years to create a superintelligent mind is much more credible than saying one cannot exist.

Dale Carrico said...

Utilitarian writes:

in your rhetoric you seem very resistant to the idea that developing a technology can change the amount of political will required to implement particular policies

Quite to the contrary, I have regularly pointed out that technodevelopmental vicissitudes destabilize the terrain (in ways that are non-negligibly unpredictable) in ways people must understand if they would better organize and opportunistically articulate these vicissitudes to facilitate the outcomes they desire. I do, however, resist the model of change wherein a marginal sub(cult)ure seeks to implement unilaterally an idealized and particular technodevelopmental outcome with which they presently identify, a technodevelopmental outcome between now and the attainment of which there are a series of intermediary stages all of which involve discoveries and distributional questions of grave import to stakeholders but all of which are discounted except as programmatic stepping stones along a path toward a Superlative outcome onto which all or most imaginative investment is directed. I think this is, to say the least, impractical, especially when the superlative outcome is invested with transcendentalizing significance, with all that such significance entails. But more than impractical, for me as a technoprogressive champion of democracy, this is the wrong thing to do: one must democratize technodevelopmental deliberation to ensure that the distribution of the costs, risks, and benefits of technoscientific change better reflects the expressed needs, aspirations, and consent of the diversity of stakeholders to that change, and in an ongoing way.

I understand that in rhetoric you wish to focus attention on the role of political action so as to encourage more of it

It would be better to understand that as a rhetorician I seem to be aware of the role of political action in these matters, as well as the insensitivity, disavowal, or underestimation of this role in some technodevelopmental discourses that would imagine or market themselves as "technical" or supremely "scientific." I seek to encourage people to better reflect these realities in their accounts. If this emphasis empowers people to act, well, as a technoprogressive champion of democracy, of course I think that is all to the good.

What would you criticize about 'technological immortalists' and their enthusiasms?

Immortality is an essentially religious notion, and it is conceptually incoherent. Discussions of immortality activate irrational passions that do no good for those who would consensualize non-normative modification medicine, including emerging techniques that might increase healthy human lifespan. I have said this many times, even recently.

The M-Prize?

I think it is a good idea, and I think it would be better were it disarticulated from foolish Cult claims about living forever and the denial of "death."

The Longevity Dividend?

I'm on record defending much of the rhetoric of the Longevity Dividend, especially over the foolish self-marginalizing rhetoric of Technological Immortalism. I've even taught the Longevity Dividend in the context of my bioethical rhetoric course.

Private or public funding for Aubrey's SENS research agenda?

SENS should be disarticulated from Immortalism. SENS is more like nine claims -- [1] that we have arrived at a level of knowledge demanding a shift in our attitude toward biological processes of aging, from processes that scientists should understand in their complexity to processes in which we can intervene to stop them, reverse them, ameliorate their effects therapeutically; [2] that what we call aging may soon be regarded as a folk designation for seven specific inter-implicated forms of damage; [3]-[9] and then a therapeutic recommendation for each of these forms of damage. Only in light of the Superlative investment in a muzzy marginal notion of immortality (whatever the impact on the narrative coherence of selfhood, whatever the ongoing risks of disease, violence, and accident, whatever the abiding irrationality of the denial of finitude for which mortality has long been a shorthand, and so on) would all these claims -- with their varying levels of aptness and interest -- get corralled together into a single Superlative outcome.

What do I think of Discussion of 'longevity escape velocity' or curing aging as an ultimate goal?

I think it's a dumb distraction, hyperbolizing, pathologizing, oversimplifying, and better suited for cultists.

I would like to know whether some of the objections you raise to extreme technologies are indeed "ideological differences" related to the ethical treatment of high-impact improbable events.

Once again, my objections are to certain technological discourses, not technologies. The technologies do not exist. I am all for "high-impact improbable events" attracting the attention of democratic deliberation, but not for the unilateral imposition of such topics by self-appointed elites who identify with Superlative outcomes and who seek to implement them whatever the expressed concerns of the actually-existing diversity of stakeholders to technodevelopmental social struggle. The point isn't "high-impact improbable events" -- this is another distraction reflecting the skewed priorities of Superlatives: After all, I can claim any made up bullshit deserves our urgent immediate concern just by endlessly ratcheting up the imaginary body counts associated with it, meanwhile there are actual problems that demand urgent redress in the actual world, attested to by the express aspirations and testaments of actually existing stakeholders with whom we share the world. My "ideological" perspective is that of a technoprogressive champion of democracy. You are quite right to point out that this perspective informs everything I say. It does.

Thanks for your careful and considerate and serious engagement.

Dale Carrico said...

Michael wonders:

what do you think of Nick Bostrom?

I think he is a nice smart guy who has interesting things to say on a wide variety of topics, many of which interest me personally very much, and some of which I agree with more than others, and almost all of which I find provocative.

He must be one of the most high-profile Singularitarians.

If he has officially joined the Robot Cult, I daresay this is probably a bad career move for him for the longer term. (Insert smiley for irony impaired readership.)

Ending Aging recently came out, which goes into much greater detail than any simple analogy.

By the time I saw you at the Singularity Summit I had already read the whole book, Michael. My critique of Technological Immortalist strands of Superlativity stands, and if I find the time for it I'll try to review the text in detail from a rhetorical standpoint.

I find it amazing you are skeptical about the prospect of any smarter-than-human mind! This is implied in this post.

Show this implication, derive the syllogism. I wrote that you cannot treat the intelligent embodied human mind as a "reality-referent" on analogy with which one claims to find support for the plausibility of a superintelligent post-biological consciousness. That's a very different point in my book. I quite understand that you would prefer that I reframe this as a claim about longer versus shorter technodevelopmental timescales, since that would finesse the basic conceptual quandaries that bedevil you, among them an ongoing handwaving away of the specifically embodied incarnation of human intelligence as we know it, the reductive typicality of the "intelligence" assumed in so much AI discourse in the face of the rich diversity of intelligences exhibited by humans and other animals in fact, and the deep difficulties of inter-translatability should one want to go on to make the Superlative claims about immortality via "uploading" that so often come hard on the heels of talk of post-biological superintelligence, and so on.

Anonymous said...

"I do, however, resist the model of change wherein a marginal sub(cult)ure seeks to implement unilaterally an idealized and particular technodevelopmental outcome with which they presently identify"
In the abstract I agree (or at least I agree with one meaning that I read in this statement), but it's not clear to me that this is a major model among transhumanists subculture members interested in nanotechnology. Eric Drexler has spent many years seeking to engage democratic governments, and played an important role in causing the National Nanotechnology Initiative, an open, public government effort to develop nanotechnologies. The people at Foresight and CRN seem to be seeking to raise safety concerns and possible approaches for democracies to consider. Researchers like Merkle and Freitas publish real chemistry in peer-reviewed articles, and interesting scientifically informed analyses of possible applications that are available to all. Stretching enormously, I suppose one could accuse von Ehr and Zyvex of seeking to develop 'nanofactory' -type technologies themselves, but if this objection is serious, then they are not 'marginal.'

The objection seems to me to be different from and less compelling than the 'Nanosanta' critique of neglecting redistribution and talk of a 'post-scarcity' economy (considering limited reserves of land, human labor, intellectual goods, ability of the atmosphere to absorb heat, rare elements, etc, plus the possibility of extremely rapid population growth).

It might be helpful if you could specify some substantive actions that you would prefer that the actors mentioned above substitute for their current activities, and some reason why those particular activities would have large relative benefits (taking into account the comparative advantage of the different parties).

"Immortality is an essentially religious notion, and it is conceptually incoherent."
I think that a lifespan of (Graham's number) years has a probability that is infinitesimally small given current physics (the possibility of astronomically unlikely thermodynamic and quantum fluctuations prevents me from saying zero), but there is nothing conceptually incoherent about it. If the laws of physics were different there could indeed be beings with stable identities and infinite existences.

On accidents, violence, infectious disease, catastrophe, access to resources (individually and generally in the universe as the 2nd law of thermodynamics does its work), I agree that these things make the "Immortality Institute" (www.imminst.org) poorly-titled, but correcting the bodily processes that increase mortality and disease risk over a lifetime, i.e. "Ending Aging," does seem to be a problem that could ultimately be solved by humans.

" What do you think of...curing aging as an ultimate goal?

I think it's a dumb distraction, hyperbolizing, pathologizing, oversimplifying, and better suited for cultists."
I can understand this as a sort of Straussian argument about public influence and rhetoric (and I know that some very smart researchers want to eliminate death from aging but publicly disavow this to avoid scaring off funding, so I take this argument seriously), but even if one endorsed that (anti-democratic?) view it would be irrational to exclude a substantial portion of the potential benefits of aging research from calculation. It seems to me that this would be like assuming that cancer research can never reduce cancer rates per annum by more than 70%, and under-investing in the research for that reason.

"Discussions of immortality activate irrational passions that do no good for those who would consensualize non-normative modification medicine, including emerging techniques that might increase healthy human lifespan."

The key phrase here is "do no good." Clearly, you're willing to activate irrational passions with intentionally inflammatory rhetoric, and the mere assertion that the Straussians are correct doesn't greatly help observers such as myself in evaluating which rhetorical strategy will on net attract more resources to the field (Aubrey does seem to have increased media interest in the field, enabled triangulation by the Longevity Dividend or Sirtris folks, brought in donors for the M-Prize, etc). I am open to arguments on rhetorical strategy from a specialist in the field, but without hearing your substantive arguments and an account of the connection between rhetoric and concrete actions by funding agencies and researchers I can't be confident of their strength.

"I have said this many times, even recently."
However, you don't generally distinguish between 'immortality' and preventing biological aging, or 1000 year healthspans. Aubrey de Grey, for instance, is generally careful not to use words like immortality, and his writing is heavily laden with caveats, e.g. discussion of accidents and violence. If your critique doesn't apply to Aubrey, the most public face of life extension, its more sweeping claims seem rather problematic.

"After all I can claim any made up bullshit deserves our urgent immediate concern just by endlessly ratcheting up the imaginary body counts associated with it"
Actually, this isn't so. Because the potential number of future people is much greater than the size of the current generation, risks of human extinction do dominate our consideration, but this just shifts our concern to probability of extinction (one cause of extinction is roughly as bad as the next, save for the possibility of new intelligence arising if the biosphere is left intact). Known threats such as asteroid/comet impacts, where preventive efforts are not yet comprehensive, thus prevent exceedingly unlikely threats from dominating our attention.

Anonymous said...

Pardon a few stray sentences stemming from speedy posting, SVP.

Anonymous said...

Also, when I request specifics on what you would prefer Merkle or Phoenix or Drexler spend their time on, I don't mean to dismiss the general answers of "basic income, open IP, etc," just to get particular accounting for *marginal* impact given the number of people working on each issue, comparative advantage, and the contributions an individual can make.