Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, June 11, 2004

Advisor, Advise Thyself!

My little tongue-in-cheek "advisory" bumper stickers for technology advocates in my last post have provoked some interesting comments that inspire me to respond and to amplify them a bit.

First, George warns me that Canada is not all that it may seem in my liberal dream of Canada, here in the America (unless of course California secedes -- anyone? anyone?) of the Killer Clown Administration. I follow George's excellent and often provocative blog Sentient Developments for (among other things) news of Canada, especially where the politics of technology development and civil liberties are concerned, so I know what he is talking about here. Anyway, I intended in part to evoke with the phrase "California Ideology" the very excellent essay "The Californian Ideology," written by Richard Barbrook and Andy Cameron, which diagnoses the conjunction of civil and negative (market fundamentalist) libertarianism among technophiles in a way matched in its perspicacity only by Paulina Borsook's excellent (and funnier) book Cyberselfish (how odd that their last names are so similar...). Against that ideology I meant to counterpose an "idea" of Canadian reasonableness and civic-mindedness that of course does not always play out on the ground, but which I hope will eventually prevail everywhere.

Now, Michael, in his very interesting and serious intervention, writes quite a number of things that deserve comment. I'll confine myself for now to just a few. He states at the outset: "I deeply feel your conservative approach misses the severe urgency of preparing for the imminent near-future arrival of what you might call 'superlative technologies'."

First, I will admit that there is a rich pleasure in being called "conservative" that I only get to experience in my conversations with radical technophiles, inasmuch as my atheistical antiwar queer ethical-vegetarian feminist environmentalist social-to-radical democratic world-federalist leanings disqualify me from the privilege in almost any other context I can think of!

But it's true, to the extent that technological development is the last remaining historical force at large in the world to which we might justly assign the label "revolutionary," I think that those of us who would grapple to articulate that force to achieve outcomes that will benefit us all can well use both the enthusiasm and hopefulness of progressive temperaments and the caution of more conservative temperaments.

Michael continues: "In the past few weeks I've been researching nanotechnology in depth. By any measure, this technology alone is 'superlative' relative to our present-day levels of technology."

Actually, I do not agree with this. "Superlative" technology is a term I use to describe technological developments that can only eventuate from long developmental processes (even if developmental acceleration makes "long" a term we can quibble about) each stage of which will impose its own set of technical, theoretical, political, regulatory quandaries.

My point is that it is almost never useful to focus on a superlative outcome to the cost of a focus on the more proximate developmental stages on which it depends, especially when these would appear to be particularly fraught (as they are with most of the technologies that interest radical technophiles -- biotechnology, genetic medicine, automation, nanoscale manufacturing, etc.).

I will expand on this point about "superlative states" in my next entry to Amor Mundi. For now, I want to caution strongly against a watering down of this point as happens when you say next -- "Heck, nukes are 'superlative' relative to throwing spears, aren't they? I realized that I could have easily replaced all mentions of Singularity and AI in our earlier arguments with references to nanotech and assemblers, and much of the fundamental shock factor would remain."

My point is not at all about a failure of imagination or nerve that might occur in the contemplation of certain kinds of technological transformations of our capacities. The argument is about repudiating a purely technocratic understanding of what technological development amounts to. For development to be progress requires politics and policy, both of which require a focus on developmental stage-management rather than any pining after transcendentalizing superlative states.

Michael continues: "For example, you link the Center for Responsible Nanotechnology from your site, but I have a sneaking suspicion (perhaps unfounded) that you have not read that site in detail." On the contrary, I believe CRN to be lodging itself more explicitly in the kind of developmental politics I am talking about than almost any other technology advocacy organization -- the possible exceptions being the Center for Cognitive Liberty and Ethics, the Converging Technologies Bar Association, and the Creative Commons (all of which I also link to).

"To make a long story short, what Mike and Chris are arguing is that, within the coming decade or two, either 1) billions of human beings could be dead, or 2) our Earth could be in a state of 'near-utopia'. (Mike's interview with Dr. J includes that exact phrasing.)" I believe that in making "the long story short" in just this way, Michael has removed everything that is useful in what CRN says and retained only that which inspires panic and passivity, to the cost of what they are arguing about and advocating for.

"Chris and Mike's arguments ultimately emerge from cut-and-dry number crunching and theorizing in the domains of chemistry, physics, and so on. But their arguments are shocking and horribly disturbing." But what is extraordinary about CRN is the extent to which their arguments emerge just as "ultimately" from the very sensible positions they take on questions of regulation, oversight, international relations, and the politics of war and development to which they conjoin their number-crunching. Why highlight just the one over the other dimension in their analyses?

Michael goes on: "I strongly suggest you read the follow[ing] 'Top 30 Essential Studies in Nanotechnology'." I second his recommendation -- and am in a position to do so because I have already read them myself. (Please, let us not drift into the common technocratic technophilic mistake of imagining that those who call attention to other dimensions than engineering in technological development do so because they know less than the technophiles do about the relevant science. While this may be true to a frustrating extent, there will be times when we do so because we know about more than just the science.)

"The 'rational state of mind' to be in with respect to nanotechnology, at this moment, is one of panic," Michael proposes. But I think this is ultimately an elitist and damaging assumption. I think there are reasonable ways to proceed in this moment with respect to molecular nanotechnology (and many of them, as it happens, are being advocated by the Center for Responsible Nanotechnology), and that these reasonable measures can be communicated intelligibly to mainstream constituencies and accountable authorities and undertaken to the benefit of all.

"I wish that saying 'less inevitability more choice' would make it so," Michael goes on to say. Remember, my slogan was directed to the technophiles here. However dire one's diagnoses, framing development in terms of inevitability will nudge you into the mindset of panic or passivity, and either attitude can hinder your ability to respond effectively to your own concerns, come what may.

He continues: "But nanotech will inevitably be developed this decade or the next despite any wishes to the contrary. Human opinions cannot change the laws of physics." It may of course be true that a form of nanotechnology -- maybe even robust Drextech -- will emerge within twenty years, but it is absolutely wrong to say that this development is inevitable. And from my own perspective what matters more still is that it is far from "inevitable" what form this technology would take and with what consequences. And it is impossible for me to see how a focus on superlative outcomes can possibly better prepare us than a focus on proximate policy to accommodate the arrival of nanotechnology in whatever forms.

"If we approach the future with a linear (or soft exponential) view of progress, we will be forced to face harsh consequences when our model fails to map to reality." What Michael means by "non-linear" discussions of technology almost always becomes, in my view, a kind of quasi-religious poetry or a self-congratulatory technocratic discourse oblivious to the sociopolitical context of development. What counts as "linear" development from the perspective of those who actually inhabit it might look to an external or retrospective observer "exponential," after a fashion. Who can say? The force of my point is that the discourses and practices of progressive technology advocacy should remain focused on the proximate and on developmental stage-management, on funding, oversight, and regulation to ensure costs, risks, and benefits are fairly shared by all the stakeholders to ongoing development.

Leave superlativeness (the sublime) to the poets. We've got work to do.

With depressing behavioral predictability there follows this claim: "There will be no politics, no economy as we know them after the arrival of MNT." I fear that only an impoverished understanding of politics would inspire this statement. I don't mean that as an insult. It's just as fair to say that I have an impoverished understanding of chemistry vis-a-vis Chris Phoenix.

Politics arises from the fact that we live among peers who differ from us. Even indefinitely more powerful and knowledgeable peers will have contending ends that must be reconciled should these peers continue to exist among one another.

Michael writes: "Our current paradigms are more fragile than they seem, and would be radically rearranged in a post-MNT world, nevermind a post-AI world. Life ain't going to be like it used to be, and many transhumanists still don't get this." But radicals never needed nanotechnology to "get this" kind of point.

You would be surprised how much stays the same even when so much changes. Be that as it may, I think that the starry-eyed contemplation of total transformation is never productive for those of us who would have a hand in articulating the developmental flows that would eventuate even in such a total transformation, and so we should probably keep that sort of thing to a minimum.

(I will restrain myself from going deeply into my suspicions that, as often as not, the talk of total transformation among technophiles arises less from a hardboiled assessment of scientific realities the rest of us are too softhearted or softheaded to bear than from a pining after such total transformation, one that bespeaks a profound social alienation that has nothing to do with science in the first place. I don't happen to think this is true of Michael, actually, since I know him to be an uncommonly genial and well-adjusted fellow, but I have to wonder sometimes about some of the company he keeps!)

Michael continues on: "Placing 'Singularity' opposite 'regulation' makes me think you associate the Singularity with disregulation. That's not what the Singularity is." As it happens, my primary inspiration was that the two words had "gul" in the middle, so that combining them was pleasantly euphonious. I meant that phrase to telescope this much more complex sensibility: Progressive technology advocates must focus on regulating technological development to facilitate outcomes that benefit us all, rather than on the conjuration and contemplation of superlative quasi-transcendental developmental outcomes that make people too panicky or passive to participate in the politics that will shape to an important extent whatever outcomes will eventually arrive.

He goes on: "The 'Singularity' means the creation of an intelligence smarter than you. It's dangerous because an intelligence significantly smarter than you could kill you (and all humans) quite quickly if it didn't want you around. Humans are just bags of flesh dotting the landscape, trivial obstacles to a superintelligence with nanotechnology." This is a much longer argument and we have had parts of it before. First, his definition of singularity here is not the default "acceleration of acceleration yielding total transformation" definition that has taken hold of the imagination of so many radical technophiles, however methodologically more clear his own happens to be.

But even on Michael's own terms I question the view of "intelligence" and the functional impact of "smartness" it seems to assume. I think that singularity-talk that focuses on the arrival of greater-than-normatively-human postbiological intelligence tends to overestimate the smooth function of technology and underestimate the sociopolitical contexts in which development occurs and disseminates its effects. I think that the singularitarian conjuration of an apocalyptic outcome here can only have the impact of causing us to act unreasonably now, when we should be making policy interventions that would limit the likelihood of outcomes like the one he mentioned.

And another comment: "'Less immortality more genetic medicine'. Hm. Is this an 'angle' you're arguing for - that we should be talking about genetic medicine as a subgoal of gaining more credibility as a subgoal of garnering support to make real immortality possible within a few decades?" Yes, that is a part of it. But I also think it is obscene to realize that bioconservatives will use a fear of "immortality-engineering," which to me is a pretty much meaningless and irrelevant sort of phrase in any case, to impede the arrival of a cure for Parkinson's disease.

I think "immortality" is a term freighted with all sorts of associations that make people think unclearly about genetic medicine; genetic, prosthetic, and cognitive enhancement; and even, frankly, engineered negligible senescence. I think that what would happen to a very long-lived person won't look much like what people tend to actually think of when they use the word "immortality." I think that long-lived beings would grapple with discontinuities and mergings of selfhood quite as traumatic to our existential inhabitation of personal selfhood as mortality here and now. And so, while I agree that the biological science for radically extending healthy lifespan may indeed be just as promising as the most enthusiastic "immortalist" technophiles say it is, I think it is still right to worry that there is an important element of the neurotic "denial of death" in the psychology of many advocates for technologically mediated extreme longevity, and that the damaging impact of this neurosis is not ameliorated by the fact that they have "done the math." This is a long and as yet undeveloped conversation that will no doubt continue on into the future on its own terms.

Michael continues: "Less clone more cure, less evolution more creation, less hype more reasons, less Bright more Separation of Church and State, less America more globe, less California ideology more Canada ideology, I agree with all of these things." Thank heavens!

And, finally: "As far as 'less AI more automation' goes, that's fine and dandy (I wouldn't horribly mind), but according to my current model, all it takes is one un-empathic AI that can improve its own hardware and abilities and we're all dead, dead, dead. You can't take precautions and you can't fight back against something that's smarter than you. You just die and that's it. I feel that AI is worth focusing on because it is potentially so lethal. A %0.001 chance of humanity being exterminated would be far more than enough to make this a really big deal." But, of course, we can all of us focus on only so much. If the chance of a devastating outcome is sufficiently negligible, then the scale of the disastrousness of the outcome isn't really my first concern in determining whether it gets my attention, when so many other disasters and promises impinge on my awareness. Part of what I mean when I propose a greater focus on automation is that I would rather we devote more time to thinking about the sorts of issues that preoccupy Marshall Brain in the here and now (link left).

Thanks as always for a very provocative and clarifying discussion! My best to you and to all.
