Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, September 28, 2007

More on Superlativity ("Technicality," "Feasibility," and "the Real World")

Brian Wang writes: You have indicated that there are clear and easy-to-argue limitations of Superlative tech (which is molecular nanotechnology, AGI). Great, then it would be easy for you to list them.

Once again, I have indicated that there are clear limitations of Superlative Tech discourses. There are no Superlative Techs to which I or you or anybody can point in order to list such limitations. Incredibly enough, you ask me to list what I mean by these limitations right after you actually quote a list of these limitations of Superlative Technology Discourses which I did provide and to which I referred: “vulnerabilities to hype, tendencies to naive technological determinism, reductionisms and other oversimplifications of developmental dynamisms, disdain for developmental aspirations alien to your own.”

Am I missing a lot of information that you have provided on real world complexities?

In the topsy-turvy world of Superlativity I will be smugly chastised for my incomprehension of and inattention to “real world complexities” -- precisely because I am talking about the characteristic exaggerations, oversimplifications, distortions, and skewed priorities of actual technodevelopmental complexities facilitated by particular modes of discourse and their customary assumptions, metaphors, political associations, and so on -- and chastised for this by people who seem to mean by the “real world” their own discussions of the logical feasibility of projected and idealized technodevelopmental outcomes like Drexlerian nanotechnological post-scarcity abundance, biomedical or even digital immortalization of human selves, and the “urgent dilemma” of whether an imaginary entitative postbiological superintelligence will be friendly or not.

Brian quotes other material I have written here on Amor Mundi describing political campaigns and institutions that I champion, like a basic income guarantee, a more democratized United Nations, planetary environmental, labor, and military nonaggression standards and laws enforced by actually respected world courts, and universal education and healthcare provision. Of course, these are ideals; they are not the same sort of thing as the surreally implausible "predictive" technodevelopmental projections of Superlative Technocentrics. I don't think it is hypocritical in the least to engage in the one while deriding the other. I would certainly never try to pretend that blue-skying about ideal institutions was some kind of engineer’s “feasibility study.” Neither can I claim to have high confidence that any of these pet political outcomes of mine will arrive in just the forms I am sketching here and now (in part just to make the point in the midst of our present distress that real democratic, peaceful, and sustainable alternatives are imaginable), especially not in my own lifetime -- techno-immortalist handwaving notwithstanding -- largely due to my awareness of the very kinds of technodevelopmental complexities and uncertainties, the unpredictable dynamisms, I keep pointing out to the apparent exasperation of the Superlative Technocentrics in the first place.

Brian charges that “He [me] wants to have his cake of not getting into a technical debate while at the same time (eating it) claiming the correctness that the issue is settled in terms of a superlative projection and the superlative projector of being wrong and fanciful and naive.”

Now, it seems to me that I am engaging in a kind of technical debate, as it happens, just one from a disciplinary location Brian is possibly unfamiliar with or perhaps uninterested in. If, however, by "technical debate" Brian means to designate only a much more circumscribed kind of discussion of logical and engineering feasibility of particular projected non-proximate outcomes, I hate to break it to him, but quite a lot of his own discussion fails to qualify as such, either, when we look at things clearly -- inasmuch, in my view, as "feasibility" discourse in its peculiar Superlative modes regularly tends to express symptomatically the kinds of psychological, cultural, social, and political assumptions and preoccupations I keep pointing to, all under cover of its assiduously asserted “technicality.”

5 comments:

brian wang said...

Not all idealized situations are equal.

There are idealized situations which are consistent with known historical cases and facts -- situations consistent with known and existing technology. Given known data points, an idealized situation or projection is unlikely to be correct in the future if it is inconsistent with known cases.

Your idealized situation (super-free healthcare + super welfare/guaranteed income + global courts = more democracy, more peace, and more justice) does not explain Cuba (free healthcare and welfare, but no democracy). Other historical and current countries do not follow the idealized pattern either: the USSR, China, etc.

"A lot of Brian discussions fail to qualify."

What specific discussion has what specific problem?

Dale Carrico said...

Not all idealized situations are equal.

Quite so. They differ generically in their ends, their assumptions, their reasonable warrants, and then they differ among alternative candidates available for reasonable belief within these various generic modes, differ in their logical consistency, their testability, their continuity with relevant knowledges in other domains, their adherence to proper forms, and many other things. I'm the last one to deny any of this. Nothing I've said suggests otherwise.

Your idealized situation (super-free healthcare + super welfare/guaranteed income + global courts = more democracy, more peace and more justice)...

Oh my god, read the actual words of the sentences and grasp the meaning expressed in their actual succession. I'm not making predictions when I champion these institutional ideals. As I have said many times before, I advocate open futures before I provisionally advocate particular institutional and technodevelopmental outcomes from my current vantage.

You are the one who seems not to grasp the differences between modes of idealization and the conditions under which they are more or less useful.

Or, wait, perhaps I'm really reading you wrong here. If you want to admit that nanosanta, techno-immortality, and robot gods really are short-hand political abstractions with which you are provisionally preoccupied for now, and that you do not expect them to come to pass in the forms that preoccupy you now because you understand that the technodevelopmental social struggle that will articulate technoscientific change our whole lives through is socially, culturally, and politically contentious and unpredictable, then perhaps we are more in agreement than I thought.

But let me go out on a limb here.

I think what you really think is that your Superlative projections and preoccupations are actually the emblem of your superior scientificity.

I think that you think you are trotting out predictive calculations like an engineer contemplating a cable when you scribble away at your Superlative Technological sketches.

I think you think that the social, cultural, and political factors I'm spotlighting in my critiques are really some kind of quasi-poetic or quasi-mystical empty-talk that won't make any kind of contact with the hard realities you and your friends talk about (and it pays to remember that what is meant by "hard realities" here involves for you things like nanoscale robot swarms delivering post-political abundance, techno-immortalism via digital personality uploads or super-advanced medical treatments available within our lifetimes, and the likelihood of superintelligent postbiological robot ruler gods taking over the planet).

I think that you think that all this stuff about "discourse" is what people talk about who aren't smart enough to number crunch the shiny robot god-odds.

That's what I think you think.

Honestly, that's the impression I get from the way you talk to me. It's not, by the way, something I would feel comfortable deducing wholesale from the mere fact that you find Superlative Technology Discourse compelling. As I keep saying, there really is a difference between offering up structural and symptomatic readings of a discourse as such, and offering up personal assessments of people who indulge in that discourse (who knows how seriously, how deeply, how consistently, for how long, among other things). I do still feel that it is hard to resist ridiculing the ridiculous, but it would be inappropriate rationalization confidently to diagnose a character without exchanging dialogue with them.

I despair of the hope that you will hear what I am saying. But I could be wrong. As I could be wrong in my characterizations of what you think.

But I doubt it. I'd like to be shown that I am wrong. It's something I enjoy.

brian wang said...

Dale does not like me

; )

Dale Carrico said...

Less than I might have.

Anonymous said...

I hope this doesn't come across as arrogant or scientistic.

I think you think that the social, cultural, and political factors I'm spotlighting in my critiques are really some kind of quasi-poetic or quasi-mystical empty-talk that won't make any kind of contact with the hard realities you and your friends talk about

I don't think that. :-) I acknowledge that discourse is important and your critique is saying something intelligent. I just think you're at least as guilty of ignoring an important level of analysis as the Singularitarians, and very likely more so.

and it pays to remember that what is meant by "hard realities" here involves for you things like nanoscale robot swarms delivering post-political abundance, techno-immortalism via digital personality uploads or super-advanced medical treatments available within our lifetimes, and the likelihood of superintelligent postbiological robot ruler gods taking over the planet

The feasibility of these outcomes is a technical question (though a relatively fuzzy one) that deserves technical analysis, and I find the Singularitarian analysis of them persuasive. If they are feasible, and they can be feasibly attained by small groups with little societal deliberation, they are very worthy of discussion because of the vast potential impact.

Or, wait, perhaps I'm really reading you wrong here. If you want to admit that nanosanta, techno-immortality, and robot gods really are short-hand political abstractions with which you are provisionally preoccupied for now, and that you do not expect them to come to pass in the forms that preoccupy you now

This viewpoint is consistent with Singularitarianism, to an extent. While I wouldn't (and probably no sane Singularitarian would) claim to predict the exact form future technological developments will take, I think it is reasonable to claim that the technologies you mock have a high probability of being developed in some form. And regardless of the uncertainty, they can still make good ideals and goals.

because you understand that the technodevelopmental social struggle that will articulate technoscientific change our whole lives through is socially, culturally, and politically contentious and unpredictable

Thought experiment: Suppose nuclear weapons could be built from readily available materials, using a technique that a few trained individuals could discover and use. That technical fact alone might not let you make any significantly detailed predictions, and exactly what would happen would depend sensitively on political and social factors. But wouldn't you say it's very likely someone would use the technique? And if you thought this was reasonably likely to be the case, wouldn't you consider it worthy of serious contemplation?