Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, October 24, 2007

Superlativity as Anti-Democratizing

Upgraded and adapted from Comments:

Friend of Blog Michael Anissimov said: Maybe "superlative" technologies have a media megaphone because many educated people find these arguments persuasive.

There is no question at all that many educated people fall for Superlative Technology Discourses. It is very much a discourse of reasonably educated, privileged people (and also, for that matter, mostly white guys in North Atlantic societies). One of the reasons Superlativity comports so well with incumbent interests is that many of its partisans either are or identify with such incumbents themselves.

However, again, as I have taken pains to explain, even people who actively dis-identify with the politics of incumbency might well support such politics inadvertently through their conventional recourse to Superlative formulations, inasmuch as these lend themselves so forcefully to anti-pluralistic reductionism, to elite technocratic solutions and policies, to the naturalization of neoliberal corporate-military "competitiveness" and "innovation" and such as the key terms through which technoscientific "development" can be discussed, to special vulnerability to hype, groupthink, and True Belief, and so on, all of which tend to conduce to incumbent interests and reactionary politics in general.

If a majority

Whoa, now, just to be clear: The "many" of your prior sentence, Michael, represents neither a "majority" of "educated" people (on any construal of the term "educated" I know of), nor a "majority" in general.

If a majority decides to allocate research funds towards Yudkowskian AGI and Drexlerian MNT, who would you be to question the democratic outcome?

Who would I be to question a democratic outcome? Why, a democratic citizen with an independent mind and a right to free speech, that's who.

I abide by democratic outcomes even where I disapprove of them from time to time, and then I make my disapproval known and understood as best I can in the hopes that the democratic outcome will change for the better -- or if I fervently disapprove of such an outcome, I might engage in civil disobedience and accept the criminal penalty involved to affirm the law while disapproving the concrete outcome. All that is democracy, too, in my understanding of it.

In the past, Michael, you have often claimed to be personally insulted by my suggestions that Superlative discourses have anti-democratizing tendencies -- you have wrongly taken such claims as equivalent to the accusation that Superlative Technocentrics are consciously Anti-Democratic, which is not logically implied in the claim at all (although I will admit that the evidence suggests that Superlativity is something of a strange attractor for libertopians, technocrats, Randroids, Bell Curve racists and other such anti-democratic dead-enders). For me, structural tendencies to anti-democratization are easily as or more troubling than explicit programmatic commitment to anti-democracy (which are usually marginalized into impotence in reasonably healthy democratic societies soon enough, after all). When you have assured me that you are an ardent democrat in your politics yourself, whatever your attraction to Superlative technodevelopmental formulations, I have tended to take your word for it.

But when you seem to suggest that "democracy" requires that one "not question" democratic outcomes, I find myself wondering why on earth you would advocate democracy on such terms. It's usually only reactionaries, after all, who falsely characterize democracy as "mob rule" -- and they do so precisely because they hate democracy and denigrate common people (with whom they dis-identify). Actual democratically-minded folks tend not to characterize their own views in such terms. Democracy is just the idea that people should have a say in the public decisions that affect them -- for me, democracy is a dynamic, experimental, peer-to-peer formation.

Because that [AGI/MNT funding] is what is likely going to happen in the next couple decades.

Be honest: if you were you as you are now twenty years ago, would you have said the same? What could happen in twenty years' time to make you say otherwise?

I personally think it is an arrant absurdity to think that majorities will affirm specifically Yudkowskian or Drexlerian Superlative outcomes by name in two decades. Of the two, only Drexler seems to me likely to be remembered at all on my reckoning (don't misunderstand me, I certainly don't expect to be "remembered" myself, I don't think that is an indispensable measure of a life well-lived, particularly).

On the flip side, it seems to me that once one has dropped the Superlative-tinted glasses, one can say that funding decisions by representatives democratically accountable to majorities are already funding research and development into nanoscale interventions and sophisticated software. I tend to be well pleased by that sort of thing, thank you very much. If one is looking for Robot Gods or Utility Fogs, however, I suspect that in twenty years' time one will find them on the same sf bookshelves where one properly looks for them today, or looked for them twenty years ago.


jfehlinger said...

Dale wrote:

> Friend of Blog Michael Anissimov said: Maybe "superlative"
> technologies have a media megaphone because many educated
> people find these arguments persuasive.
> There is no question at all that many educated people fall
> for Superlative Technology Discourses. It is very much a
> discourse of reasonably educated, privileged people
> (and also, for that matter, mostly white guys in North Atlantic
> societies).

L. Ron Hubbard Jr.: We promised them the moon and then
demonstrated a way to get there. . . We were
telling someone that they could have the power of a god --
that's what we were telling them.

Penthouse: What kind of people were tempted by this promise?

Hubbard: A whole range of people. People who wanted to raise
their IQ, to feel better, to solve their problems. You also
got people who wished to lord it over other people in the
use of power. . . What happens in Scientology is that
a person's ego gets pumped up by this science-fiction
fantasy helium into universe-sized proportions. . .
It is especially appealing to the
intelligentsia of this country. . . Fine professors, doctors,
scientists, people involved in the arts and sciences, would
fall into Scientology like you wouldn't believe. . .
You show me a professor and I revert back
to the fifties: I just kick him in the head, eat
him for breakfast.

-- _Penthouse_ interview with L. Ron Hubbard **Jr.**,
June, 1983

Michael Anissimov said...

I don't mean to say that democracy means "don't question the outcome" -- I was using a figure of speech. I mean that if you support democratic decision-making, you must respect its outcome even if you disagree with it. I didn't mean you literally shouldn't argue, just that you would be "stuck" with the democratic outcome anyway. Sorry I wasn't clearer.

If we continue to lobby the public at large to gain widespread support for advanced nanotechnology and AI, and are successful, then besides being a smart thing to do, it would also demonstrate that we have no intention to circumvent democratic politics, but are in fact able to tap into it. (The way these technologies are applied to society as a whole will also be democratically guided, in democratic countries and preferably worldwide. If these societies choose to reject such technologies altogether, that is another legitimate decision.)

Of course the current supporters of advanced AI and nanotech are hardly a majority of anything, although obviously we are trying to function as the seed of a larger movement, which embraces advanced technology (including some technologies, like SENS and MNT, that some mainstream scientists may be skeptical about).

As you've repeated often lately, I understand that you are not calling me or other ultratechnology enthusiasts explicitly anti-democratic, but rather arguing that our philosophy promotes anti-democratic tendencies. I'm still a little personally insulted by some of the things you say, but I've cooled down on that merely due to being exposed to so much of it.

Be honest: if you were you as you are now twenty years ago, would you have said the same? What could happen in twenty years' time to make you say otherwise?

Not at all: twenty years ago the foundations of scanning tunneling microscopy had only just begun to be laid down! We had barely even achieved imaging of individual atoms, much less positional control as we have today. There are literally thousands of enabling advances in nanotechnology which have only occurred since the NNI started pumping money into the area.

As for AI, the "AI Winter" only began to thaw in the late 90s, and cheap computers are still far short of human brain-equivalent computing power, but amazingly closer than 20 years ago. Cognitive science has progressed immeasurably in the last 20 years, as have the potential ecosystem/learning environments for artificial agents (online worlds).

I could go on and on, but my point is: no, I wouldn't have said the same thing 20 years ago. Notice that I'm not even saying that AGI or MNT will be achieved in the next couple decades (although either or both very well could be), just that I expect them to gain more and more funding and support throughout those decades.

In nanotechnology, many people do in fact take Drexlerian outcomes (or something approximating them) seriously, including those who draft some of the key documents for the NNI. For the first time, a recent NNI report was friendly towards the possibility of MNT. Foresight has partnered with Battelle to create their productive nanosystems roadmap.

As for AI, many are taking AGI more seriously, and the situation seems somewhat similar to nanotech in the early 90s. DARPA is working on cognitive systems, and an engineer at IBM recently gave a talk at the Singularity Summit about IBM's efforts towards "toddler-level AI". By "Yudkowskian", I simply mean "human-equivalent and designed to self-improve without programmer help". It's not so much about the outcomes as the technology itself. More are indeed getting interested in self-improving AI, although I hope not too many people take it seriously because then the rogue AI danger goes up as more projects are started.

I am looking for "Robot Gods and Utility Fogs", and one or both could indeed be reality by 2030 or so, as many authors have argued. Even if not, then there are plenty of other interesting advanced technologies out there! That's the thing -- the "strong identification" you and Dr. Jones think transhumanists have towards particular technologies does not exist, at least for transhumanists who are reasonable. Just like anyone else, we would abandon our "pet projects" if they turn out to be untenable or forbidden by the laws of physics.

But until then, we will write about them to encourage awareness and thought on the topic -- Jamais Cascio's recent nanofactory ecosystem article, for instance. (Not sure why you praise Jamais but decry "Superlative Technocentrics" (Robert Freitas, Ralph Merkle, J. Storrs Hall) when their attitude with respect to MNT tends to be as reasonable and intelligent as his.)

Some national agencies are already funding advanced nanotech and AI, yes.

Richard Jones said...

Michael, I am puzzled as to where you find the evidence for this assertion: "If a majority decides to allocate research funds towards Yudkowskian AGI and Drexlerian MNT, who would you be to question the democratic outcome? Because that [AGI/MNT funding] is what is likely going to happen in the next couple decades."

Of course, there are many publics, but from the public engagement activities and public attitude studies I know about there is no evidence for this at all. There is fairly strong support for nanotechnology aimed at certain instrumental outcomes -- medical advances, for example -- but when people are exposed to "superlative technology" rhetoric the reaction is not generally, in my experience, very positive. It's true that there will be differences between Europe and the USA (and indeed Europe itself is very heterogeneous in its attitudes), but while it may be true that uncritical enthusiasm for technology is less common in Europe than in the USA, it's also true that opposition to technology based on religious points of view is much less influential in Europe than in the USA.

The international science community itself, of course, represents a public of a kind, and funding decisions in science themselves largely flow from the formation of consensus in this community by peer-to-peer mechanisms rather than from top-down diktats from funding agencies. Here your admission that some mainstream scientists may be sceptical about MNT is a massive understatement.

You mention the importance of imaging and positional control of individual atoms. It's worth reminding ourselves what Don Eigler, the IBM scientist responsible more than anyone else for these developments, has to say about MNT:

“To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worst. There may be scientists who feel otherwise, I just haven’t run into them.”

On the basis of knowing the international academic nanoscience community quite well, I don't think this overstates things.

I suspect that there's a tendency in the MNT community to see those mainstream scientists, like me, who have been vocal in criticising MNT as representing the portion of the scientific community that is most strongly opposed to MNT. In fact, the truth is precisely the opposite. It's the scientists who say nothing who are most deeply sceptical; they just don't see enough worth in the ideas of MNT even to bother refuting them. My kind of position, that Drexler's ideas are interesting and thought-provoking, even if, in all probability, ultimately misguided (at least in the detail of the "hard" MNT project), is actually quite rare.

In fact, I find myself in the odd position of being painted as one of the most vocal opponents of MNT at the same time as having some responsibility for one of the most overtly Drexlerian publicly funded research projects in the world, in my role as director of the UK Software Control of Matter project. This perhaps should make it clear why I find "superlativism" so frustrating. I want to say that the possibilities of advanced nanotechnology are fascinating, and that we should embrace an open future that takes us where the science allows us to go, meeting the democratically expressed aspirations of society. But instead I'm having to emphasise that we aren't on a set of tramlines inevitably proceeding to the predestined future of transhumanist imagination.

Roko said...

I've had some thoughts about this over on Transhuman Goodness.

Overall I think that Richard has a very good point about not succumbing to "predeterminism". But I think both Richard and Dale would do well to look at the value of speculative thought. In a fast-changing, accelerating world, people who speculate about important but uncertain scenarios are doing a valuable job.

Michael Anissimov said...

For your work on the UK Software Control of Matter and the Ideas Factory project, I consider you to be a hero. I think your criticisms of MNT are the most constructive of any criticism thus far, certainly more so than the superficial arrows slung by the late Richard Smalley.

I am aware of the skepticism of many mainstream scientists towards MNT, but I think much of this stems from the "superlative" claims made by Drexler rather than the actual science. I've looked at your list of MNT challenges and don't see why any of them are showstoppers. (But more experimental work is certainly needed.) The uniform dismissal of MNT may be more closely related to fears of associating one's research with the infamous "grey goo" threat than to skepticism on solid grounding, as may the association of MNT with cryonics.

If mainstream nanoscientists as a whole are so skeptical of MNT, then why do they attend MNT-related conferences, like Ned Seeman at CRN's conference or dozens of big names at Foresight events? Why did Battelle cooperate with Foresight in making a nanotech roadmap? I know it is somewhat far to travel, but you should consider coming over to the States for these enlightening (and very science-focused) conferences sometime.

Prior to the first explosion of an atomic bomb, many particle physicists would have considered the prospect so silly as to "not be worth thinking about", merely due to the sheer incredibility of the whole thing. Same with space flight, much of what Tesla accomplished, etc. Being skeptical rationalists, some scientists like to be over-skeptical, just to make sure they aren't bamboozled. Like Michael Shermer arguing against cryonics, or a million other more mundane examples which I'm sure you're aware of.

In the end, I'm not even so excited to see progress for MNT, because I feel that MNT would give us orders of magnitude better computers all at once and massively increase the likelihood of rogue AGI, thereby being a net danger to the future of humanity. Not sure if this statement sounds odd or not, feel free to let me know.

As for defending my claim, no need to try too hard for it, we'll just have to wait and see!

Richard Jones said...

Michael, I have been to a Foresight meeting myself, a couple of years ago. I went because I like to hear views different to the ones I normally hear, and because I appreciate getting the opportunity to put my point of view across to people who don't think the same way as me. I'm sure many other scientists attend for similar reasons.

If I was putting a bet on what is most likely to lead to a big sudden jump in computing power, I'd back optoelectronic quantum computing in semiconductor nanostructures over MNT any time. Not that I, by any means, think this is a sure thing either.

We can agree that time will resolve these issues, anyway.