Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, June 15, 2007

Nanosantalogical Feasibility

Over on infeasible.org's endlessly amusing (in a good way) "Refuting Transhumanism" blog, the author of a post from a couple of days ago described Eric Drexler as having "proudly claimed that no one has ever disproved his ideas on molecular nanotechnology and that this means that his ideas are feasible."

I wish a link to this specific claim had been provided, if such a thing exists. But be that as it may, if it is true that Drexler actually made an argument of the form cited in the complaint, then that appears to be an awfully straightforward example of the fallacy ad ignorantiam (sorry to be a pedant): mistaking the lack of a refutation for a substantiation of a claim. I daresay even partisans for Drexlerian nanotechnology would strongly prefer arguments of his that aren't fallacious in this way.

I enjoyed reading Drexler's Engines of Creation back in the mid-eighties, right when it was published and when I was still something of a kid. To this day I may well personally find Eric Drexler's ideas more worthy of serious consideration in some respects than infeasible's author does. But I do share his perfectly proper disdain for the handwaving of technophiliacs in what I call the Nanosantalogical Variation of Superlative Technology Discourse.

Friend of Blog Michael Anissimov posted a comment to infeasible's post, asking the author, "Can you explain how the existence of living organisms doesn't validate Drexler's ideas? All he is really talking about are artificial, programmable ribosomes." Needless to say, I can't speak for the blogger, but I did have a response, and one that seemed helpful as a way of getting at what I mean by Superlative Technology Discourse more particularly.

In the posted quote (presumably) authored or paraphrased by Drexler, he obviously isn't claiming that the existence of living organisms means that the era of nanotechnology (in the "robust" Drexlerian sense of human-specified and controlled, replicative molecular manufacturing) has already arrived, is he? That's surely the force of the "artificial" in Michael's own formulation of his question. And the gap between actually existing organisms and desired Drexlerian nanotechnologies is of course the same gap that distinguishes this analogy from a valid deduction. This obviously doesn't mean the analogy has nothing to recommend it, just that the analogy can't bear the weight with which Superlative Technology Discourse in its Nanosantalogical Variation would want to freight it.

I must say I do think it is interesting how technophiliacs often seem to treat philosophical arguments by analogy that properly function to illuminate incredibly broad theses as if they likewise constitute arguments demonstrating practical viability, or even inevitability, or even the technodevelopmental imminence of some superlative technology they are enthused about at the moment.

Thus polemicists for the Strong Program of Artificial Intelligence regularly seem to leap from the reasonable enough philosophical notion that [1] if human consciousness is not supernatural then it should be susceptible in principle to instrumentally adequate scientifically warranted description, to the radically different idea that [2] within 20 years (a time-frame thus far always deferred yet curiously never revoked with each failure of the prediction) human beings will have overcome all the practical, theoretical, and sociocultural hurdles that currently frustrate ongoing projects to create artificial intelligence.

As with the gap between living organisms and Drexlerian nanotech (not to mention the fantasies of a circumvention of the deep and abiding barriers to utopian, often literally libertopian, construals of a post-political abundance that characterize too much nanosantalogical discourse), hype-notized handwavers tend to discover that the historical, infrastructural, and sociocultural complexities, as well as the caveats that tend to freight real-world lab results, all radically frustrate the superlative formulations that might seem logically compatible with general thought experiments and proofs of concept.

(For those who are interested in these things: Other variations of Superlative Technology Discourse include, in my view, the Singularitarian Variation, the Immortalist Variation, and the Technocratic Variation. These Variations of Superlative Technology Discourse are very much not to be confused with reasonable and urgently needed technoprogressive stakeholder discourses on actual and emerging quandaries of nanoscale toxicity, actual and emerging quandaries of molecular biotechnology, actual and emerging quandaries of network and software security, actual and emerging quandaries of genetic, prosthetic, cognitive, and longevity medicine, actual and emerging quandaries of accountability of elected representatives to warranted scientific consensus, and so on. The differences between Superlative Technology Discourses and Technoprogressive Discourses are complicated to analyze, but, honestly, pretty easy to spot. Some rules of thumb: Precisely to the contrary of Superlative Technology Discourses, Technoprogressive Discourses tend to [1] resist transcendental formulations, [2] emphasize the concrete social and historical contexts of technoscientific change, [3] stress the existence of a diversity of stakeholders to technoscientific research and development, [4] as well as the priority of democratic institutions and accountable processes to ensure the proper regulation of and fairest distribution of the costs, risks, and benefits of technoscientific changes, [5] reflect the caveats of actual experimental science, and [6] provide little support or inducement for the formation of personal sub(cult)ural identifications with particular technodevelopmental forecasts, scenarios, or fetishized technologies, either existing or projected, nor for the curiously marginalizing and defensive membership organizations that seem to arise from such abstract identifications.)

5 comments:

Anonymous said...

Great essay, Dale. I'd toss in actual and emerging quandaries of molecular-scale manufacturing, and note that "quandaries" include the right way to fairly and broadly distribute the benefits that may derive from these technologies. I'd also include as a Superlative Technology variant the Fatalist Variation, which presumes the same kind of easy and rapid development of molecular/bio/AI technologies, but spins them into doomsday results.

Dale Carrico said...

Howdy, Jamais! I wonder how much molecular-scale manufacturing as it actually arrives will end up being described as molecular biotechnology when it comes to it. I actually included "benefits" in [4], and I agree with you that that's key, but the point may have been lost in the general wordiness of my formulation (wordiness, ma bête noire). Definitely I agree with you, too, about the Fatalist Variations -- I would incline to call them Apocaloid or Disasterbatory Variations (but I suspect that is because you're much nicer than me!)... Perhaps an extended typology and essay on Variations of Superlative Technology Discourse would be in order... Hm...

jimf said...

> . . .within 20 years (a time-frame thus far always deferred
> yet curiously never revoked with each failure of the prediction)
> human beings will. . . create artificial intelligence.

The rather infamous one-time net celebrity Mikhail Zeleny
once made an analogy between these failed predictions and
one of the Tales of Nasruddin
( http://en.wikipedia.org/wiki/Nasreddin ):

"There once was a Shah who developed a special fondness for his ass, and
expressed a desire that the animal be taught human speech. Nasruddin came
forth, declaring that he could do the job in twenty-five years, for 25
thousand gold pieces. The shah agreed, and Nasruddin led away the ass
loaded with a fortune in gold. Upon hearing about the bargain, Nasruddin's
friends came to his house, expressing great concern. 'Surely, -- they
said, -- you will fail to teach the ass to speak, and spend the gold, and
then the Shah will order his royal executioner to cut off your head.'
'Don't worry, -- replied Nasruddin, -- in twenty-five years the Shah will
die, or the ass will die, or I myself will die.'"

-- from an article in a Usenet thread entitled "Robotic Follies"
on comp.ai.philosophy, itself 15 years old.
http://groups.google.com/group/comp.ai.philosophy/msg/3005aba639f5b83f

jimf said...

Hm. In an adjacent message in the same thread, Zeleny
remarks:

> [I]t doesn't concern me in the least whether AI research chooses to
> focus its attention on formal logic or the study of behavior; what does
> concern me is [Marvin] Minsky's blowhard rhetorical attempts to dismiss the
> arguments of anyone, dead or alive, who dares to disagree with his
> preferred orthodoxy, while he presumes to encroach on the territory of every
> neighboring discipline, from formal logic to the theory of value, using his
> alleged authority as a battering ram against the very foundations thereof.

http://groups.google.com/group/comp.ai.philosophy/msg/50f51217e3f40dcd

Plus ça change. . .

jimf said...

Jamais Cascio wrote:

> I'd also include as a Superlative Technology variant
> the Fatalist Variation, which presumes the same kind
> of easy and rapid development of molecular/bio/AI
> technologies, but spins them into doomsday results.

Dale Carrico replied:

> I would incline to call them Apocaloid or Disasterbatory
> Variations (but I suspect that is because you're much nicer
> than me!)...

"Most cults follow a predictable progression of two distinct stages,
which indicates that what is involved is more a function of how
authoritarian structures work than of the particular teachings of a
given guru. . .

[The] first stage is messianic with the message being that all labors
of the organization, including the guru's, are aimed at a higher
purpose beyond the group, such as saving mankind. During this phase
the guru is confident that he will eventually be acknowledged as the
one who will lead the world out of darkness. . .

Eventually it becomes obvious that the guru is not going to
take over the world, at least not in the immediate future. . .
Then. . . the guru's message turns pessimistic or doomsday. . ."

-- Joel Kramer & Diana Alstad
_The Guru Papers: Masks of Authoritarian Power_