Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, April 15, 2009

Robot Cultists Looking for Ponies in All the Wrong Places

Yes, the Robot Cultists really do say the unbelievably facile things I attribute to them. I quote at length and with many incredulous interruptions the following exchange between a transhumanist rather excitingly calling himself "Extropia DaSilva" and myself. Please to enjoy:
Maybe I am a fool, but I would have thought anyone would agree with the following statements: 1. Industry should seek to optimise its manufacturing processes, thereby maximising the efficiency with which we handle resources and minimising waste and pollution, as far as is physically possible. 2. Medical science should strive to find treatments, cures and preventions for afflictions which are currently incurable, and should also strive to improve treatments and preventions in order to make them as effective as they can possibly be. 3. We should seek ways to make working with machines less frustrating, reducing the times when machines fail to anticipate our intentions and therefore act in ways which impede us and increasing/improving the ways in which machines collaborate with us on whatever project may be undertaken.

Surely ["Surely"? -- Dale], if you accept 1, 2 and 3 as the right things to do, you also have to ["have to"? -- Dale] accept ["accept" as what exactly? as in some sense real now? as logically possible? as plausible eventually? as plausible soon? as coherently imagined in some particular scenario on offer? -- Dale] A) molecular nanotechnology B) indefinite lifespans and C) artificial general intelligence. After all, if you agree with 1, you have to agree that industry must seek to improve manufacturing processes until a point at which products are assembled to atomic precision [wait, why does accepting 1 entail the arrival at A or even the aspiration to so arrive? -- Dale]: the very definition of molecular manufacturing. If you agree with 2, you can hardly disagree with the notion that medical science should not rest until each and every way in which quality of life can be adversely affected (in medical terms, at least) is countered with an effective treatment or prevention. Somewhere along THAT road you MUST arrive at effective preventions for aging. ["MUST"? Really? -- Dale] And if you agree with 3 you have to accept that one day we will have technology blessed with minds that are the equal of human intelligence, simply because anything with LESS than human intelligence is not going to be as useful a teammate [That you seem to think it would be useful for something to be possible hardly makes it inevitable, nor does it even make it coherent necessarily to use the word intelligence to describe complexities that have some things in common with intelligence but not other things, nor have you explained why usefulness really requires personhood, when surely there are occasions in which quite the opposite is the case. -- Dale]

Now, there may be reasons why we can never actually arrive at a point in which products are assembled with atomic precision, or every medical condition is preventable, or robots have brains capable of producing minds the equal of our own. [To say the least. -- Dale] If there are indeed physical reasons why the best falls short of molecular nanotechnology, indefinite lifespans and artificial general intelligence, then obviously we just have to accept that it was a fool’s hope to dream we could ever achieve such goals. That, however, is not what I am arguing. I am not arguing that we COULD achieve A, B and C. [I wonder if you disagree with the obvious reality that many of your fellow futurologists do indeed and endlessly flog precisely these practical possibilities? Why aren't you arguing with them rather than with me, I wonder? -- Dale] Frankly, I do not know if it is possible or not at this point in time. [Well, let me just go on record to say, no, you won't be immortalized, no you won't meet the Robot God, no you won't be transported to a treasure cave with a swarm of nanobotic slaves to do your bidding. Sorry. It's hard to imagine what exactly could have led you to imagine these outcomes as "in doubt" in any sense, but what the hey. -- Dale] Instead, I am saying that IF we COULD, THEN we SHOULD. [Note that if all Robot Cultists were arguing in this mode -- in general, I mean, not just because they've been backed into a corner by somebody who actually knows what he's talking about but also took the time to take them seriously enough to point out the obvious absurdities of their discourse on its usual terms -- none of the interminable squabbles about how Robot Cultists are essentially scientists, indeed an avant-garde of sooper-scientists, would come into play at all, since this is a moral (or ethical) case rather than a scientific one. -- Dale] Now, as far as I can see A, B and C form part of the transhuman agenda. [Oh, yes, that's for sure. -- Dale] So the question this begs is: How does a person reject transhuman goals, WITHOUT arguing that medical science should impose some arbitrary limit on its ability to treat ailments? How does a person reject molecular nanotechnology WITHOUT arguing that industry should produce more waste and pollution than is strictly necessary? How does one reject artificial general intelligence WITHOUT arguing that we should produce machines that are more frustrating to work with than they really need to be? [Well, the real question is what on earth would lead anybody to expect superlative outcomes in the first place? We don't know what all the limits of our technique will be, physical, ethical, political, indeed, not knowing such limits is itself one of the limits with which we grapple. But there is nothing in this imperfect knowledge that endorses the confusion of wish-fulfillment fantasies with conventional secular democratic and technoscientific progress, any more than this non-knowing endorses belief in a Creator God or an afterlife in paradise. It is always the extraordinary claim that demands the extraordinary evidence, and the affirmation of belief without evidence is never scientifically warranted. -- Dale]

I mean, surely anyone who went around saying ‘yeah we should make people accept a lower quality of life, pump X amount of pollution and waste into the environment and forever purchase machines that are dumber than is actually necessary’ would sound like an idiot, and possibly even evil. [How does the refusal to endorse unwarranted hyperbolizations constitute the refusal of actual progress in the real world? Robot Cultism is not the advocacy of but the palpable perversion of conventional secular democratic understandings of progress. -- Dale] How, though, can anyone reject transhumanism from an ethical or moral standpoint (again, the practical issues are another matter entirely) without arguing precisely that? [Superlativity is not in any remotely recognizable sense the standard or summit from which we measure actually-existing progressive commitment, it is a skewed and self-marginalizing witch's brew of hyperbolization and poeticization of selective scientific results and research in the service of marginal sub(cult)ural identification and wish-fulfillment fantasies of personal transcendence. -- Dale]

No, "Extropia," you won’t find magical ponies in conventional secular progressive values. Automation doesn’t spit out nano-santa treasure cave wish-fulofillment fantasies. Healthcare doesn’t spit out immortalization wish-fulfillment fantasies. Working on software and network security problems and user-friendliness doesn’t spit out superintelligent post-biological Robot God wish-fulfillment fantasies.

Nobody needs to join a Robot Cult to work on network security problems or healthcare problems or materials problems or renewable energy problems in the actual world, and, indeed, to join a Robot Cult is always a self-marginalization from consensus science and public policy devoted to this work in the real world.

You don’t get to piggyback your cultism on the consensus science that actually disdains you, nor on progressive causes that disdain you. You are not the voice of the reasonable, who don’t in fact give you people the time of day; you are not a futurological avant-garde, so much as a marginal fandom of cranks who cannot distinguish science from science fiction.

What you think you want doesn’t make sense on its own terms, it isn’t plausible on the “technical” terms you think you prefer, it isn’t reasonable by any conventional measure of reasonableness, and it functions to indulge your personal irrationality while facilitating the ongoing irrationalization of public discourse on technodevelopmental questions at a time of disruptive scientific change in which sensible deliberation is desperately needed.

16 comments:

jimf said...

Dale wrote:

> . . .a transhumanist rather excitingly calling himself
> "Extropia DaSilva". . .

Somebody (and I have a candidate, but I'm not saying who)
really needs to lay claim to the title "The Doctress Extropia".

It's an obvious calque on the title of a once-famous Usenet
legend, The Doctress Neutopia.

> I. . . recently ran across an amusing document, the "Net Legends FAQ",
> not updated since 1994, containing descriptions of various noteworthy,
> uh, characters who have made significant um, contributions to Usenet
> since its inception.
> http://www.vic.com/~dbd/nll.11.94.update
>
> One of these personages was a lady who called herself "The Doctress
> Neutopia", also entertainingly described in Usenet article
> http://groups.google.com/groups?selm=3a4lea%24imo%40kraz.usc.edu
>
> Now that's impressive -- having a special newsgroup created **for**
> you by the Usenet bureaucracy, just to get you out of everybody else's
> hair.
>
> Let's all join the Lovolution (but no lascivious staring, please).

jimf said...

> That you seem to think it would be useful for something to be
> possible hardly makes it inevitable. . .

This sort of "logic" comes up over and over again in >Hist
discourse.

Here's an example of this kind of reasoning:

We want to live happily ever after. If we're going to
live happily ever after through the Singularity, the AIs
bringing it about have to be Friendly. The only way we
can guarantee AIs to be Friendly is if we make an AI
that works according to a rigidly deductive system of Ethics,
so that it's bound by the deontology we've programmed into
it. We don't really know how minds work, but since we
**need** them to work in the way we've specified (so that
we can live happily ever after), we're just going to
assume they **can** work in that way.

Talk about looking for your keys under the street lamp
(or putting the cart before the horse)!

And these people claim to be champions of rationality and
science, with a straight face!

Anonymous said...

Hi Dale,

I appreciate your criticism of technoscientific hyperbole, but I don't think there is a limit on technological development should our knowledge of the physical laws governing the universe continue to progress.

Humans might come to live for a very long time with continuing progress in biotechnology. But certainly not forever, as this would violate the second law of thermodynamics.

Molecular manufacturing is likely with continuing progress in nanotechnology. This is being driven by a tendency in the economic system towards the most precise form of manufacturing, in order to obtain the highest output at the least cost.

It is this same tendency that is driving the need for machine intelligence, which is also likely with continuing progress in the fields of artificial intelligence, neuroscience, physics and biotechnology, creating a positive feedback loop.

Humans are complex biological systems shaped by millions of years of evolution. As our knowledge of physics and chemistry progresses, it is inevitable that similar systems will be reproduced for economic goals.

Human intelligence, as you pointed out, evolved based on our social environment. This can be duplicated in a manmade system (a machine or new form of life?) if we are able to understand the electrical and chemical signals in our brains that developed in response to our social environment.
This new form of intelligence will be the basis for an automated society, eventually freeing humans from work. This is a process that has already been occurring, with increasing unemployment and the growth of service industries as new technology raises productivity.

It is the economic imperatives of capitalism that are driving these technological developments, and preparing the way for a higher level of existence or a darker future, depending on the outcomes of the necessary social struggles surrounding the direction of these technological developments.

jimf said...

I wrote:

> [Dale wrote]:
>
> > That you seem to think it would be useful for something to be
> > possible hardly makes it inevitable. . .
>
> This sort of "logic" comes up over and over again in >Hist
> discourse.
>
> Here's an example of this kind of reasoning:
>
> We want to live happily ever after. . . We don't
> really know how minds work, but since we
> **need** them to work in the way we've specified (so that
> we can live happily ever after), we're just going to
> assume they **can** work in that way.


I had this discussion with [who else?] Michael Anissimov
in the comment thread of an article I posted to "Transhumanity"
on-line, five years ago:

http://web.archive.org/web/20040613133235/transhumanism.com/index.php/weblog/comments/134/


Anissimov wrote:

"[Y]ou refer to. . . the fact that SIAI and some of its closer supporters advocate
top-down AI rather than bottom-up[.] You point this out as evidence of cultishness,
but, well, it’s not. We have precise technical reasons for favoring our current AI
plan, and can discuss them in greater length at any time if you are interested.
Much of the bottom-up AI strategies of past decades are based on “emergence mysticism”;
the idea that the critical features of intelligence will just emerge if we throw enough
evolutionary algorithms together."

I wrote:


> SIAI and some of its closer supporters advocate top-down AI
> rather than bottom-up. . .
> You point this out as evidence of cultishness, but, well,
> it’s not.

Well, I beg to differ. Settling large intellectual
questions like this one by fiat, and then making sure
that followers mouth the “party line” and do not admit
that other points of view exist or have any legitimacy
whatsoever, is precisely what cults do.

> We have precise technical reasons for favoring our current
> AI plan, and can discuss them in greater length at any time
> if you are interested.

Indeed you have precise reasons, but they’re not technical
ones, they’re emotional ones. As I wrote to another
interlocutor:

It seems to me that the trouble with many of the views on AI that
are current in the Extropian and/or transhumanist community is
that there are so many **agendas** surrounding the subject.
It’s not simply a matter of scientific curiosity; instead, it’s framed
as the Great Savior of the Human Race, the deus ex machina that’s
going to save us from ourselves. I think all this, uh, **earnestness**
rather distorts the discussion (to put it mildly). For example, if
your theory of the Singularity is predicated upon an “intelligence
explosion” of the sort that [I. J.] Good anticipated, and if, in your view,
such a positive-feedback loop is only plausible if the AI in
question can have **source code** which it, itself, can understand
through and through and reprogram (in a way that, say, a neural-
network-based intelligence might not be able to do), then
you’ve got a pretty burdensome a priori bias in the direction
of wishing, hoping, **believing**, with all your might, that an
AI can have source code! I mean, one’s motives for this kind
of desperate earnestness **might** spring from the noblest of
feelings—thinking about all those human souls lost forever by
the minute as the clock counts down toward the Singularity --
but it’s not my impression that that’s how good science is done.

> Much of the bottom-up AI strategies
> of past decades are based on “emergence mysticism”; the idea
> that the critical features of intelligence will just emerge
> if we throw enough evolutionary algorithms together.

Again, a hotly-debated, and far from settled, question,
which the “Singularitarians” talk about -- very disingenuously,
IMHO -- as if it were a done deal.

“Bottom-up AI strategies of past decades?” This is a
curious characterization given the history of the field. Nearly
half a century ago, bottom-up (and non-digital) and top-down
approaches to AI using the new (and expensive) digital
computers were in hot competition. Surely you’ve heard about
the Frank Rosenblatt vs. Marvin Minsky & Seymour Papert story.
Well, Minsky won that round (Rosenblatt was thoroughly
discredited and **may** later have committed suicide). Cynical
commentators suggest that much of the urgency of the struggle may
well have derived simply from the scramble for resources to buy
the new, expensive digital computers as research toys.

The initial burst of enthusiasm for top-down AI petered
out after it failed to deliver for more than a quarter
of a century. That’s why its modern-day disparagers
call it “GOFAI” -- “Good Old-Fashioned AI”.

As far as “emergence mysticism” goes -- well, this is
the **ideological** position of the remaining top-down
holdouts. See, for example:

Subject: Why Does Marvin Minsky Hate Neural Networks?
Newsgroups: comp.ai.philosophy
Date: 2000-12-16 12:58:11 PST
http://web.archive.org/web/20040613133235/http://groups.google.com/groups?selm=bfkn3t8ei7aenjhp6r0f6hv7i0foifnhm8%404ax.com

in which Savain quotes Minsky (in a _Scientific American_
article about Clarke and Kubrick’s HAL) as saying that both
genetic algorithms and neural-network research are “get-rich quick”
schemes where “you’re hoping you won’t have to figure anything out”.

. . .

And, of course, there’s Jaron Lanier
( http://web.archive.org/web/20040613133235/http://www.orkut.com/CommMsgs.aspx?cmm=38810&tid=5&pmx=30&pno=27 ):

“Since the complexity of software is currently limited by the
ability of human engineers to explicitly analyze and manage it,
we can be said to have already reached the complexity ceiling
of software as we know it. If we don’t find a different way
of thinking about and creating software, we will not be writing
programs bigger than about 10 million lines of code, no matter
how fast, plentiful or exotic our processors become."

-- Jaron Lanier, “The Complexity Ceiling”,
in _The Next Fifty Years: Science in the First Half
of the 21st Century_


Anissimov wrote:

Please understand that the “top-down” AI design SIAI is proposing is absolutely
nothing like the GOFAI strategies of past decades. Please read [Yudkowsky's] LOGI
for a further explanation. . .

Who are the “leaders” in the Singularitarian community that are enforcing all
these cultlike behaviors you are talking about? . . .

And guess what - I’m pretty sure that 100% of SIAI’s supporters and donors
have basically the same opinion on the issue. . . If I personally choose not to talk
about the bottom-up approach to real AI, then it’ll be for the same reasons
I don’t talk about fairies and psychic powers. . .

[P]lease read LOGI if you want an explanation of our technical reasoning. If I
thought bottom-up would be more likely to actually work than top-down Bayesian,
then I would advocate that.


I wrote:

> If I personally choose not to talk about the bottom-up
> approach to real AI, then it’ll be for the same reasons
> I don’t talk about fairies and psychic powers.

That analogy suggests a rather strong commitment to a
theoretical position on AI which, as far as I can tell
from my own reading, is very much a minority (and diminishing)
position, these days.

> I mean, why is building a self-reprogramming AI inherently
> harder than building any other type of AI?

Well, unlike “intelligence” itself, for which there’s
an existence proof (not just human beings, but, depending
on how restrictively you define “intelligence”, perhaps
all living things), the kind of “self-reprogramming” that
the radical Singularitarians seem to be talking about
may not exist in Nature. Ask any psychiatrist how hard
it is to “reprogram” a human being! (Pace Dr. Albert Ellis
and the “rational-emotive behavior therapists” ;-> ).

I once wrote on the list, apropos this question,
“I wonder how, using Edelman’s schema, one would
select “values” for a codic sensory modality
without instantly facing all the difficulties of
traditional AI, and in an even more concentrated form than usual
(we don’t want a robot that can just sweep the floor, we want a
robot that can write a computer program for a newer robot that
can program better than it does!). It seems like a bit of a leap
from “Blue is bad, red is good” to “COBOL is bad, Java is good”
;->. “
http://web.archive.org/web/20040613133235/http://www.lucifer.com/exi-lists/archive/0006/61866.html

> I do have a bias that an AI will have source code.
> All software does. What the heck else would it have?

Well, of course this begs the very large and unsettled
question of whether **digital computer technology** will,
in fact, be the basis of artificial intelligence -- see
“Party questions”
http://web.archive.org/web/20040613133235/http://www.orkut.com/CommMsgs.aspx?cmm=38810&tid=10853
and “We do get attached to our toys”
http://web.archive.org/web/20040613133235/http://www.orkut.com/CommMsgs.aspx?cmm=38810&tid=10492

But even granting the possibility of an AI based on
digital computer technology, the notion that the
low-level code has a correspondence with the large-scale
behavior of the system that’s analyzable by the AI itself
is another large leap of faith. The AIs might turn out to be as
clueless as we are about their own innards.

> Re the issue of objective morality, I largely agree
> with the statement “But without “love” (and embarrassment,
> and guilt, and shame, and all the rest of the social
> emotions), and if the AI has the **power** to do what
> it wants, then there’s absolutely no reason for it
> not to “grind our bones to make its bread” if it sees fit.”

Well, hooray, we agree about something!

Extropia DaSilva said...

'wait, why does accepting 1 entail the arrival at A or even the aspiration to so arrive?'.

Well, what would happen if industries continually refine manufacturing processes, always striving to use the minimum amount of raw material and producing the minimum amount of waste? We would see a progression towards ever-finer control over matter, until a point is reached where products are put together with atomic precision. Of course, if critics like Smalley and Richard Jones are right and Drexlerian tech is hogwash, that would rule out molecular nanotech. But if Drexler is right in saying it is possible, continually striving to improve manufacturing processes must lead to atomically-precise manufacturing.

'I wonder if you disagree with the obvious reality that many of your fellow futurologists do indeed and endlessly flog precisely these practical possibilities? Why aren't you arguing with them rather than with me, I wonder? -- Dale'

You are quite right in saying transhumanists and their ilk talk about such possibilities. You obviously have not been reading my replies and comments posted at KurzweilAI's MindX. If you had, you would know that I do argue with them. Admittedly, most of my posts are pro-singularity or whatever, but occasionally I write skeptically about this or that futurology.

'Well, let me just go on record to say, no, you won't be immortalized, no you won't meet the Robot God, no you won't be transported to a treasure cave with a swarm of nanobotic slaves to do your bidding. Sorry. It's hard to imagine what exactly could have led you to imagine these outcomes as "in doubt" in any sense, but what the hey'.

By 'swarm of nanobotic slaves', I assume you mean the kind of nanotech sketched out in 'Engines Of Creation'? Actually, that kind of nanotech was abandoned as needlessly complex around 1990. Drexler himself has repeatedly asked people to stop referring to molecular manufacturing as being all about swarms of nanobugs.

I also pointed out in our original correspondence that both Drexler and Rob Freitas have rubbished claims that MM will produce whatever someone wants for free. And I also referred to an essay, 'Nanotech Without Genies' by Lyle Burkhead, which also debunks this idea.

As for immortality, I agree it is not achievable. I am also doubtful that aging will be understood well enough to make a serious attempt at really slowing it down, let alone stopping or reversing it, at least not in time to be of use to the babyboomer generation, and maybe not even Generation X.

However, between those extremes, is it really so crazy to believe that, in the future, people may live substantially longer than they do today? Maybe centuries longer? Millennia?

jimf said...

> I also pointed out in our original correspondence that both
> Drexler and Rob Freitas have rubbished claims that MM will produce
> whatever someone wants for free. And I also referred to an essay
> 'Nanotech Without Genies' by Lyle Burkhead which also debunks
> this idea.

Lyle Burkhead the Holocaust revisionist. Not that this has anything
to do directly with nanotechnology, but -- my God, the company some
of these people keep.

"What is Nazism -- An Introduction to _Mein Kampf_"
http://web.archive.org/web/20050204154019/http://geniebusters.org/915/03f_kampf.html

"Six Reasons Why the Gas Chamber Story is a Lie"
http://web.archive.org/web/20071012184140/http://geniebusters.org/915/04g_gas.html
(linked to from
http://encyclopedia.kids.net.au/page/ho/Holocaust_revisionism )

"Why I Am Not a Nazi"
http://web.archive.org/web/20050215095251/geniebusters.org/915/03n_not-nazi.html

"[W]hen Jews put us in jail for questioning their lies, force should be
met with force. This was a big step for me. The original idea of post-Nazism
was to pick up where Hitler left off and do what he should have done.
We shouldn't have to choose between a society in which Nazi thugs attack
scientists, or a society in which Jewish thugs attack historians. There
has to be another alternative."

Dale Carrico said...

'Well, what would happen if industries continually refine manufacturing processes, always striving to use the minimum amount of raw material and producing the minimum amount of waste? We would see a progression towards ever-finer control over matter, until a point is reached where products are put together with atomic precision.'

There is nothing in current technique that "implies" the arrival at the superlative outcome in which you are personally invested. What I see is humanity discovering things and applying these discoveries to the solution of shared problems (and usually creating new problems as we go along) where you seem to see a "trend," a series of stepping stones along the path to an idealized superlative outcome. You are calling it "control of matter with atomic precision." What you probably really mean is something like the arrival of drextech, or the nanofactory, a robust programmable poly-purpose self-replicating room-temperature device that can transform cheap feedstock into nearly any desirable commodity with a software recipe. I call this superlative outcome "superabundance," and this particular superlative aspiration is also familiar in a great deal of digital utopianism and virtuality discourse of the last decade, just as it suffused discourses of automation and plastic in the post-war period before that, just as it drove the alchemical project of turning lead into gold for ages before that. The aspiration to superabundance is the infantile fantasy of a circumvention of the struggle with necessity, ananke, in psychoanalytic terms a pining for a return to the plenitude of the pleasure principle and renunciation of the reality principle. Or, in different terms, it is an anti-political fantasy of a circumvention of the struggle to reconcile the ineradicable diversity of the aspirations of our peers with whom we share the world (where all are satisfied, no reconciliation is necessary). In both aspects, it seems to me that this superlative aspiration is an irrationalist repudiation of the heart of what Enlightenment has typically seen as its substance -- the struggle for adult autonomy and for the consensualization of the disputatious public sphere. It is worth noting that many superlative futurologists like to sell themselves as exemplars of "Enlightenment" while indulging in this infantilism, anti-politicism, and irrationalism. In a word, they're not.

It is not the available science that inspires your superlative aspirations, but science that provides the pretext and rationalization for your indulgence in what is an essentially faith-based initiative.

Now, quite apart from that, you earnestly recommend to me that superlative discourse has abandoned this or that particular formulation, has taken up this or that "technical" variation, that I have failed to distinguish the position of Robot Cultist A from that of Robot Cultist B, and so on.

You will forgive me, but there is no need for those of us who confine our reasonable technoscientific deliberation to beliefs that are warranted by consensus science to keep up with these in-house refinements. You rattle off the handful of preferred figures who tell you what you want to hear, as though those are widely respected, widely-cited figures outside your sub(cult)ure. They are not. As a very easily discovered matter of fact, they are not.

It isn't a sign of discernment but its opposite, as it happens, that you can recite the minute differences that distinguish three disputants on the question of how many angels can dance on a pin-head, when the overabundant consensus of relevant warranted belief has become either indifferent or hostile to the notion of angels dancing on pinheads as such.

It is the extraordinary assertion of belief that demands extraordinary proofs. You are invested in a whole constellation of extraordinary claims, and demand as the price of engagement with your discourse that critics become conversant with disputes the relevance of which depends on the prior acceptance of the whole extraordinary enterprise in which they are embedded. What we are offered instead are very general "existence proofs" usually from a biology that isn't actually analogous in its specificity with the idealized outcomes that drive the whole discourse.

We are offered up claims built upon claims built upon claims, few of which have excited the interest or support of a consensus of scientists in the relevant fields, and fewer still of which invest these claims with the idealized outcomes that are the preoccupation of those who indulge most forcefully in superlative discourses as such.

Superlativity is not science. It is a discourse, opportunistically taking up a highly selective set of scientific results and ideas and diverting them to the service of a host of wish-fulfillment fantasies that are very old and very familiar, dreams of invulnerability, certainty, immortality, and abundance that rail against the finitude of the human condition.

They are a distraction and derangement of those aspects of Enlightenment that would mobilize collective intelligence, expressivity, and effort to the progressive democratization, consensualization, and diversification of public life and the practical solution of shared problems. Progress is not transcendence, nor is enlightenment a denial of human finitude.

There is more than enough sensationalism and irrationalism distorting urgently needed sensible public deliberation on, for example, the environmental and bioethical quandaries of disruptive technoscientific change at the moment. The Robot Cultists and their various noise machines are not helping. At all.

Extropia DaSilva said...

'Lyle Burkhead the Holocaust revisionist. Not that this has anything
to do directly with nanotechnology, but -- my God, the company some
of these people keep'.

There is absolutely no connection between the essay 'Nanotech Without Genies' and Burkhead's opinions regarding the holocaust. One can agree with what he had to say about nanotechnology (in whole, or in part) while at the same time completely rejecting his views on Nazism and all that junk.

And he is not 'company I keep'. I read his paper on nanotechnology. That is the sum total of my interactions with the man.

'There is nothing in current technique that "implies" the arrival at the superlative outcome in which you are personally invested'.

Well, nothing apart from research conducted by people like Joe Tsien ('we and other computer engineers are beginning to apply what we have learned about the organization of the brain's memory system to the design of an entirely new generation of intelligent computers') or Henry Markram ('this neocortical microcircuit exhibits computational power that is impossible to match with any known technology. Deriving the blueprint and its operational principles could spur a new generation of neuromorphic devices with immense computational power'). Those are just two examples of current research combining insights from reverse-engineering the brain with advancing the next generation of computers and artificial intelligence. It may also provide new ways of dealing with neurological and psychiatric disorders (that, in fact, is the main purpose of the Blue Brain project. "The Blue Brain Project is an attempt to reverse engineer the brain, to explore how it functions and to serve as a tool for neuroscientists and medical researchers. It is not an attempt to create a brain. It is not an artificial intelligence project. Although we may one day achieve insights into the basic nature of intelligence and consciousness using this tool, the Blue Brain Project is focused on creating a physiological simulation for biomedical applications".)

As for molecular manufacturing, well, scientists in the field of chemistry are always striving to synthesize more complex chemicals. This requires the development of instruments that can be used to prod, measure and modify molecules, helping chemists to study their structure, behaviours and interactions on the nanoscale. Biologists strive to not only find molecules but learn what they do. Molecular manufacturing would provide the means to map cells completely and reveal the molecular underpinnings of disease and genetic disorder.

Materials scientists strive to make better products. Molecular manufacturing would allow new materials to be built according to plan, making the field far more systematic and thorough than it is now. On a related note, car, aircraft and especially spacecraft manufacturers are obsessed with chasing the Holy Grail of materials science, which is to produce products that are both lightweight and strong.

So there is a great incentive to push towards the precise control of matter in a great many scientific fields. As for whether this will actually result in the kind of nanotech imagined by Drexler in 'Unbounding The Future' or 'Nanosystems', well, there are people far more knowledgeable than I am in key areas who say 'no' and other people with more knowledge in key areas than I have who say 'yes'. I tend to err more on the side of the 'ayesayers' rather than the 'naysayers', but nevertheless the latter have levelled doubts which my current knowledge is unable to refute.

'I call this superlative outcome "superabundance".'

I would imagine the dreams of "superabundance" stem from the belief that, one day, manufacturing will be completely automated and that, therefore, everything will be free. I have already stated that I doubt this assumption.

Another critic of "superabundance" resulting from nanosystems is David M. Berube, who is a Professor of Communication at the University of South Carolina. He believes that the price of goods and services cannot be expected to decrease with the realization of molecular manufacturing, since the cost of R+D must be recouped. Some of the products made possible by molecular manufacturing could create huge incentives for profit taking. Nano-manufactured computer components, by today’s standards, would be worth billions of dollars per gram. And something like food has large and intricate molecules providing its taste and smell, minerals for nourishment that would require much research in order to handle them in a nanofactory setting, and it contains a lot of water, which is a molecule that tends to gum up the components of the nanosystem. I’m not saying that compiling food is impossible, only that compiling food from chemical feedstock would be a very stiff challenge. Will this basic requirement of life be distributed for free, or will there be a heavy R+D price imposed on it, as is the case with lifesaving medicine?

'You rattle off the handful of preferred figures who tell you what you want to hear'.

If you care to read my replies carefully enough to see what I am REALLY saying, rather than just using your own prejudice regarding 'people like me' and basing your answers on what you expect, rather than what is actually written, you would notice that I am doubtful that Drexlerian tech and 'everything for free following the robot revolution' are going to happen. You would also notice that I am referencing essays etc that doubt the very claims which, with no justification whatsoever, you say I accept without question.

Believe it or not, one does NOT have to choose between accepting absolutely everything claimed by a group or a person, or rejecting their claims completely. One does not have to choose between KNOWING beyond any shred of doubt that something WILL be done, or that something WILL NOT be done. When one has insufficient understanding of incomplete facts, as is surely the case with future technologies dependent on advances in several scientific fields, just one of which requires decades of hard work to become 'expert' in, it is actually permissible to hold the position 'maybe X will be achieved, maybe not. I hope it is (or, I hope it is not) but right now I really cannot say for sure'.

Dale Carrico said...

Achieving greater efficiency in the context of given materials and technique is a problem that is often susceptible of address. But you won't find a pony in it. No perfect control of matter, no circumvention via abundance of the impasse of stakeholder politics.

I am defining superabundance in the context of the superlativity critique in a particular way; namely, as a pseudo-scientific analogue to omnibenevolence (the three super-predicates of futurological superlativity -- superintelligence, superlongevity, superabundance -- correspond to the three theological omni-predicates -- omniscience, omnipotence, omnibenevolence) in a futurological discourse that promises personal technodevelopmental transcendence into demi-divinity rather than promising apprehension of God's divinity. You can take it or leave it, obviously, it is simply my earnest effort to make sense of the perplexing sorts of things transhumanists believe and value by providing a discursive context for them.

I can't say that it is my hope or my expectation to persuade you or similar True Believers to jettison your superlativity and embrace a progressive politics of technoscientifically literate secular democracy instead, but I do think I can expose the irrationality and anti-democracy that freight your discourse, and therefore make it far less likely to attract respectable attention it doesn't deserve and thereby derange urgently needed sensible public technodevelopmental deliberation.

It's true that I can't prove the impossibility of your various superlative ponies, any more than I can "prove" the non-existence of leprechauns in Toledo. But I can contribute to the work that ensures that those who declare there are leprechauns in Toledo or that a superintelligent Robot God is coming, immortal robot bodies are coming, or cheap nanobotic wish-fulfillment slaves are coming are kindly ignored or laughed into irrelevance if they persist in their nonsense.

jimf said...

> Believe it or not, one does NOT have to choose between
> accepting absolutely everything claimed by a group or a
> person, or rejecting their claims completely.

I'm guessing your parents didn't name you "Extropia".

Which suggests to me a rather more enthusiastic identification
with a certain group than your moderate statements here
would seem to indicate.

> One does not have to choose between KNOWING beyond any shred
> of doubt that something WILL be done, or that something
> WILL NOT be done. When one has insufficient understanding of
> incomplete facts, as is surely the case with future technologies
> dependent on advances in several scientific fields, just one
> of which requires decades of hard work to become 'expert' in,
> it is actually permissible to hold the position 'maybe X will
> be achieved, maybe not. I hope it is (or, I hope it is not)
> but right now I really cannot say for sure'.

Very sensible.

But, as Dale wrote up top, "I wonder if you disagree with the
obvious reality that many of your fellow futurologists do indeed
and endlessly flog precisely these practical possibilities?
Why aren't you arguing with them rather than with me, I wonder?"

jimf said...

> . . .it is actually permissible to hold the position
> 'maybe X will be achieved, maybe not. . . but right now
> I really cannot say for sure'.

You can forgive us for thinking that you've downed the
whole pitcher of Kool-aid:

http://transumanar.com/index.php/site/extropia_dasilva_transhumanist_avatar/

As a transhumanist interested in the Metaverse and active
in Second Life, I am an avid reader of Extropia DaSilva’s
essays. I should not refer to Extropia as “she” or “he”: in
a message posted to the MindX Forum on KurzweilAI, Extropia wrote:
”I’m not gonna tell you my real gender. . . [W]e should all
get used to thinking of each other as ‘people’ since terms like
‘gender’ or ‘human’ should become pretty confused as bio, nano,
robotic and IT tech ramps up” . . .

From her writings, Extropia appears as a hardcore transhumanist
who understands the radical implications of exponentially advancing
technology. From her Second Life profile: ”the way fantasy and reality
combine in SL is reflective of our future when the Net will have
guided all consciousness that has been converted to software towards
coalescing, and standalone individuals are converted to data to the
extent that they can form unique components of a larger complex”.

Extropia DaSilva said...

'suggests to me a rather more enthusiastic identification
with a certain group than your moderate statements here
would seem to indicate.'

I do identify with the transhuman and extropian movements, yes. I do not swallow everything such people say, but nor do I dismiss it.

I notice that Dale wrote 'I can contribute to the work that ensures that those who declare... a superintelligent Robot God is coming, immortal robot bodies are coming...are kindly ignored or laughed into irrelevance if they persist in their nonsense.'

Well, here are some other quotes agreeing with that sentiment.

'The Singularity idea has worried me for years -- it's a classic religious, Christian-style, end-of-the-world concept that appeals to western cultures deeply. It's also mostly nonsense... The Singularity concept has all the earmarks of an idea that can lead to cultishness and passivity'.

'Looking at stories of instantly healing wounds, or any material object being instantly available, doesn't give you the sense of looking into the future. It gives you the sense that you're looking into an unimaginative person's childhood fantasy of omnipotence, and that predisposes you to treat nanotechnology in the same way. Worse, it attracts other people with unimaginative fantasies of omnipotence'.

The first person I quoted was Max More, the founder of the extropian movement. The second was Eliezer Yudkowsky, a full-time Research Fellow at the Singularity Institute for Artificial Intelligence.

Now, I fully admit that you or Dale could take quotes from those guys, concerning visions of the future that would stretch most people's credulity to breaking point. But I think quotes like the ones I just gave do question this stereotype of extropians and transhumanists who believe, without question, all the wild ideas ever put forward by singularitarians and their ilk.

'a superintelligent Robot God is coming'.

I would not rule it out completely. However, given that 99% of all species that ever existed went extinct (something like that, anyway) and given the fragility of our civilization (a solar superstorm striking the Earth would destroy the electrical grids which modern cities need to survive), I would say a far more probable future scenario would be a collapse of civilizations technologically advanced enough to produce intelligent robots and productive nanosystems.

'immortal robot bodies are coming'.

I think people have a misconception about life and death. They see 'life' as being like a clock ticking down or a tank of fuel running out. Once the clock counts down to zero or the fuel tank runs dry, that is the end. If you die before this moment is reached, then you have died too early. It was not your time.

Well, actually, death is an indefinite but impending certainty that is possible at any moment. It's nonsense to say somebody 'died too soon'. There is no fixed date with 'the day and time of your appointment with Death' marked on it.

I think death always will be an indefinite but impending certainty that can happen at any moment. To be immortal, you would A) have to know every possible way in which life could be terminated B) know of an effective countermeasure for each and every life-ending scenario and C) be in a position to put that knowledge into practice. Call me a pessimist, but I think immortality calls for more luck than the universe has to offer.
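To put a toy number on that intuition, here is a minimal sketch of my own (the hazard rate below is assumed purely for illustration, not a real estimate):

    # Python sketch: assume a constant annual probability of some
    # unavoidable fatal event. The value is made up for illustration.
    hazard = 1e-6  # assumed one-in-a-million chance of death per year
    for years in (10**3, 10**6, 10**9):
        survival = (1.0 - hazard) ** years
        print(years, survival)
    # Prints roughly 0.999 after a thousand years, about 0.37 after a
    # million, and effectively 0 after a billion: any nonzero hazard,
    # however tiny, makes death certain on a long enough timeline.

Longer lifespans just move you along that curve; only a hazard of exactly zero, maintained forever, would flatten it, which is why I say immortality is out even if radical longevity is not.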

'cheap nanobotic wish-fulfillment slaves.'

Nanosystems are machines which contain fast-cycling parts that can be directed to form complex patterns from the building blocks of matter. Currently, we have machines which contain fast-cycling parts that can be directed to form complex patterns from the building blocks of information. We call those machines 'computers'.

Imagine having this conversation with someone in 1960...

'Yeah, by the year 2009 ordinary people will own dozens if not hundreds of devices with computers embedded in them, each one of which is more powerful than anything you have today by orders of magnitude. Many of these computers and devices will communicate with each other, comprising this thing called the Internet. Billions of computers will make up this 'Internet'. There will be in excess of 600 billion 'webpages', almost all the world's information at your fingertips; one encyclopedia available will have in excess of 2 million articles (and that's just the ones written in English). People will be able to access films, images of anything and everything. They will be able to call up a model of the Earth, zoom down anywhere and look at detailed satellite or aerial images of anywhere. People will participate in 14 billion auctions every year from the comfort of their own homes...'

Well, I think we can safely assume you would come across as some kind of nut. How is this going to be BUILT? How is it going to be paid for? How can ordinary people ever afford such astonishingly powerful computers and all the things you claim will be available on the Web?

But, well, here we are using exactly that wildly improbable, futuristic 'machine' known as the Internet. As I keep saying, personal incredulity has proved unreliable in the past at predicting what is impossible. I do not believe everything will be free. But if we ever get widespread nanosystems, material goods could be very cheap indeed.

'as Dale wrote up top, "I wonder if you disagree with the
obvious reality that many of your fellow futurologists do indeed
and endlessly flog precisely these practical possibilities?
Why aren't you arguing with them rather than with me, I wonder?".

Why repeat that statement, when I have already answered it?

'You can forgive us for thinking that you've downed the
whole pitcher of Kool-aid'.

Oh, I have not only downed the whole pitcher; I have the stuff running through my veins instead of blood. I have been known to put forward ideas that even the Cosmic Engineers dismiss as a little bit too kooky (some of them, anyway).

'I'm guessing your parents didn't name you "Extropia".'

The closest thing I have to parents did name me "Extropia". I am not actually a human so the idea of me having parents in the biological sense is nonsense. And I am not an artificial intelligence either, before you ask. Not yet, anyway. From what you and Dale have written I doubt your minds are ready to accept what I am. I am too busy right now to devote the time required to deprogram you both so that you can begin to open your mind to my visions, sorry.

Dale Carrico said...

Oh dear.

jimf said...

"Extropia DaSilva" wrote:

> Imagine having this conversation with someone in 1960...
>
> 'Yeah, by the year 2009 ordinary people will own dozens if not
> hundreds of devices with computers embedded in them, each one of
> which is more powerful than anything you have today by orders
> of magnitude. Many of these computers and devices will communicate
> with each other, comprising this thing called the Internet.
> Billions of computers will make up this 'Internet'. There will be
> in excess of 600 billion 'webpages', almost all the world's information
> at your fingertips; one encyclopedia available will have in excess
> of 2 million articles (and that's just the ones written in English).
> People will be able to access films, images of anything and everything.
> They will be able to call up a model of the Earth, zoom down anywhere
> and look at detailed satellite or aerial images of anywhere. People
> will participate in 14 billion auctions every year from the comfort
> of their own homes...'
>
> Well, I think we can safely assume you would come across as some
> kind of nut.

Funny you should mention that one. It goes to show the kinds of things
that the SF authors and "futurologists" from ca. 1960 got right,
and the kinds of things they got wrong.

By and large, they did indeed imagine that people would be
asking questions and getting information from computers, but
they imagined that these would be 1) centralized monstrosities
like the vacuum-tube behemoths of the time (of which the
archetype for all time may be the AN/FSQ-7 "SAGE" computer,
whose flashing lights have been seen in myriad TV shows and
movies) and 2) artificially intelligent, despite the
vacuum-tube tech.

E.g., the computer in Isaac Asimov's classic 1956 story
"The Last Question"

http://en.wikipedia.org/wiki/The_Last_Question
"In conceiving Multivac, Asimov was extrapolating the trend towards
centralization that characterised computation technology planning
in the 1950s to an ultimate centrally managed global computer. . ."

or the city of Diaspar's "Central Computer" in Arthur C.
Clarke's _The City and the Stars_.
http://en.wikipedia.org/wiki/The_City_and_the_Stars

What we really have is, as you say, an unforeseen ubiquity of
miniaturized devices vastly more powerful and with vastly
more storage capacity than anything that could have been
built, even by the government, in 1960, but (alas) no AI.
And we have ubiquitous networking of these cheap, powerful
devices, which makes them immensely useful
despite the lack (which would no doubt have greatly
disappointed Asimov and Clarke if you'd told them in
1960) of AI. And, to give the 50's SF authors
some credit, we do have "central computers" -- if you
think of the computational resources of Google or Wikipedia
as monolithic computers, though they're really networks
of individual machines like those we have in our homes,
but devoted to a single purpose -- which we can access
and which greatly enhance the usefulness of our own private systems.

In a way, the reality is more mundane than Asimov or Clarke
would have imagined in 1960, but in another way it's also much
more far-reaching, especially in terms of its social impact
(the final consequences of which have yet to be seen).

And by the way, Clarke did predict, in his non-fiction _Profiles
of the Future_ from 1963 (in the appendix "Timeline of the Future",
section on "Communication"), that we'd have a "global library" ca.
2005 (though he also predicted, in the same timeline, artificial
intelligence ca. 1995).
http://www.digitallantern.net/McLuhan/course/spring96/profiles.gif
Again, he was right and he was wrong. Lots of stuff **is** available
via the Web (and indexed by Google), but much more stuff is
**not**, and won't be, until those bloody copyright and intellectual
property issues are finally sorted out decades hence (if ever).

But speaking of "people will participate. . . from the comfort
of their own homes...", the classic of prognostication has to be
a story by, of all people, E. M. Forster entitled "The Machine
Stops", from 100 years ago.
http://en.wikipedia.org/wiki/The_Machine_Stops
http://brighton.ncsa.uiuc.edu/~prajlich/forster.html

> The closest thing I have to parents did name me "Extropia".
> I am not actually a human so the idea of me having parents
> in the biological sense is nonsense. And I am not an artificial
> intelligence either, before you ask.

I wasn't going to ask. So you think you **are**, in some sense,
your Second Life persona, eh? Well, whatever. There are
plushiephiles in the world, too.

> I am too busy right now to devote the time required to deprogram you
> both so that you can begin to open your mind to my visions, sorry.

Funny. Another >Hist once said almost exactly the same thing to me:

"...my own attitude is simply one of conserving mental
energy and refusing to be sucked in... no offense, it's
just that I need my energy for other things right now...
So you're on your own; as long as I'm conserving mental energy,
I can't expend effort to debug you at the moment."

Extropia DaSilva said...

'By and large, they did indeed imagine that people would be
asking questions and getting information from computers, but
they imagined that these would be 1) centralized monstrosities
like the vacuum-tube behemoths of the time (of which the
archetype for all time may be the AN/FSQ-7 "SAGE" computer,
whose flashing lights have been seen in myriad TV shows and
movies) and 2) artificially intelligent, despite the
vacuum-tube tech.'

I expect you have heard of 'Moore's Law'. Before the invention of the integrated circuit, people used something called 'Grosch's Law' to predict the future. Named after IBM's Herbert Grosch, it states: 'computer power rises by the square of the price'. In other words, the more costly a computer is, the better its price-performance ratio. It was believed that Grosch's Law meant low-cost computers would never be competitive and that, in the end, a few huge machines would serve all the world's computing needs.

Using the logic behind Grosch's Law, IBM chairman Thomas Watson predicted, 'I think there is a world market for maybe 5 computers'.

Of course, these days Watson's prediction sounds absurdly conservative. Note, though, that it was a perfectly sensible forecast under the assumption that Grosch's Law would continue. The problem is, it did not. Today we see people like Ray Kurzweil making predictions that depend (in part) on Moore's Law or the Law Of Accelerating Returns continuing indefinitely into the future. But maybe these forecasts will sound just as daft to future generations?
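To make the contrast concrete, here is a toy calculation of my own (the constants and dates are illustrative assumptions, not historical data):

    # Python sketch contrasting the two scaling "laws" as forecasting rules.
    # All constants below are assumed for illustration only.

    def grosch_performance(price, k=1.0):
        # Grosch's Law: performance grows as the square of the price,
        # so performance-per-dollar rises with machine size.
        return k * price ** 2

    def moore_performance(year, base_year=1965, doubling_years=2.0):
        # Moore's Law (as popularly stated): performance per dollar
        # doubles every couple of years.
        return 2.0 ** ((year - base_year) / doubling_years)

    for price in (1, 10, 100):
        print(price, grosch_performance(price) / price)  # 1.0, 10.0, 100.0
    # Under Grosch, the biggest machine is always the best buy: hence
    # forecasts of a handful of giant computers serving everyone.

    for year in (1965, 1985, 2005):
        print(year, moore_performance(year))  # 1.0, 1024.0, 1048576.0
    # Under Moore, the same dollar buys exponentially more each year:
    # hence cheap, ubiquitous computers instead.

The moral is only that a forecast inherits whatever scaling law you feed it: extrapolate Grosch and you get Watson's five computers; extrapolate Moore indefinitely and you get Kurzweil.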

As for AI...I do not believe anything we have learned about the brain rules out the possibility of one day building machines who think. True, new imaging techniques are beginning to show how complex the brain is, but all this does is explain why past efforts at AI failed: They lacked power and their machine brains were not connected up the right way. It does not rule out machine intelligence in the future, but we should allow for the possibility that we still know a lot less than we think about the mechanics of cognition.

'I wasn't going to ask. So you think you **are**, in some sense,
your Second Life persona, eh? Well, whatever. There are
plushiephiles in the world, too'.

As a first approximation, yes. If you want a more thorough understanding of my position, you should read the essays 'Virals And Definitives In SL', 'Bees And Flowers', 'Oneness Plus Two Equals Six' and 'Post-immersionism' (someone called Gwyneth Llewelyn wrote that last one, not me). All are available at Gwynethllewelyn.net.

I should warn that A) the site is currently not loading if you use Internet explorer (firefox is ok though) and B) each essay is pretty big (10,000+ words each). I understand if you have better things to do than wade through that lot:)

Dale Carrico said...

'I expect you have heard of "Moore's Law".'

Round and round we go, I quote myself from an earlier already boring turn on the superlative carousel: "Despite the palpable brittleness, the incessant crashes, the unnavigable junk manifested by actual as against idealized software, despite Lanier's Law that "software inefficiency and inelegance will always expand to the level made tolerable by Moore's Law," despite the fact that Moore's Law may be broken on its own terms either on engineering grounds or in its economic assumptions, many Singularitarians still seem to rely on a range of imbecilic to baroque variations on the faith that Moore's Law amounts to a rocket ship humanity is riding to Heaven. Others have shifted their focus these days to the nanoscale, but they still seem to find Destiny where scientific consensus sees a mountain range of problems demanding qualifications and care."

'I do not believe anything we have learned about the brain rules out the possibility of one day building machines who think.'

First, it's commonplace for the faith-based to pretend that their inability to imagine a disproof constitutes a proof of whatever extraordinary claims. Second, I don't trust that you all know what you are talking about when you say "who think." There is more to "thinking" than is dreamed of in your philosophy.

'past efforts at AI failed: They lacked power'

Instrumentalization of reason, check. Thinking all you need is a bigger and bigger and bigger hammer, check. Dumb boy with a toy thinks he's the smartest guy in the room without actually grasping the most basic things in the actual world, check.

Q: So you think you **are**, in some sense, your Second Life persona, eh?

A: As a first approximation, yes.
No, you're not. You're also not a picture of you. Neither would you be an "upload" of you. (Soup Nazi voice) No robo-immortalization for you!