Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, October 18, 2010

Must I Really Weigh In On "The Cult Debate"?

For me personally, it is hard to imagine a more surreally irrelevant distraction from the substance of my critique of superlative futurology than debating whether or not my derisive use of the phrase "Robot Cultists" to describe superlative futurologists is strictly correct according to somebody's dictionary definition of what a "cult" is. I have pointed out that I could always use instead of "Robot Cult," after all, the less concise but to me roughly synonymous phrase "defensive-evangelizing-sub(cult)ural-membership-formation-organized-around-highly-marginal-but-strongly-held-ideological-beliefs-involving-personal-and-historical-techno-transcendence-which-are-expected-to-sweep-the-world-led-by-would-be-gurus-few-of-whom-are-known-outside-the-sub(cult)ure-itself-but-is-not-a-cult-according-to-the-letter-of-your-dictionary-definition-so-stop-saying-that!" But I cannot help wondering: would the Robot Cultists really like that any better?

Sometimes I find it difficult to determine whether an interlocutor's turn to this (to me) rather trivial non-question is a result of an unserious person literally incapable of taking my serious questions seriously, or an effort at distraction on the part of an organizational opportunist trying to divert attention away from a threat in a fairly obvious PR move, or simply the sort of thing that happens when perfectly likeable but earnestly dull people don't know exactly how to deal with substantive critiques that happen to be sprinkled with little bits of irony and facetiousness and wit and are therefore a little harder to read than is, say, People Magazine.

Were the Extropians a cult? Is cryonics a scam? Are singularitarians a Kurzweil fandom or engaged in some geek headgame variation of a kind of silly eXtreme sport for boys? What about people who call themselves transhumanists, who declare themselves to be part of a "movement," to have a transhumanist "identity," some of whom are literally members in "transhumanist"-identified membership organizations and so on? Are they more like a science fiction fandom for folks who prefer the quasi-nonfiction futurist subgenre of science fiction? or more like members of a marginal not-particularly-coherent fledgling school of philosophy? or a noisy flashy sub(cult)ure that has attracted attention from mainstream media outlets out of proportion to its size? or an ideology trying to make a political movement or a political party but just unusually inept in these efforts? or a marketing scheme for a handful of wannabe gurus slash public intellectuals?

Are the ferocious fans of Ayn Rand's screeds and romance novels strictly speaking a cult, given their ongoing organized existence and annoying inability to talk sense? Are Scientologists still a cult once they have arrived at a certain number of adherents and garnered a certain amount of real estate and legal resources? If yes, is Mormonism a cult? If no, is Mormonism a cult? What about rabid pop fandoms and online conspiracist sub(cult)ures? What are they, and is transhumanism whatever it is that they are?

These questions are all interesting questions, I suppose, but I can't say that these are the questions about superlative futurology as a discursive phenomenon to which I have devoted the lion's share of my own critical attention. As far as I can tell, a debate about any of them would fail to provide the grounds for a substantive response to my critiques of superlative futurology.

I do think there are things about especially organized transhumanist discursive formations which get a little bit culty, certainly enough so to upset (in a good way, to my mind) especially the real True Believer types or defensive organizational figures who tend to gravitate into conversation with me here on this blog. To be honest, it's hard for me to see how a sub(cult)ural ideological futurological formation freighted with explicit promises of personal and historical transcendence (even if "techno-transcendence") is not going to have some culty paraphernalia about it, after all, especially to the extent that it remains marginal and defensive, as the transhumanists-singularitarians-technoimmortalists-etalia certainly all are. If pointing out that obvious sort of thing freaks the Robot Cultists out, so much the better.

But setting all that aside, for the moment, it should be plain to the meanest intelligence devoting any time or attention at all to my many critiques of superlative and sub(cult)ural futurological formations (both organized and discursive) -- many of which are both topically and chronologically archived at the sidebar for anybody who actually wants to know what it is they are talking about before excoriating me for my so-called distortions and dishonesties -- that I tend to say a few basic things, over and over again:

First of all, I describe futurological marketing and promotional discourse as the prevailing, definitive discourse of contemporary capitalism in what is otherwise described as its current neoliberal/neoconservative corporate-military developmental-networked mode, and I declare that superlative futurology is most usefully understood as an especially illustrative and structurally clarifying extreme set of variations on -- or symptoms of -- that prevailing or mainstream futurology.

In the introduction to the Superlative Summary (the most sprawling -- also, admittedly, daunting and, after all, sometimes repetitive -- chronological archive of my critiques of superlative futurology over the years) I write, for example, that "[t]here is considerable overlap between… mainstream and superlative futurological modes, [since] both share a tendency to reductionism conjoined to a (compensatory?) hyperbole bordering on arrant fraud, not to mention an eerie hostility to the materiality of the furniture of the world (whether this takes the form of a preference for financialization over production, or for the digital over the real), [as well as] the materiality of the mortal vulnerable aging body, the materiality of the brains, vantages, and socialities in which intelligence is incarnated, among many other logical, topical, and tropological continuities."

In a piece I posted just yesterday, I made (yet again) the second, substantial claim that recurs in my actual critique:
[W]hatever its insistent but superficial scientificity, the substance and primary work of superlative futurology remains, as it always has been primarily:
one -- either ideological, consisting in prophetic utterances in the form of hyperbolic threat/profit assessments and marketing/promotional discourse wrapped in superficially technoscientific terminology providing incumbent-elite corporate-industrial interests rationales to justify continued profit-taking at the expense of majorities

two -- or theological, consisting in priestly utterances in the form of apocalyptic warnings of looming total catastrophes but also promises to the faithful of a techno-transcendence of mortality via super-longevity, error and humiliation via super-intelligence, and stress and worldly defeat via super-abundance providing both reassurance and consolation especially in the midst of the economic and ecologic distress of neoliberal-neoconservative technodevelopmental planetary precarization.

To return yet again to my Introduction to the Superlative Summary, I elaborate this second substantive point there as well, saying:
The characteristic gesture of superlative, as against mainstream, futurological discourses will be the appropriation of worldly concerns -- such as the administration of basic healthcare, education, or security, say -- redirected (in a radically amplified variation on conventional marketing and promotional hyperbole) into a faith-based discourse promising not just the usual quick profits or youthful skin but the promise of a techno-transcendence of human finitude, a personal transcendence modeled in its basic contours and relying for much of its intuitive plausibility on the disavowed theological omnipredicates of a godhood (omniscience, omnipotence, omnibenevolence) translated instead into pseudo-scientific terms (superintelligence, superlongevity, superabundance).

Again, I can see how a discussion of the relative cult-likeness or not of the various sects or flavors or genres of transhumanist-singularitarian-technoimmortalist-nanocornucopiast-geoengineering discourses, organizations, subcultures, whatever might lead us to nibble around the edges of some of my actually stated concerns about superlative futurology, but, frankly, it is hard to see how an exclusive or sustained focus on the cult debate is anything but a failure of intelligence, honesty, or nerve. As I said yesterday, I continue to welcome any serious engagement with my actual critique and especially welcome evidence of the dishonesty and distortion I regularly get accused of by some of the most foolish and most culty of the Robot Cultists (insert longer, unwieldy but just as damning phrase provided above here if so inclined, it makes no difference to me) in the Moot.

39 comments:

Dale Carrico said...

I wonder if even the best superlative futurologists really are too stupid and/or too dishonest to engage with this post on its terms. I hope not, I expect so. Each day they refuse to engage, I will post crickets chirping here. If they do make the attempt and their efforts are ridiculous, they should anticipate exposure to ridicule. I am truly and earnestly eager to be shown wrong that these are the only two non-responses I am likely to receive.

Martin said...

It's a difference of style. In science, words have precise meanings and you don't use them loosely. When a New Age type talks about "energy" or "vibrations," they can't really define what they mean by that. But in science, you must precisely define (and even be able to quantify) the many forms of energy.

You have a background in rhetoric, so you use words in different ways, and for different purposes, than scientists do. Your long-time readers understand that. Someone like Ben doesn't.

Dale Carrico said...

So, when Giulio Prisco declares that I make the equivalent of the statement that 4+4=2 and not only deny that 2+2=4 but insult those who say so, this is a difference created by the "imprecision" of my playful usage of terms as opposed to Prisco's precise use as a "serious scientist"? This imprecision of mine and not theirs accounts for why Ben Goertzel quoted that assertion by Prisco and affirmed it as the reason he has decided I distort arguments with which I disagree and so there is no reason to engage with my criticisms?

Can I assume that the precision of scientific terminology is in evidence in the documents collected at the Order of Cosmic Engineers, which both Goertzel and Prisco helped found together? It seems to me that it is precisely New Age rhetoric that one is reminded of when reading those materials by these oh so strict-brained scientists.

Now, you obviously haven't asserted that, I am just wondering if such a position is entailed by what you do indeed seem to be claiming on their behalf -- in asking this question am I outrageously distorting your views, am I insulting you somehow? Do you think I say these things to mistreat you or to understand the substance and implications of your claims?

If Prisco and Goertzel are indeed capable of loose talk in their advocacy as "Cosmic Engineers," when and where else can that be true without beginning to threaten the firewall you are erecting between their strictness and my own playfulness as the supposed source of our differences? Is it only in the argument about whether or not I can reasonably notice sub(cult)ural formations of futurology can get a bit, er, "culty" that my own "loose talk" becomes a problem? Was strict scientific thinking similarly in evidence in Goertzel's arguments about transtopian Nauru, the ones that actually occasioned these exchanges as well as the charges that I am distorting their views in drawing from them entailments or satirizing them in ways they dislike?
All that aside, I still do not agree that Ben Goertzel and Giulio Prisco deserve to be regarded as scientists more than rhetoricians in making their futurological claims. Although transhumanists and singularitarians are eager to paint such refusals of mine as signs of my menacing humanities relativism or woozy illiteracy it is in fact precisely because I respect the role of consensus science in the administration of a world equal to our shared planetary problems that I refuse to grant superlative futurology the status of scientificity it craves for its religious and ideological promises. I think you may underestimate a bit the precision that drives no small amount of my rhetorical formulations, as it happens. But, come what may, I certainly see the sense of the sort of style difference you are proposing here as the source of certain mis-communications -- it's a venerable problem, after all, Snow's Two Cultures again, or even more venerably Huxley versus Arnold again -- but I honestly don't think that it is really in play so much in the case at hand.

Mitchell said...

"Each day they refuse to engage, I will post crickets chirping here."

Who can resist a challenge like that? But let me respond by just summing up where I agree and disagree with your critique, Dale.

I agree that transhumanists can say and do foolish and crass things. I agree that some of their characteristic notions about reality and the future will prove to be naive or just wrong. But I have to endorse their belief in the possibility of "superintelligence, superlongevity, superabundance", as you put it, as a consequence of scientific understanding of the brain, the gene, and the atom, respectively.

Since your focus is on Actually Existing Superlative Futurology, you engage more with the cranks and visionaries who right now think they can see a clear path to transhumanity, and not so much with the broader question of whether such things are in principle possible or impossible. Still, the implicit judgment I get from you is: impossible. And I disagree, obviously.

Mitchell said...

(continued)

Whatever hype and exaggeration may surround the unfinished scientific models of today, in the long run we are going to have an understanding of life and mind that stretches from the basic molecules all the way up to the recursive intricacies of intersubjectivity, without cheating, with nothing hidden, and with all the depth and awesomeness of the truth out in the open. And since human life and human mind are not divine archetypes fashioned complete and perfect from their first instantiation, but rather highly contingent structures produced by a blind process of ruthless competition, and since the material processes which produce them will not only be conceptually accessible to us but also materially accessible, capable of being modified and redesigned - it just seems incredibly unlikely that we can't do better, once we know what we're doing.

The real problem with today's recipes for transhumanity is just that we don't yet know what we're doing; we increasingly have the capacity to conduct experiments which meddle with or imitate those basic processes, but we don't have a lucid understanding of the consequences. The role of science with respect to transhumanism is not to debunk the concept in its totality and for all time, but rather to help us make a better, reality-based transhumanism, mostly by clearing away wishful thinking about how transhumanity might be attained by some simple formula.

Martin said...

I was referring to the use of the word "cult", which you by your own admission don't use according to a strict dictionary definition. But I agree that futurology makes claims that are not well defined, can't be tested empirically, and don't constitute science. Which is why so many scientists are skeptical of transhumanist claims (re the Technology Review challenge).

Martin said...

BTW, it's noteworthy that the main objection of the reviewers in the TR challenge was that SENS is so speculative that it can't be evaluated scientifically. In other words, it doesn't constitute science.

Martin said...

Sorry for the multiple posts, but a great example of something that's poorly defined in transhumanism is the Singularity itself.

Dale Carrico said...

Transhumanists aren't scientists any more than are boner pill hucksters or financial fraudsters peddling -- in highly technical terminologies -- bundled debts transubstantiated into sound investments. Neither are the prophetic and priestly utterances about superlongevity, superintelligence, and superabundance scientific hypotheses any more than is the promise of a priest that faith in the blood of Christ is the key to eternal paradise or a used car salesman's promise that a blood-red sports car will make a tired pudgy boring stock broker youthful and sexy.

The problem with transhumanists isn't that we don't know what the future will be, but that transhumanists are conducting themselves in the present; they are responding symptomatically in the present; their effects are present-effects; and it isn't hard at all to know what is afoot with them, it is in fact clear as day.

If superlative futurology were fumigated of all its exaggeration, hyperbole, excess, and techno-transcendentalizing mumbo-jumbo it would just turn into conventional progressive scientifically-literate advocacy for a harm-reduction policy model for healthcare, drug policy, and policing, advocacy for increased education spending and science research, the eschewal of panoptic models for network and software security, more stimulus for renewable energy, mass transit, reforestation, and polyculture, and so on.

That's just boring sensible indispensable mainstream-legible social democracy focused on technoscience issues. Nobody ever had to (nor ever did nor ever will) join a Robot Cult to advocate anything sensible.

That's not the draw of the "transhuman," quite clearly not -- it isn't about science, it's about the derangement of science and development policy in the service of infantile reassurance provoked by the distress of ineradicable human finitude (mortality, dis-ease, error, humiliation, loss, precarity). At any rate, that's how I see it.

Dale Carrico said...

Hi, Martin -- many good points. I did understand that you referred to my derisive use of the term "cult" in particular. What I didn't understand was why a reaction to just that word would be adjudicated in terms of scientific strictness when otherwise transhumanists, singularitarians, techno-immortalists, nano-cornucopiasts, et al -- although forever loudly handwaving about their superior scientificity -- are clearly engaging in effusively rhetorical, ideological, transcendentalizing discourses that seem very much more my neck of the woods as targets of analysis than proper science.

jimf said...

Robin Zebrowski weighs in on her blog.
http://www.firepile.com/robin/?p=556

This isn’t really how criticism works
October 18, 2010 at 2:23 pm

I. . . noticed that Humanity+’s undercover branch the IEET is sponsoring
a 1-day workshop. . . called “The Problems of Transhumanism.” I assumed
that meant there would be a critical eye cast upon the (sadly, many)
real problems in both theory and practice with “transhumanism,” but
then I saw the speaker schedule. At least 4 of the 9 speakers are either
on the board of H+ or IEET, known cheerleaders for the cause, and for
all I know as many as 8 of them are (I recognized one name as a known
“bioconservative,” so he’s unlikely to have an affiliation.)

This is not how criticism of a movement or theory works. You don’t
get the board members to market their position while masquerading it
as an academic conference. You also don’t temper the self-promotion
with people (a person?) whose views are so strongly ideologically
opposite as to almost be a straw man parody of the view that’s
supposed to be under scrutiny. I’m profoundly disappointed to see
that very little actual criticism is likely to occur at this “workshop”. . .
I’ve seen enough discussion of some of these topics in academic circles
to know they *could* have found unaffiliated people to do this work,
so what it really means to me is that they didn’t want to. . .

[T]hey unceremoniously fired their friendly critic **because** he
was doing criticism. . . [Emphasis mine. There's a link to
http://amormundi.blogspot.com/2008/03/unperson.html ]

-------------------

Ah, well. So what else is new?

"Humans are inclined to evaluate the world around them with some
bias. . . [O]rganized religion, particularly Mormonism, tends
to reinforce and exploit them. An understanding of human biases
was tremendously helpful to me in my recovery period from the
Mormon groupthink. . ."

_Standing for Something More: The Excommunication of Lyndon Lamborn_
http://www.amazon.com/Standing-Something-More-Excommunication-Lamborn/dp/1438947437

Chapter 10, "Cognitive Human Biases"
p. 87

Irving Janis [in _Victims of Groupthink_] devised eight symptoms
that are indicative of groupthink:

1. Illusions of invulnerability
creating excessive optimism and encouraging high risk taking

2. Rationalizing warnings
that might challenge the group's assumptions

3. Unquestioned belief
in the morality of the group, causing members to ignore
the consequences of their actions

4. Stereotyping
those who are opposed to the group as weak, evil, disfigured,
impotent, or stupid

5. Direct pressure
to conform placed on any member who questions the group,
couched in terms of "disloyalty"

6. Self censorship
of ideas that deviate from the apparent group consensus

7. Illusions of unanimity
among group members, silence is viewed as agreement

8. Mindguards
self-appointed members who shield the group from dissenting
information"

Chapter 8, "Mind Control, Part 2"
p. 80

"The reader is. . . left to decide if Mormonism qualifies as a
destructive cult based on the evidence presented. . . Steven Hassan
[_Releasing the Bonds_] clarifies the cult judgment criteria:

'It is not necessary for every single item on [the] list to be
present. Mind-controlled cult members can live in their own
apartments, have nine-to-five jobs, be married with children, and
still be unable to think for themselves and act independently.'"

Martin said...

Dale: Well, obviously it's convenient to speak in generalities for yourself but demand rigidity from your opponents.

Mitchell:

"I agree that some of their characteristic notions about reality and the future will prove to be naive or just wrong."

And yet so many of them are organizing their lives around ideas that are incredibly speculative. We know that rationality isn't just about being right, but about having confidence that scales with the evidence.

Your greatest proponent of Rationality suffers from this irrationality more than anyone. You wave it off as passion and idealism.

"Whatever hype and exaggeration may surround the unfinished scientific models of today"

You're calling transhumanist ideas "unfinished scientific models"? Real scientists have concluded that they are so incomplete that they aren't scientific.

Think of it this way: it is as unscientific to claim that a Singularity or radical longevity will happen as it is to claim that aliens exist in the Andromeda galaxy. It's certainly physically possible, but has no scientific basis whatsoever.

"The real problem with today's recipes for transhumanity is just that we don't yet know what we're doing"

Which is why they remain idle fantasies.

Martin said...

"it is as unscientific to claim that a Singularity or radical longevity will happen as it is to claim that aliens exist in the Andromeda galaxy."

Also, btw, this means that it is irrational to organize your life around the belief that the aliens will come in 2029 or 2045, or to engage in a "research program" to build a better radio telescope to communicate with the Andromedans. But this is what transhumanists are doing.

Dale Carrico said...

Martin gets the gold star today.

Michael Anissimov said...

So sad, Martin. The people really pursuing the Singularity don't predict a specific date. I know dozens of them. Only one book gives those dates, no one takes them that seriously.

What's your issue? There's something motivating you you aren't telling us about.

Dale Carrico said...

Cultist has claws?

jimf said...

Martin wrote:

> [A] great example of something that's poorly defined in
> transhumanism is the Singularity itself. . .
>
> [I]t is irrational to organize your life around the belief
> that the [Singularity] will come in 2029 or 2045. . .

And Michael Anissimov replied:

> The people really pursuing the Singularity don't predict a specific date.
> I know dozens of them. Only one book gives those dates, no one takes them
> that seriously.

Y'know, I've been reading Lyndon Lamborn's
_Standing For Something More: The Excommunication of Lyndon Lamborn_
http://www.amazon.com/Standing-Something-More-Excommunication-Lamborn/dp/1438947437

Toward the end of the book, Lamborn mentions the late
Gordon B. Hinckley, the 15th prophet of the LDS (Mormon) church.
http://en.wikipedia.org/wiki/Gordon_B._Hinckley
and how he managed to finesse and fuzz out the more controversial
doctrines of Mormonism when he was asked about them
in public.

"His life's work and legacy may be summed up by recounting some
revealing moments in his life. The reader is left to draw his/her
own conclusion. . .

Hinckley comments on a key doctrinal question [in an interview
with the religion writer of the _San Francisco Chronicle_]

Q: There are some significant differences in your beliefs. For
instance, don't Mormons believe that God was once a man?

A: I wouldn't say that. There was a couplet coined, 'As man is,
God once was. As God is, man may become.' Now that's more of a
couplet than anything else. That gets into some pretty deep
theology that we don't know very much about. . .

Q: Is this the teaching of the church today, that God the Father
was once a man like we are?

A: I don't know that we teach it. I don't know that we emphasize
it. . .

Compare these responses to the actual teachings of Joseph Smith. . .

'God himself was once as we are now, and is an exalted man,
and sits enthroned in yonder heavens! That is the great secret.
If the veil were rent today. . . you would see him like a man
in form -- like yourselves in all the person, image and very
form as a man. . .'

It is also clear that this doctrine is still taught today. The first
chapter of the 1992 edition of the Latter-day Saint teaching
manual. . . quotes directly from the above passage."

PR was so much easier to spin before the Web!

Martin said...

Michael: Out of everything I said, that is what bothered you?

And your response is to accuse me of having secret motives? Yeah, Leon Kass is paying me. :)

jimf said...

So I happened to land on Russell Blackford's blog article
from a couple years ago
"Transhumanism still at the crossroads"
(April 22, 2008)

And I noticed an amusing remark in the comment thread:

http://metamagician3000.blogspot.com/2008/04/transhumanism-still-at-crossroads.html?showComment=1209694080000#c6143086532518228180

Michael Anissimov said...

I am skeptical that there are really so many people which have
"overconfident faith in magical self-modifying AI", as Robin Hanson
argues. If they exist, can we find a few quotes? The only person
that comes to mind edging in that direction is Hugo de Garis.

As a S^ transhumanist, I welcome having my beliefs double-checked
and moderated by non-S^ transhumanists (and practically anyone who's
interested). . .

---------------------------

Now see, I could've sworn that "magical self-modifying AI"
was going to be the **engine** of the "S^".

But maybe that was, uh, more of a couplet than anything
else, and gets into some pretty deep theology that we don't
know very much about. . . Or something. I'll have to
consult Minitrue Recdep and get back to you.

(God, I'm glad I don't work in PR.)

Mitchell said...

Dale said

"Transhumanists aren't scientists... Neither are the prophetic and priestly utterances about superlongevity, superintelligence, and superabundance scientific hypotheses..."

Some transhumanists *are* scientists, of course. But transhumanism isn't science, it's an anticipation of technologies made possible by science.

The idea of landing on the moon wasn't exactly a scientific hypothesis, either. It was an engineering hypothesis, but it became thinkable because of science, and it was empirically verified by the act itself. The same may be said of the various superlatives, except that they haven't been "verified" yet.

"If superlative futurology were fumigated of all its exaggeration [etc]... it would just turn into... boring sensible indispensable mainstream-legible social democracy focused on technoscience issues...

"[T]he draw of the "transhuman" ... [is] about the derangement of science and development policy in the service of infantile reassurance provoked by the distress of ineradicable human finitude (mortality, dis-ease, error, humiliation, loss, precarity)."

I can almost agree with this last statement - and this is one reason why I think the critique of superlativity has something to teach transhumanists - but there's no way that the limits of the possible fit within the confines of "boring mainstream social democracy". The political culture of a social democracy would have to become radically futurist by current standards (or explicitly luddite and anti-futurist) if it were to meet the triple challenge of artificial intelligence, nanotechnology, and outer space, while still hanging on to its political forms. As I said, we are increasingly in a position to conduct utterly unprecedented existential experiments, creating new forms of life and mind which may be our evolutionary successors, and that is a situation and a responsibility for which almost no-one is prepared.

Mitchell said...

Martin said

"it is as unscientific to claim that a Singularity or radical longevity will happen as it is to claim that aliens exist in the Andromeda galaxy. It's certainly *physically possible*, but has no *scientific basis* whatsoever."

The basis for such claims is somewhat different. In the absence of direct evidence, someone who says there must be life in Andromeda is making a guess about how often life develops in the universe, e.g., at least once per galaxy. Such estimates are highly speculative, but we do at least know that life is physically possible, because it exists right here on Earth.

On the other hand, we have no such existence proof for the physical possibility of superintelligence and superlongevity, but if they *are* physically possible, then isn't it extremely likely that humanity will aim to achieve them? For the superlative conditions, whether or not they are possible really is the crux of the debate. As I said to Dale just now, they are engineering hypotheses, not scientific hypotheses, but the plausibility of an engineering hypothesis usually depends on its consistency with science.

In my book, the case for superlongevity rests primarily on our demonstrated capability to gradually understand how living matter works, and our demonstrated capability to intervene in its processes even at the most elementary level. The long-range implication is that we will be able to recreate in an old body the conditions which originally produced a young body.

As for superintelligence, I believe it's possible because of the theory of algorithms in computer science, the properties of computer hardware (speed) and software (exactness, duplicability, analysability), and the evidence from cognitive and computational neuroscience that *human* intelligence also has an algorithmic basis (e.g. that we recognize objects because our brains perform highly specific transformations and categorizations). I believe consciousness plays a role in our cognition but has not been a feature of any artificial computer so far, so there are a few conceptual breakthroughs still to be made in this area, but what can be achieved just with unconscious computation is already enough for me to expect an intelligence "singularity".

Dale Carrico said...

Liking science fiction doesn't make you an engineer any more than it makes you a scientist -- especially if you can't tell the difference between science and science fiction.

Dale Carrico said...

In a surprise move, AI dead-ender mis-identifies organismic brain as a digital computer then offers this confusion as evidence that a digital computer can become intelligent. I do wish futurologists would preface their remarks with -- stop me if you've heard this one before. Because we always have. So, you know, stop.

Martin said...

Mitchell: That's nice, but I think you missed the thrust of my argument, re currently practicing transhumanists. The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today (ie, being a self-professing and practicing transhumanist).

There is a nonzero probability that a technologically advanced civilization lives in the Andromeda galaxy and that they will visit us in my lifetime, therefore constituting a different kind of Singularity. But I have no reason to believe that claim, and I certainly have no rational justification for organizing my life around that expectation.

If you are an activist for transhumanism, if you write books or blogs advocating transhumanism, if you are a member of a transhumanist organization, certainly if you hold an office in such an organization, or if you do "research" ostensibly in pursuit of specific transhumanist goals, then you are to one extent or another organizing your life around certain expectations, and you must believe those outcomes are imminent -- ie, they will happen in your lifetime, and probably sooner rather than later. It is this lifeway which I claim to be irrational and unjustified (not just idle speculation about these things), because you have no idea when (or if) that stuff will happen.

So you have two options. Either transhumanists are merely engaging in idle speculation -- some fun brainstorming about what the future could be like -- in which case transhumanism is a futurist fan club, or they are seriously pursuing "transhumanist goals", in which case they are blinded by an irrational hope and faith in the imminent technological transformation of society. Which camp do you belong in?

Luke said...

"algorithmic" not "digital"

Impertinent Weasel said...

"Pink" not "purple" flying unicorns.

Mitchell said...

Dale said

"AI dead-ender mis-identifies organismic brain as a digital computer then offers this confusion as evidence that digital computer can become intelligent."

I chose my words carefully. A brain doesn't learn, retrieve memories, plan an action, or produce a sentence just by being organismic. The performance of complex abstract tasks requires complex abstract methods, and the brain accomplishes its basic cognitive tasks by specific computational methods. Those methods are implemented organismically, but the *reason* why the neurons get the job done is because they are performing some appropriate algorithm, such as temporal difference learning or Gaussian derivative filtering.
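
For concreteness, the first of those algorithms, temporal difference learning, fits in a few lines of Python. This is a standard tabular TD(0) toy, nothing brain-specific; it only illustrates the kind of input-to-output transformation being talked about:

```python
# Minimal sketch of temporal difference learning (tabular TD(0)).
# The estimate for a state is nudged toward the "bootstrapped" target:
# the observed reward plus the discounted estimate of the next state.

def td0_update(value, state, reward, next_state, alpha=0.1, gamma=0.9):
    target = reward + gamma * value[next_state]
    value[state] += alpha * (target - value[state])
    return value

# Toy two-state chain: state 0 yields reward 1.0 and leads to state 1.
values = {0: 0.0, 1: 0.0}
for _ in range(100):
    td0_update(values, state=0, reward=1.0, next_state=1)
# values[0] approaches reward + gamma * values[1] = 1.0
```

The point of the sketch is only that "learning from a prediction error" is a precise, executable idea, not a metaphor.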

Consciousness and intentionality pose a profound challenge to science, but there is an unconscious mechanistic level of intelligence for which an analysis in terms of algorithms is entirely appropriate; and even the role of consciousness, on a formal and causal level, must admit a similar characterization. Whatever functional role consciousness plays in cognition, it is able to play that role because it has the right cause-and-effect relationship to the unconscious processes. Even conscious cognition must admit an algorithmic or meta-algorithmic description - that is, it must achieve what it does in a particular way.

Mitchell said...

Martin said

"The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today"

Surely it's possible to be irrational about it. But hello, we already share the world with giant distributed AIs which perform pattern matching tasks (search engines) and with organisms that were grown from a blueprint assembled artificially (Venter's microbe). How extreme do things have to get before it finally becomes rational to take it personally?

jimf said...

> So you have two options. Either transhumanists are merely engaging
> in idle speculation -- some fun brainstorming about what the future
> could be like -- in which case transhumanism is a futurist fan club,
> or they are seriously pursuing "transhumanist goals", in which case
> they are blinded by an irrational hope and faith in the imminent
> technological transformation of society.

It gets worse than that. Not only is "seriously pursuing 'transhumanist
goals'" a case of being "blinded by an irrational hope and faith
in the imminent technological transformation of society", it:

1. Creates a fertile field for "gurus" who claim to have
the knowledge to lead the way to this technological transformation
to arrogate power and publicity for themselves, independently
of whether a sober observer would consider their qualifications
and achievements deserving of that kind of influence.

2. Tempts the naive and starry-eyed followers of these gurus to
abdicate independent thought and judgment, in the name of saving
the world (and also thereby saving themselves -- from death,
from uncertainty and fear, from meaninglessness, from
loneliness, from sheer boredom, whatever).

3. To the extent that the orthodoxies that the guru(s) pump out
are taken seriously by followers and promulgated as an
ideology among the general public, the **actual science**
that may be taking place in the fields "commandeered" by
the gurus and their enthusiastic followers,
and the understanding of that science among the general
public, together with the political processes that fund
and allocate resources to scientific research, stand to be distorted and
skewed by the ideologically-toned certainties prematurely
touted as "rationality" by the gurus. To that extent, the activities
of the transhumanists may actually be **counterproductive**
to their ostensible goals. Anybody, for example, beginning
serious study of a field involving the workings of the human brain
and mind has to pass through (and overcome the actively
misleading distractions of) a gauntlet of noise churned
out nowadays by the transhumanists (among
others). Ask Robin Zebrowski!

4. To the extent that the breathless discourse surrounding
these things (securing eternal life, facilitating the
spread of "superintelligence" throughout the universe,
saving the world, etc.) whoops up both irrational hopes
and fears, it's an invitation to fanaticism. In the
extreme, who cares about harassing, or even knocking off,
a few people if the stakes in the success or failure of
the movement are transcendental? Worked for the Mormons,
works for the Scientologists (both of whose origins
had science-fictional overtones, though the former
happened in the 1820s before the literary genre had
been invented, whereas the latter in the 1950s was
explicitly involved with the SF community, much
like transhumanism today), works for countless
other more penny-ante cults.

5. Guru-led cults **always** turn authoritarian and anti-
progressive. And they are ripe for appropriation and
manipulation by incumbent interests, as Dale is
forever pointing out. Transhumanism is no different
in this regard.

Luke said...

IW: Seriously?

Martin: If your argument is about how stimulating ideas (gods, aliens, superintelligent robots, etc.) can skew probability estimates, I agree. Just being fun/scary to think about doesn't make the aliens more likely to land. However the weirdness of the subject doesn't make it *less* likely either.

Dale Carrico said...

Mitchell, who "chooses his words carefully": hello, we already share the world with giant distributed AIs.

It has always seemed to me that the primary impact of the pointless and over-eager over-application of the term "intelligence" to that which is not is to render us all ever more insensitive to the richness of experience and actual concomitant demands of the precious beings who are.

AI discourse produces especially in its advocates, but also in the cultures in which its frames and figures become prevalent, nothing short of a kind of widespread artificial imbecillence.

From a related Futurological Brickbat: XXXI. Computer science in its theological guise aims less at the ultimate creation of artificial intelligence than in the ubiquitous imposition of artificial imbecillence.

Dale Carrico said...

Superlative Futurological discourses are not just "fun" "scary" idle speculation. Else, transhumanists, singularitarians, techno-immortalists, nano-cornucopiasts, and the rest would admit they are simply a kind of science fiction fandom (perhaps a fandom fixated on that lamest and least demanding genre of science fiction, pop technoscience/futurology) rather than peddle themselves as engaged in techno-transcendentalizing variations of serious science or serious developmental policy discourse.

Superlative Futurology is, of course, an ideological formation with an undeniably theological coloration, an extreme form of the prevailing, blandly fraudulent futurological marketing/promotional discourse that suffuses neoliberal-neoconservative global developmentalism.

It is in its sub(cult)ural organization as a defensive marginal "identity-movement" with tendencies to underqualified pseudo-scientific enthusiasms, pseudo-science peddled to True Believers by guru wannabes that the superlative futurologists are vulnerable around the edges (to be generous) to derisive charges of cultishness.

My "Condensed Critique of Transhumanism" is here if you want reminding of it.

jimf said...

> Mitchell, who "chooses his words carefully": **hello, we already
> share the world with giant distributed AIs**.

Shaun Farrell interviews Vernor Vinge
http://www.farsector.com/quadrant/interview-vinge.htm

SF: You do have a character in _Rainbows End_ called the Rabbit,
or Mysterious Stranger, and it’s hypothesized by some of the characters
in the book that it could be an AI, but you never state explicitly
whether or not it is.

VV: That’s true.

------------------------

Compared to what readers assume to be meant by "AI" in the context of a Vernor Vinge
(or any other SF author's) novel (or for that matter in the context of Dr. Vinge's
1993 intendedly non-fiction "The Coming Technological Singularity: How to Survive
in the Post-Human Era"
http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html )
or indeed in Mitchell Porter's TRANSHUMANISM AND THE SINGULARITY
http://transtopia.tripod.com/semper.html
(and I'm willing to bet that the author of that essay is
the same "Mitchell" as the one contributing to this comment
thread), who remarks in that essay,

"The ability to put atoms where you want to has major consequences. . .
These include. . . 'Santa Claus machines' which will make anything
possible upon request (growing an android or a starship in your backyard. . .)
[which could] in turn could lead to abundance and long life for all. . .
superhuman artificial intelligence, and dangers worse than nuclear warfare. . .",

a more accurate description of the present might be (echoing "Mitchell"),
"Hello, we already share the world with giant distributed adding
machines."

This sort of equivocation surrounding the term "artificial intelligence", the
alternation ad libitum between (a) the dubious application of the phrase to
current technologies -- telephone networks, or digital computers, or
even networked digital computers, and (b) the putative non-fictionalizing
of science-fictional tropes exemplified by the Vinge and Porter essays, is (in the most
generous construal) symptomatic of genuine confusion on the part
of those who switch(eroo) the usage in this way, or at worst
a deliberate attempt at flim-flam.

It has its roots in the 40s and 50s journalists who described early
(and monstrous in both size and expense) digital computers (simulations
of which, ironically, will fit in the tiniest corner of a modern
consumer PC, an appliance which no sane person would consider "intelligent"
in the usual sense of the word) as "thinking machines".

The tendency was further lambasted in the 60s when Joseph Weizenbaum
wrote "Eliza" to demonstrate how easy it is to bamboozle naive people
into attributing "intelligence" to a fairly unsophisticated automaton.
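
A toy version of Weizenbaum's demonstration fits in a dozen lines. This sketch (a generic regex responder, not Weizenbaum's actual script) shows how little machinery the bamboozlement requires:

```python
import re

# ELIZA-style responder: match a pattern, "reflect" pronouns, echo back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)"), "Please tell me more."),  # catch-all
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am feeling lonely"))
# How long have you been feeling lonely?
```

No model of the world, no understanding -- just string substitution, which was exactly Weizenbaum's point.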

It is not helping "Mitchell"'s argument.

Mitchell said...

I notice that no-one has chosen to dispute or otherwise comment on my observation that the human brain gets things done, not just by virtue of being "organismic" (or embodied or fleshy or corporeal), but because its constituent neurons are arranged so as to perform elaborate and highly specific transformations of input to output, which correspond to specific cognitive functions like learning and memory, and which, at the mathematical level of description, fall squarely within the scope of the subfield of theoretical computer science which studies algorithms.

Under other circumstances, I'd be happy to have a freewheeling discussion about the subjective constitution of imputed intentionality in the practice of programming, or the right way to talk about the brain's "computational" properties without losing sight of its physicality, or exactly why it is that consciousness presents a challenge to the usual objectifying approach of natural-scientific ontology.

But however all that works out, and whatever subtle spin on the difference between natural and artificial intelligence best conveys the truth... at a crude and down-to-earth level, it is indisputable that the human brain is full of specialized algorithms, that these do the heavy lifting of cognition, and that such algorithms can execute on digital computers and on networks of digital computers.

That is why you can't handwave away "artificial intelligence" as a conceptual confusion. If you want to insist that the real thing has to involve consciousness and the operation of consciousness, and that this can't occur in digital computers, fine, I might even agree with you. But all that means is that the "artificiality" of AI refers to something a little deeper than the difference between being manufactured and being born. It does not imply any limit on the capacity of machines to emulate and surpass human worldly functionality.

Dale Carrico said...

My response to Mitchell was too long for the Moot, so I'll post it along with his comment on its own.

jimf said...

Mitchell wrote:

> [Y]ou can't handwave away "artificial intelligence"
> as a conceptual confusion.

The conceptual confusion mentioned above has to do with
the equivocation between the phrase "artificial intelligence"
as used in science fiction stories and among transhumanists
"entre eux", and the phrase **very** loosely (and
misleadingly) applied to, say, what Google does.

> If you want to insist that the real thing has to involve
> consciousness and the operation of consciousness. . .

Nobody said this, but it does seem likely to me (a nonexpert) that
any entity exhibiting anything like "intelligence"
in the (admittedly imprecise) usual meaning of the word
would also be likely to have "consciousness" (an equally
imprecise term, despite the efforts of generations of
philosophers) imputed to it, even by sophisticated observers.

But that point is certainly not central to **my** "beef"
with the >Hists.

> . . .and that this can't occur in digital computers. . .

It almost certainly can't occur in anything with "Intel Inside",
or in IBM's Blue Gene, or in anything else currently on
the drawing boards or even in anything remotely on the
horizon. It's a difference (from the run-of-the-mill
>Hist breathlessness on the topic) that makes a difference.

> . . .fine, I might even agree with you. But all that means
> is that the "artificiality" of AI refers to something a little
> deeper than the difference between being manufactured and
> being born. It does not imply any limit on the capacity of
> machines to emulate and surpass human worldly functionality.

Ah, now we're talking about "machines" (meaning, presumably,
some future technology of an unspecified nature, rather than **digital
computers** as we currently know and love them).

A quote from my archive:

"[Are] artifacts designed to have primary consciousness...
**necessarily** confined to carbon chemistry and, more specifically,
to biochemistry (the organic chemical or chauvinist position)[?]
The provisional answer is that, while we cannot completely
dismiss a particular material basis for consciousness in the
liberal fashion of functionalism, it is probable that there will
be severe (but not unique) constraints on the design of any
artifact that is supposed to acquire conscious behavior. Such
constraints are likely to exist because there is every indication
that an intricate, stochastically variant anatomy and synaptic
chemistry underlie brain function and because consciousness is
definitely a process based on an immensely intricate and unusual
morphology"

-- Gerald M. Edelman
_The Remembered Present_, pp. 32-33
http://www.amazon.com/Remembered-Present-Biological-Theory-Consciousness/dp/046506910X

jimf said...

Some more quotes:

"At every stage of technique since Daedalus or
Hero of Alexandria, the ability of the artificer
to produce a working simulacrum of a living
organism has always intrigued people. This desire
to produce and to study automata has always been
expressed in terms of the living technique of
the age. In the days of magic, we have the bizarre
and sinister concept of the Golem, that figure of
clay into which the Rabbi of Prague breathed in
life with the blasphemy of the Ineffable Name of
God. In the time of Newton, the automaton becomes
the clockwork music box, with the little effigies
pirouetting stiffly on top. In the nineteenth
century, the automaton is a glorified heat engine,
burning some combustible fuel instead of the glycogen
of the human muscles. Finally, the present automaton
opens doors by means of photocells, or points guns
to the place at which a radar beam picks up an
airplane, or computes the solution of a differential
equation."

-- Norbert Wiener, _Cybernetics_ (1948)

--------------------------------------

> But computers are not just another technology; they are a new paradigm, a new
> way of thinking about the relationship between humans and nature.

No, actually, they're just another technology.

Comparing minds to computers is a metaphor. In the 18th century, they used to
compare human beings to clockwork. That was a metaphor, too.

SF often 'literalizes metaphors'; however, one should avoid doing this in real
life as it leads to misunderstanding."

-- S. M. Stirling, on Usenet
http://www.google.com/groups?selm=20000330040451.03525.00005408%40ng-cm1.aol.com

--------------------------------------

"Ethics of Human Speciation: Sapience, Intolerance and Volitional Freedom",
by Reilly Jones ( http://home.comcast.net/~reillyjones/speciation.html ):

"Humans-as-Computer Metaphor is Appearance not Reality

The idea that we are hepped-up computers is strictly fashion.
The scientific community has a long and somewhat vain history of
picking whatever technological marvels are current to be the model
of human consciousness; from clocks to heat engines to cybernetic
feedback loops to powerful CPUs. The more historical overview you
can achieve of the Western scientific enterprise, the more silly
this tendency looks."

--------------------------------------

Usenet curmudgeon Mikhail Zeleny,
replying to Fiona Oceanstar, in
http://groups.google.com/groups?selm=1991Nov15.160741.5495%40husc3.harvard.edu

From: Mikhail Zeleny (zeleny@walsh.harvard.edu)
Subject: Re: Daniel Dennett (was Re: Commenting on the posting
Newsgroups: rec.arts.books, sci.philosophy.tech, comp.ai.philosophy
Date: 1991-11-15 14:16:41 PST

> I read enough mind-brain books, that I'd like to
> hear other people's guidelines for telling the wheat
> from the chaff.

My guideline is very simple: if you see someone offer a reductive argument
purporting to explain the properties of mind, such as consciousness,
cognition, and intentionality, in terms of the alleged computational
properties of the brain, you may conclude that he is a charlatan or an
ignoramus. This conclusion might be justified historically, by observing
the earlier attempts to explain the functioning of human mind by reference
to the capabilities of the dominant contemporary technology (e.g. clockwork
mechanisms, chemistry, steam engines, etc.). . .

Dale Carrico said...

Hey, Jim -- you may want to re-post these observations and arguments under the new post I created for Mitchell's comment, since I think this deep down the blog-scroll and Moot few are still reading, but more will likely benefit from your points when they are prominent on a fresh posting.