Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, March 07, 2012

Ten Things You Must Fail To Understand If You Want To Be A Transhumanist For Long

One: Enjoying science fiction is not the same thing as doing science or making science policy.

Two: Indulging in wish-fulfillment fantasies is not the same thing as analysis.

Three: Extrapolating from speculations and stipulations mistreated as data will yield serially failed predictions, none of which amounts to foresight.

Four: There is nothing brave or useful or distinguished or progressive about saying magic would be cool if it were real, especially since there are so many real problems and real possibilities in the world that need all our bravery, pragmatism, special effort, and progressive struggle.

Five: Promoting as “experts” people with no training in actual professional or academic disciplines, celebrating the “genius” of high-tech billionaires of no real distinction, who have simply appropriated the invention and effort of countless uncelebrated others, and providing rationalizations for the “indispensability” of corporate-military elites who will presumably deliver us medical immortality, offer us nano-abundance, geo-engineer away our environmental catastrophes, and code for us perfect software god parent-substitutes, is not even remotely the same as having real thoughts, doing true philosophy, or making serious policy.

Six: Subcultures that remain very static, very small, very marginal, very megalomaniacal, and very defensive tend to look and conduct themselves more like cults than subcultures.

Seven: People who buy a Volkswagen, an Apple computer, or Diesel jeans aren’t actually joining a political movement no matter what advertising executives say to the contrary, nor are people who watch BSG marathons, write Janeway shipper fanfic, work on a Steampunk casemod, or enjoy CLAMP cosplay actually engaging in political agitation no matter how personally resonant and edifying their experiences may be, or how interesting to ethnographers, nor are people who are invested in “The Future” of the futurologists -- which amounts in some respects precisely to such marketing phenomena and in others precisely to such fandom phenomena -- really joining or sustaining a political movement or engaged in political agitation in any remotely serious way.

Eight: “The Future” is not Narnia, it is not Middle Earth, it is not the United Federation of Planets, it is not Hogwarts, it is not Heaven, it is not Hell -- it will be a shared present attesting to stakeholder struggle just as this present is.

Nine: What we mean by life happens in biological bodies, what we mean by intelligence happens in biological brains in society, what we mean by progress happens in historical struggles among the diversity of living intelligent beings who share the present -- and to say otherwise is not to be interesting but to be idiotic.

Ten: We are all vulnerable, we are all promising, we are all more ignorant than we need to be, we are all more capable than we can know, we are all error-prone, we are all interdependent, we are all subject to chance, and we are all going to die.

11 comments:

jimf said...

> One: Enjoying science fiction is not the same thing
> as doing science or making science policy.

Oh, Dale, that's sooo '00s.

Let's get quantitative!

From (another list at)
http://www.acceleratingfuture.com/michael/blog/2009/12/singularity-institute-for-artificial-intelligence-2009-accomplishments/
---------------------
Singularity Institute for Artificial Intelligence
2009 Accomplishments
Saturday, Dec 26 2009

. . .

8. In December, a subset of SIAI researchers and volunteers finished
improving The Uncertain Future web application to officially announce
it as a beta version. The Uncertain Future represents a new kind
of futurism — futurism with heavy-tailed, high-dimensional probability
distributions. The purpose is to provide a tool for use by futurists
and the informed public to input probability distributions over
quantitative questions like, “how much computing power would be
necessary to implement neuromorphic AI?”, combining them into a
“picture of the future according to you”. Another goal of the project
is to provide an alternative to the futurist methodologies of
storytelling and scenario building, which dominate the field even
though they often cause futurists to overestimate the probability
of precise, vivid stories at the expense of a wider space of
neglected possibilities.
---------------------

So what's your input to the "picture of the future according to you"?
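
For anyone wondering what that sort of "combining" of heavy-tailed probability distributions could actually cash out as, here is a minimal toy sketch in Python. Every distribution, number, and variable name below is invented purely for illustration; it is emphatically not the actual Uncertain Future code.

---------------------
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000  # Monte Carlo samples

# All figures here are made up for illustration only. Suppose the user
# encodes two heavy-tailed beliefs as log-normal distributions over
# quantities measured in FLOP/s:
flops_needed = rng.lognormal(mean=np.log(1e18), sigma=2.0, size=N)           # compute needed for neuromorphic AI
flops_affordable_2040 = rng.lognormal(mean=np.log(1e17), sigma=1.5, size=N)  # compute affordable by 2040

# "Combining" the two into a picture of the future: the fraction of samples
# in which the affordable compute meets or exceeds the required compute.
p = np.mean(flops_affordable_2040 >= flops_needed)
print(f"P(enough computing power by 2040, according to you) = {p:.2f}")
---------------------

Feed in different medians and spreads and you get a different "picture of the future according to you" -- which is rather the point.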

jollyspaniard said...

I've just reread this post and I don't find it mean-spirited as far as debunkings go. I can't imagine many people getting hurt feelings over it.

If reading that post bothers someone and gives them food for thought, then you've done something admirable.

Anonymous said...

I find number 9 to be very, very unsubstantiated. I don't see why intelligence can't happen electronically. Long ago, we would have thought "life" happens only on Earth, because it was all we could imagine then. If you think life has to be biological (as opposed to electronic), please say why. Calling people who disagree "idiotic" isn't an explanation.

Dale Carrico said...

The burden of substantiation falls on the one who wants to say there certainly is a reference for non-biological intelligence or extra-terrestrial life. Imagining otherwise isn't the same thing as providing a factual counterexample. Futurologists seem to have no end of trouble with this. As for what I am willing to entertain as logically possible in the way of differently materialized intelligence, and what impact such meditations seem to me to have on our thinking about actually-existing lifeways, you might be interested in this piece of mine.

Anonymous said...

This comment has been removed by the author.

Anonymous said...

Dale, do you think any of the current artificial life software produces life?

If not, is it because you adhere to the current definition of life, or because software objects that 'look kinda like they're living' are qualitatively different from 'real life' for some reason other than simply being software objects?

Dale Carrico said...

To call current software "life" is to denigrate actual life. To call current software "intelligent" is to denigrate actual intelligence. I do not deny the logical possibility of non-evolved living constructs, nor of differently-materialized intelligences, but these words have urgently real references and one should deviate from "the widely accepted definition" of each by subsuming radically different events under each only when high standards have been met. Evacuating these terms of their substance to facilitate wish-fulfillment fantasies of techno-transcendence is not progressive but absolutely reactionary. Today's champions of software "life" and artificial "intelligence" are mostly dealing in death and facilitating artificial imbecillence in my view. No true progressive can hold otherwise, at least for now.

Muhammad al-Khwarizmi said...

Where do I even begin with this essay?

On the one hand, you act like transhumanism is nothing but a neckbeard fantasy. On the other hand, you seem genuinely terrified by the things the US military is doing to expand the capacities of human and non-human agents.

In this thread, you linked to an essay about the importance of embodiment. Do you think that's lost on me, just because I'm a transhumanist? No. Nor was that point lost on the first transhumanists, the Futurists. A number of them even served in the Arditi, who specialized in hand-to-hand trench warfare. Presumably they put a high price on being in the world, and weren't neckbeards getting fat watching Star Trek re-runs.

This post is so utterly bad I can't possibly list every little thing wrong with it. To start with, the title should perhaps be changed to "Ten Things You Must Fail To Understand If You Want To Be A Less Wrong User For Long"; this is clearly not about transhumanism as a whole.

Dale Carrico said...

You will be unsurprised to hear that you are not the first transhumanoid to pout and stamp at hearing I find you ridiculous and say why in a sustained way. The Robot Cult is nonsensical in many ways on its own terms, but it is also illustrative and symptomatic of more prevalent pernicious reductionism, determinism, eugenicism, productivism, and consumerism. Even this highly schematic essay addresses your perplexity on this score -- and I have written many others that are far more elaborated than this rather introductory one, if you are actually interested in criticisms of your supremely luminously powerful futurological ideology. You will be much more convincing in your effort to demonstrate how much more sophisticated you are in your own personal transhumanism if you reveal an ability to read at a basic level.

Armands Skutelis said...

I doubt that anyone is calling current software "intelligent". They call it narrow AI in some cases, but it has nothing to do with intelligence; it's just a term. I do think, though, that intelligence can arise just from algorithms, but they would have to be far more sophisticated than what we have now. Furthermore, intelligence, as we have it in humans, relies on numerous kinds of inputs related to biological processes, so I doubt it can be copied 100% in a computer. In a way it's also a result of some "broken" or imperfectly working algorithms: we tend to break down, be overzealous, ignore some facts and put others on a pedestal... we are prone to search for patterns everywhere, faces on Mars, aliens who built the pyramids, etc. So I would agree that it would be a very hard task to replicate human consciousness in a computer, because it's inherently flawed and a jumbled mess...

Dale Carrico said...

> I doubt that anyone is calling current software "intelligent". They call it narrow AI in some cases, but it has nothing to do with intelligence; it's just a term.

The application of the term "intelligent" is what calling it intelligent means, surely? But I fancy you have stumbled on some subtlety that eludes me.

> I do think, though, that intelligence can arise just from algorithms

Your belief in that possibility is not yet an argument for it, I fear. Given that you admit it hasn't happened and could only happen if greater "sophistication" of an unspecified character happened as well, I cannot say things are looking up for your article of faith as yet. But do keep your chin up. Just because generations of cocksure True Believers have been nothing but serially wrong on this score for as long as this discourse has been around is no reason to entertain any doubts about it or suggest any qualifications of it, right?

> intelligence, as we have it in humans... [i]s also a result of some "broken" or imperfectly working algorithms: we tend to break down, be overzealous, ignore some facts and put others on a pedestal... we are prone to search for patterns everywhere, faces on Mars, aliens who built the pyramids, etc. So I would agree that it would be a very hard task to replicate human consciousness in a computer, because it's inherently flawed and a jumbled mess...

It seems strange to me to describe as "broken" the only intelligence on offer, as compared to an intelligence which doesn't exist to be broken or otherwise. But I have never been able to follow religious logic particularly well in any of its forms: They tell me god is all good even when He is bad and all-knowing even when She is all-powerful and hence should be able to do anything, including that which They don't or can't know? Oh, what a pickle! I am fairly sure mine is an intelligence too "broken" to make sense of such things.

I do want to say I am sorry that you seem to have such a low opinion of your own intelligence and that of your fellow humans. This insecurity and, sometimes, even self-disgust is commonplace among techno-transcendental futurists, I have found. I do hope that you will come to terms with your fears of and hostility to your limits as a living, error-prone, aging, mortal being -- if that is what is happening here -- as you become a more experienced adult sort of person.

There is a suggestion in your phrasing that perhaps you actually identify in some way with the non-existent machine intelligence you regard as not only possible but superior despite its non-existence -- hence you seem to describe the errors and passions and ignorance that articulate the play of human thought as rather inferior, as though you observe them from an alien or Olympian height.

Of course, the artificial intelligence futurists so dote on in its imaginary perfection often looks to be a projection of their own errors and passions and parochialisms, after all, as the objects of human faiths tend so often to be -- perhaps a futurist scared of aging and disease likes the idea of an intelligent selfhood that is not tied to the frail, vulnerable body one cannot command, perhaps a shy or thoughtful person who has been frustrated or derided in company likes the idea of an intelligent selfhood capable of compelling super-logical argumentation immune to the humiliations of emotion and error and derision from others one cannot control -- in which case I daresay we might all of us have a bit of a laugh at such human, all too human, follies as theirs and yours and (however different they may be) mine, together.