Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, August 18, 2009

Another Robot Cultist Compares Self to Wright Brothers

I know this isn't a substantive post or anything, I just can't help myself. I can scarcely count the number of times over the years in which one after another Robot Cultist sniffs disdainfully at my critique of superlative futurology and then declares some variation on the theme of: Well, they laughed at the Wright Brothers too! I mean, it happens over and over and over again. I don't think these people are citing one another's rejoinders, I suspect it is a spontaneous and symptomatic upwelling out of the pathology of superlativity itself. From the Moot, one "Extropia" (get it? it's like Extropian, you know, from the Extropian transhumanist sect founded by Ayn Raelian Robot Cultist Max More) assures me:
Your ridiculing does not bother me in the slightest, just like the pioneers of aviation were not put off by the fact that many people thought airplanes were absurd flights of fancy

Dude, neither you nor any of your little white sf fanboy friends are the Wright Brothers. You are not Leonardo or Einstein or Tesla. You are somebody's crazy uncle, you know the one, who stays in the shed out back most afternoons, railing about his genius theories, smelling of his own pee, working on his perpetual motion machine.

12 comments:

jimf said...

> You are not Leonardo or Einstein or Tesla.
> You are somebody's crazy uncle. . .

My analyst told me
That I was right out of my head
The way he described it
He said I'd be better dead than live
I didn't listen to his jive
I knew all along
That he was all wrong
And I knew that he thought
I was crazy but I'm not
Oh no. . .

They say as a child
I appeared a little bit wild
With all my crazy ideas
But I knew what was happening
I knew I was a **genius**...
What's so strange when you know
That you're a wizard at three?
I knew that this was meant to be. . .

They all laugh at angry young men
They all laugh at Edison
And also at Einstein
So why should I feel sorry
If they just couldn't understand
The idiomatic logic
That went on in my head
I had a brain
I was insane
Oh they used to laugh at me
When I refused to ride
On all those double-decker buses
All because there was no driver
on the top. . .

(What, no driver on the top? Man, this chick
is twisted! Whoop-shoobie! Flip city!)

-- Joni Mitchell, "Twisted"
(from _Court and Spark_)

Athena Andreadis said...

Don't swat at mosquitoes with SAM missiles, Dale! Waste of time, energy and valuable ammunition.

Extropia DaSilva said...

>I can scarcely count the number of times over the years in which one after another Robot Cultist sniffs disdainfully at my critique of superlative futurology and then declares some variation on the theme of: Well, they laughed at the Wright Brothers too!

And I can scarcely count the number of times that a conceptual design or an idea for a new technology is denounced as akin to a perpetual motion machine. It's the 'X does not exist today so it must be impossible' argument.

Perpetual motion machines clearly fall into the 'Uso no' imaginary-only category. Either our understanding of the laws of physics is drastically wrong, or such things are entirely ruled out.

Now, immortality is ruled out if life of any kind requires information processing and we accept the theory that, if dark energy increases the rate at which space expands to the point where a de Sitter horizon forms, that horizon then limits the amount of time in which information processing can be performed in our universe. Well, you could go into speculations about tunnelling into another universe, but let's not...

But just because the theoretical lifespan is limited to 10^117 years, that is hardly justification for calling any hopes for longevity beyond the 'natural' limit of 120 years pie-in-the-sky, is it? And it does not affect the argument that medicine, combined with whatever other sci-tech, should be about helping people become all they can be, rather than merely elevating the sick to some minimal level of comfort.

Apart from immortality and perpetual motion machines, what else is definitively ruled out? Artificial general intelligence? How can it be, when people are patterns of matter and energy that provide proof-of-principle that the laws of nature do not rule out the possibility of mind emerging in patterns of matter and energy? Superintelligence? I know of no law that sets the absolute limit of intelligence at 'human' levels. Drexlerian nanotechnology? The ongoing debate between Drexler, Freitas, Merkle and Richard Jones suggests this is 'Mirai no' (a possible goal with a great many obstacles to overcome before it can be reached) rather than 'Uso no'.

>"Extropia" (get it? it's like Extropian, you know, from the Extropian transhumanist sect founded by Ayn Raelian Robot Cultist Max More'.

No, it is Extropia like "Extropia", or 'an open, evolving framework allowing individuals and voluntary groupings to form the institutions and social forms they prefer', which I believe to be the ideal which online worlds should strive for.

>Dude, neither you nor any of your little white sf fanboy friends are the Wright Brothers. You are not Leonardo or Einstein or Tesla. You are somebody's crazy uncle, you know the one, who stays in the shed out back most afternoons, railing about his genius theories, smelling of his own pee.

Yes, you said so before. Presumably Dale is very proud of this statement. Why else would he repeat it? Personally, I would be ashamed to be possessed of his intellect and yet exhibit behaviour more suited to a scruffy little urchin who lurks behind bike sheds ready to bully passers-by, but that is just me.

Dale Carrico said...

We'll take your points one after another, starting with this last one and moving backwards (very suitable when talking to Robot Cultists):

Extropia indicates that he would personally be ashamed to use the sort of naughty language I do. Well, I'm not.

By all means, I encourage "Extropia" and like-minded orchids among the Robot Cultists to bring their smelling salts to their next readings of my blog in case I say anything indecorous that might despoil their rarefied sensibilities.

I would hate for anybody's crinoline to get soiled.

I certainly wouldn't recommend they ever try to read Cicero or Twain or Parker, else their talcum-powdered vision of what intelligent discourse must look like would likely require unwelcome revision.

For myself, I think "Extropia" should worry much more about how crazy and how wrong he is, rather than how suitable his tone may be to those whose understanding of propriety seems stuck absurdly in Victoriana (funny how retro futurists often end up being when talk turns to morals or politics, I must say).

I disagree that intelligent people don't say fuck and laugh at poop jokes like everybody else. I think pretending otherwise is stupid and pointless, and demanding otherwise in the service of "professionalism" is deranged.

Dale Carrico said...

I pointed out to readers who might not know the associations that your handle "Extropia" was "(get it? it's like Extropian, you know, from the Extropian transhumanist sect founded by Ayn Raelian Robot Cultist Max More)."

To which you replied: No, it is Extropia like "Extropia", or 'an open, evolving framework allowing individuals and voluntary groupings to form the institutions and social forms they prefer', which I believe to be the ideal which online worlds should strive for.

Typing your quoted "definition" into Google, I was directed here, that is to say, to precisely the Extropy Institute, with a picture of dull-eyed techno-libertopian self-help guru Max More smiling up at me, exactly as I expected.

You're a Robot Cultist, "Extropia," and those of us who aren't, often know one when we see one, and know what to think of those who are.

Own up to your own choices or make new ones.

Honestly.

Dale Carrico said...

You continue in several places to insist that the topics that so fixate your attention, like techno-immortality and the nanobotic Anything Machine, are not "ruled out" by our broadest understanding of the physical universe.

I really do think you ought to give some thought to how often this is the justification you fall back on.

Is it really sensible to be preoccupied with so many things that have nothing going for them but that they are not logically impossible, however disconnected from scientific consensus, from actually urgent problems, from actually proximate hopes?

What is it that drives you repeatedly back to faithful assertions about superlative outcomes that just happen to mirror so conspicuously the attributes already conventionally associated with the furniture of theology in its most simpleminded guises?

Dale Carrico said...

I think it is not because you are such interesting scientists and thinkers that you Robot Cultists are so fixated on remote outcomes that are at best logically possible rather than on actually ongoing and proximate technoscientific problems. I honestly think, more often than not, that it is because you are scared to die, scared of vulnerability, scared of diversity, scared of contingency in life, and so you invest your time in wish-fulfilment fantasies of omnipotence, omniscience, and omnibenevolence (translated into the pseudo-scientificity of superlongevity, superintelligence, and superabundance, as I have elaborated elsewhere) in the usual manner of weak-kneed would-be order-givers/order-takers in the Priests/Believers authoritarian circuit, looking for some way out of the responsibility of actual adulthood in the world.

Of course, I don't know any of you intimately enough to treat that as a clinical diagnosis, I daresay many of you are nice enough, interesting enough people in your private lives, kind to small animals, and so on.

And maybe some of you are preoccupied with superlative discourse not because it speaks to deep-seated problems you have coping with the fact of your finitude after all, but just because you are not knowledgeable enough, critical-minded enough, or bright enough to have put into perspective these highly marginal, conceptually confused pseudo-scientific discourses that caught your eye in some random sort of way and captivated your imaginations to the exclusion of sense.

Who's to say? These things happen, I suppose.

Dale Carrico said...

hardly justification for calling any hopes for longevity beyond the 'natural' limit of 120 years pie-in-the-sky

Why would I say such a thing?

As far as I know, there is already a documented case of a person living beyond 120 years: Jeanne Calment, in France, lived to 122, I believe. Even if that is wrong, there are surely others who have.

And as I regularly say, genetic, prosthetic, and cognitive therapies are intervening in profound ways into the capacities, morphologies, processes that have hitherto defined the "natural" (a word that just means "customary" in this usage after all) limits.

To advocate medical research and universal basic healthcare provision as I do is to take up and take on what I have described elsewhere as a shift of the medical imaginary from normative remediation to consensual prosthetic self-determination and lifeway diversity.

It's true I don't go from there into enthusing about living for thousands of years, and all the rest of that nonsense, because I see no reason at all to treat any of that hyperbole as sufficiently proximate to justify attention misdirected from urgent ongoing scientific and policy questions, and also I discern in those who do talk about it clear indications of pathology and unseriousness.

One arrives at a point when a death-fixation (including a fixation on death-overcoming) just becomes a way to be dead in life even before death ends life -- as it will, you know. Every single person reading these words is going to die, and not after living for thousands of years in tech-heaven.

Honestly, it's not that difficult to be a grown up about these things and move on to funding and regulating research to solve real healthcare problems in the world.

Dale Carrico said...

And I can scarcely count the number of times that a conceptual design or an idea for a new technology is denounced as akin to a perpetual motion machine. It's the 'X does not exist today so it must be impossible' argument.

Perhaps you should re-read my actual argument.

I don't deny that some of the outcomes superlative futurologists declare their faith in do not seem to contradict in principle our broadest scientific understanding -- I do deny that it is sensible to hang one's hopes or one's hat on so diffuse a hat-hook, inasmuch as it is not logical possibility but stepwise development -- always involving both discovery and social struggle -- that issues in outcomes.

Also, I have taken pains elsewhere to draw your attention to the recurring frames and metaphorizations through which you Robot Cultists strive to render plausible (not least to yourselves) what upon closer scrutiny seem enormously confused notions -- the dependency of the very notion of "immortalization" via "uploading" or "transferring" actually biologically-incarnated "minds" "onto" software, for example, or the dependency of Drexlerian nanotechnology (rather than the actually real nano-scale interventions that are interesting enough on their own terms, but which don't deliver superlative handwaving outcomes and so don't interest Robot Cultists very much) on biological analogies that actually fail to obtain at the level of specificity, and so on.

Of course, my deepest concerns are with what I take to be the profoundly anti-democratizing impacts of futurological discourses (both in the crazy-extreme Robot Cult forms you indulge, and in the more prevailing global developmental discourses for which superlativity represents a clarifying reductio), and with their substitution of instrumental amplification for the freedom of meaning-making peer-to-peer without which, in my view, human life is not worth living. I talk about these things elsewhere at length.

Extropia DaSilva said...

'I certainly wouldn't recommend they ever try to read Cicero or Twain or Parker, else their talcum-powdered vision of what intelligent discourse must look like would likely require unwelcome revision.'

You forgot the key difference. They were great wits. You are not.

'Typing your quoted "definition" into Google, I was directed here'...

Here being the page for the principles of extropy, where you find statements such as 'The Principles of Extropy do not specify particular beliefs, technologies, or policies. The Principles do not pretend to be a complete philosophy of life....the Principles do not claim to be eternal truths or certain truths.'

Which I take to mean any person is free to reject parts of the system, or to modify it. Therefore, any person who says 'so-and-so quoted from the extropy website, so obviously that person can be pigeonholed as an "Ayn Raelian Robot Cultist" (or whatever silly stereotype)' has clearly not grasped the idea that the system is flexible enough to defy stereotyping of those who see worth in some aspects of its guidelines.

'You're a Robot Cultist'.

You continue to use a rather inaccurate definition of 'cult'. A cult is not a group or organization whose members have beliefs that you think are weird, or silly, or delusional. A cult is defined as having the following characteristics:

VENERATION OF THE LEADER: Glorification of the leader to the point of virtual sainthood or divinity.

INERRANCY OF THE LEADER: Belief that the leader cannot be wrong.

OMNISCIENCE OF THE LEADER: Acceptance of the leader's belief and pronouncements on all subjects, from the philosophical to the trivial.

HIDDEN AGENDAS: The true nature of the group's beliefs and plans is obscured from or not fully disclosed to potential recruits and the general public...

Well, the definitions go on, but already it ought to be clear to anyone who claims any kind of knowledge about such things that transhumanism or extropianism just do not fit those definitions. The beliefs of transhumanism are not 'hidden' or 'obscured'; they are there as plain as day on any pro-H+ website FAQ. There is no 'leader' who is never wrong, whose every pronouncement must be accepted under pain of rejection from the group. Anybody is free to question anything at all about transhumanism, whether they include themselves as one of the group (like me) or would not touch it with a ten-foot pole (like you).

So whatever else it may be (pseudoreligion?), it just is not a cult. Period.

Extropia DaSilva said...

'Is it really sensible to be preoccupied with so many things that have nothing going for them but that they are not logically impossible, however disconnected from scientific consensus, from actually urgent problems, from actually proximate hopes?'

I would say that the chances of any vaporware technology ever seeing the light of day increase as the motivations for success diversify.

Smart robots would be useful for industry, convenient as home appliances, and tactically decisive in military conflicts. Reverse-engineering the functions of the human brain could well lead to insights into how to build artificial general intelligence, but it would also provide clues on how best to deal with neurological disorders. And then there is the fact that, since time immemorial, the human race has wanted to know how the mind works. Well, one way to investigate such a question is to build a working model that captures the salient details.

So there you go: multiple reasons why the pursuit of AI should go on. Many groups from various fields are pursuing their own ideas of how the mind works and how to encode that into AGI, and they have different reasons for doing so. Perhaps most are on the wrong path, but some may be right, or at least will acquire knowledge that will point us in the right direction.

We should also consider convergent knowledge, whereby a seemingly unconnected area of research comes to our aid. A recent example of this comes from optogenetics, a technique combining lasers, neurology, surgery, and genes taken from certain microorganisms, which together produce a direct control mechanism for neurons. Team leader Dr. Karl Deisseroth commented, "these microorganisms were studied for decades by people who just thought they were cool. They didn't have a thought for neurology, much less neuroscience…[but] without that, we would not be able to build what we did". Something like human-level AI may not happen just because some robotics lab is trying to build such a thing, but because many seemingly unrelated areas of research converged on the solution to this notorious problem.

Nanotech? It would be tremendously useful to chemists, to biologists, to materials scientists. There is also the rather pressing concern that we have finite resources, which necessitates a push in the direction of increasingly fine control over manufacturing techniques. That logically leads to the building of materials with atomic precision, assuming there is no fundamentally unavoidable, showstopping roadblock lurking ahead.

'Honestly, it's not that difficult to be a grown up about these things and move on to funding and regulating research to solve real healthcare problems in the world'.

There is little to argue with in Dale's 1:23 PM post. I am sure any sane person would agree that medicine should be concerned much more with helping the sick and dying, and not so much with enhancing the healthy. The fact that some people do not have access to clean drinking water clearly needs to be addressed far more urgently than access to nanotech 'anything machines' (which is a silly parody of what nanosystems offer, but never mind).

But what was it that James Hughes said? People are not just interested in helping the sick. If that were so, the Paralympics would get 100% of the funding, leaving the Olympics with no financial backing whatsoever. No, people are also interested in seeing what heights the most talented, the most physically fit, the most mentally gifted are capable of reaching.

I dunno, maybe the world ought to be populated exclusively by people who are only interested in delivering the basic necessities of life to everyone, who never let their imaginations wander into 'superlative' possibilities.

Actually, no. I do not think the world would be better off if that were the case.

Dale Carrico said...

Shorter Extropia:

One: Dale thinks he's so smart but he's not, so there!

Two: You may think my weird beliefs are like a cult, but you'll be sorry when the Robot God comes barreling down one day!

Three: Your problem is that you just think too small, but me and the boys here, we've got this vision, ya see: immortality, wealth beyond the dreams of avarice, and robots to solve every problem and answer your every whim, c'mon aboard, what's not to like?

Bored now.