Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, October 28, 2007

My Sadly "Outdated Heuristics"

On a blog called Transhuman Goodness my Superlative Technology Critique has come in for rebuke and in terms that are becoming depressingly familiar. From the post, "Imagination Is Banned," I give you Roko:

I take issue with... Dale and [likeminded others] when they want to stop people from letting their imaginations run wild

Once again, my sad failure of imagination is exposed. You should know that I am actually trying to institute a national holiday to precisely this effect, Stamp Out All Imagination Day. "Ban" is exactly the right word to evoke when confronted with my critiques of sub(cult)ural futurisms, for, indeed, my plans call for Black Helicopters and ThoughtCrime tribunals and the whole nine. Oh! If there's one thing I just can't stand it's somebody with an original thought or any stray insight that might warp my fragile little mind. (As with yesterday's post, I fear you will have to wade through some snark to get to argumentative substance. For substance, you might do better to peruse the Summary texts.)

and instead focus attention only onto things which will happen for certain (or almost for certain) and which will happen soon. They are telling us to take our heads out of the clouds and get our noses back down to the grindstone… But I think that they should acknowledge the value of what they have started calling the "superlative" perspective.

If people confined their conversation to certainties we'd all have a dull time of it indeed. This response to criticism certainly feels quite odd, directed at a lover of William Burroughs and Donna Haraway, and at a blog the most consistent feature of which is the posting of paradoxical aphorisms by Oscar Wilde. For whom but the faithful does it feel like censorship to encounter criticism? For whom but the cultist does the exposure to rejection threaten the sense of self so much that it feels like imagination itself is under threat? For whom but the snake-oil salesman does the exposure of pretensions to knowledge where they have not been earned provoke such hysterical defensiveness?

By all means keep your heads in the clouds, it's not for me to straitjacket your wild hopes or flights of fancy, even the ones that I think are stupid. I don't have the power to "ban" anything, nor obviously would I covet such a power. What an odd thing to say! What a weird response to criticism! By all means be a poet, a pervert, a pleasure hound, please, but just don't expect me to pretend that any of that makes you a policy wonk or, far worse, a Priest who deserves his collection plate. Superlative Technocentrics should realize that if what they are looking for are the pleasures of a literary salon or an amateur futurological blue-skying convention they should just own up to that honestly and run with it, drop the outmoded and self-marginalizing identity politics, the policy think-tank pretensions, the obvious cult paraphernalia, and the endless defensive pseudo-science. If it's Imagination you really think you're defending, just try being an aesthete or a philosopher for real, and stop sounding so much like a corrupt lobbyist trying to squirrel away some cash for a Bridge to Nowhere or a salesman hawking boner pills and 80s virtual reality rigs in Vegas.

If you accept that technological change is accelerating, you must also accept that our prediction horizon is becoming ever shorter. 1000 years ago there was no need to wonder about what technologies may (or may not) arrive in the next 5-10 years, because technology moved so slowly that one would always have ample warning before anything even moderately new arrived on the scene…

First of all, no, I do not accept that technological change is "accelerating." Indeed, as I have often written before, this actually seems to me a manifestly absurd thing to say when quite obviously some "lines" of technoscientific change are accelerating for now, some are stalling, some appear to be failing altogether, some are combining in unexpected ways yielding jolts and leaps, some are ripe for opportunistic appropriations that will send them off who knows where, and so on.

The Superlative obsession with Acceleration and even "Acceleration of Acceleration" (which really cracks me up as an especially egregious line in hype) may well reflect what the deepening instability of expanding neoliberal financialization of the economy looks like to its beneficiaries (whether real, imaginary, or just short-term), but I don't think it is a very good figure to capture the actual dynamics and complexities of contemporary technodevelopmental churn. All these technodevelopmental arrows hiking hyperbolically up up up the futurological charts at futurological congresses depend for their morphologies on all sorts of definitional hanky-panky, absurd levels of technological determinism and autonomism, utter indifference to the always absurdly uneven distribution of actual developmental costs, risks, and benefits involved, and so on.

I quite understand that the whole Accelerationalism move does conjure up a kind of providential current you can pretend to be riding if you happen to hanker after reassurance in the face of unintended technodevelopmental consequences or want a rationale for conquest or prefer not to have to explain yourself too much or clean up after your own messes, and it has a nicely bolstering ring to it, kinda sorta like Manifest Destiny did to the people with their hands on the triggers. Believe me, I get it, I get it.

I myself am not much interested in the whole acceleration of acceleration model so much (except for a laugh) as I am in the idea of an ever deepening democratization of technodevelopmental social struggle. I want the costs, risks, and benefits of technodevelopmental change to reflect the stakes and the say of the actual diversity of stakeholders to that change. I want this because it is right, because it expands the responsiveness of emerging developmental compromise formations we must cope with together come what may, and because it expands the intelligence of those formations precisely because it better reflects a diversity of valuable perspectives. The emphasis on "acceleration" too easily becomes an alibi for elitist circumventions of democracy (skip down to your own metaphorical evocation of your role at the head of the vanguard, at the top of the mast, among the visionaries, and so on), the risks are too urgent, the benefits too great, ends justify means, it's easier to ask forgiveness than to get permission, blah blah blah, all the same old tired reactionary elitist shit -- except, of course, you know, it's the future! Yes, I propose that there is a loose complementarity between this Superlative acceleration fixation and anti-democratization, between a democracy emphasis like my own and a suspicion of general accelerationalism.

If you refuse to try and look over that horizon, you may end up getting a very nasty shock. Dale is quick to belittle the concept of AGI as a “Robot God”… so presumably he thinks we should not waste our time working on it or thinking about it. After all, it’s over the prediction horizon. It’s “Idle speculation”. But Dale is using outdated heuristics; if we follow his advice, we might find out that he is mistaken the hard way.

If Professor Zed's goofog exhales its brittle brown breath from the sinister stack poking up from his deep underground jungle lab a billion innocent scouts might well die. How can you justify not devoting resources to finding Zed and stamping out the goofog or working up a decent anti-goofog goophage? Sure, this is all probably bullshit, but just think about the scale of destruction I'm invoking here… Even if the goofog has only a 5% chance of coming true won't you feel stupid as you gasp your last breath that you didn't spend a billion on the Technotastic Institute for Stamping out Goofog and Other Arbitrary Awfulnesses? A billion dollars is chicken feed compared to the destruction of every mammal and mollusk on planet earth, even you can see that surely?

Look, did I say one billion? Dig, this is a thought-experiment, I can ratchet up the death count interminably. Yay! It's Imagination! What if I said two billion? What about five billion? Five billion souls might be spared by one billion Dead Presidents sliding over to these bright boys from the TISGOAA here.

If we follow the advice of those timid types who shun the shattering scenarios of TISGOAA and devote their energies instead to neglected treatable diseases in the overexploited world, switching to decentralized renewable energy provision, ending war profiteering, bolstering up international law and standards, and providing basic income guarantees… well we might just find out they are mistaken the hard way when that goofog comes barreling down our asses and then, boy howdy, won't we all wish we had listened more to the Brain Trust over at TISGOAA!

Most people will just throw out ideas that sound silly to them without a moment’s thought. As a mathematician and scientist, I have been trained not to do this. Theory of mechanics where time is not absolute? That sounds silly, but it’s actually true. Atoms which are in two places at once or even everywhere at once? Sounds silly but is also true.

Yes, yes, I know, I know, you're all Einstein and Tesla and the Wright Brothers and possibly Ayn Rand, too, all condensed into one radioactively brainy scientastic package. What "sounds silly" to me after the less than "a moment's thought" that elapsed while I was writing thousands upon thousands of words over years of time on these subjects of Murderous Robot Gods, nanoscale swarm weapons reducing the earth to goo, or nanoabundance owned by the rich nevertheless installing a post-scarcity gift-society for all, rejuvenation pills offering sexy immortal lives to billions now living, and uploads into digital networks bequeathing anybody who craves eternity an angelic informational existence, all that stuff that "sounds silly" to me sounds instead like the soundest science and the most serious foresight to them as knows the Truths of the Elect.

Transhumanists look over the horizon… In the ship of society, we are like the man in the crow's nest. If we say that we see something like AGI or Advanced Nanotechnology over that horizon, don’t take it as a certainty, because there’s a good chance that we’re wrong. But at least take the idea as a serious possibility, and start making contingency plans.

You're right, you guys really are awesome, if you do say so yourselves.

59 comments:

Anonymous said...

Even if the goofog has only a 5% chance of coming true won't you feel stupid as you gasp your last breath that you didn't spend a billion on the Technotastic Institute for Stamping out Goofog and Other Arbitrary Awfulnesses?

I assume you agree that if something actually does have a 5% chance of killing a billion people, it deserves quite a lot of attention. (Irrespective of whether any particular technology people are excited about has this property.)

Dale Carrico said...

What do you think?

jimf said...

Dale wrote:

> [S]top sounding so much like a corrupt lobbyist trying to squirrel
> some cash for a Bridge to Nowhere or a salesman hawking boner pills
> and 80s virtual reality rigs in Vegas.

"Transhumanist Queen Natasha" on the Bridge to Nowhere:
http://transumanar.com/index.php/site/saturday_20_transhumanist_meetings_in_second_life/

No boners in Second Life, though, they say.

> [Y]ou're all Einstein and Tesla and the Wright Brothers and possibly Ayn Rand. . .

Only Norman Einstein, though.
http://www.normaneinsteinbook.com/

jimf said...

> Look, did I say one billion? Dig, this is a thought-experiment,
> I can ratchet up the death count interminably. Yay! It's Imagination!
> What if I said two billion? What about five billion? Five
> billion souls might be spared by one billion Dead Presidents sliding
> over to these bright boys from the TISGOAA here.

"Total casualties amounted to 851.4 billion (+/- 0.3%) sentient creatures,
including medjel (slaves of the Idirans), sentient machines and non-combatants,
and wiped out various smaller species, including the Changers. The war resulted
in the destruction of 91,215,660 (+/- 200) starships above interplanetary,
14,334 orbitals, 53 planets and major moons, 1 ring and 3 spheres,
as well as the significant mass-loss or sequence-position alteration
of 6 stars.

Despite the relatively small scale -- in comparison with the rumoured
conflicts of the past as referred to by the sublimed species
of the galaxy -- the Idiran-Culture war is considered one of the most
significant events in (Iain M. Banks) galactic history."

http://en.wikipedia.org/wiki/Idiran-Culture_War

Anonymous said...

Dale,

Potential casualties can't be scaled up indefinitely since the extinction of humanity (and any human-derived intelligences) provides a cap. In a world with known risks such as asteroids, supervolcanoes, etc., ultra-improbable risks like Zed's Goofog or Christian Armageddon can't rightly command our attention until the better-substantiated ones are dealt with.

However, I don't see a justification for ruling out the development of human-level AI in our lifetimes with extraordinarily high confidence. Despite decades of hype and failure (combined with the development of improved hardware and knowledge) I can't rule out (assign a probability of less than 1% to) the development of cost-effective fusion power over the next 30 years. In order to do so I would have to have a tremendous amount of physical and chemical knowledge that no one in fact possesses today.

Given economic incentives from various industries, increasing knowledge of the brain, greatly increased hardware power, accumulating improvements in computer science/software, etc, and the very limitations of our current understanding of intelligence (we don't have a theory with which to rule out all approaches that may be tried in coming decades) I don't see any reasonable way to get 99+% certainty that AI will not be developed in our lifetimes. And you do seem to make the substantive claim that there is a negligible probability of AI development, in addition to your (sometimes penetrating) psychosocial and ad hominem critiques. If the probability of AI is non-negligible, then the appropriate response to cultish tendencies and flawed organizations pondering the topic is to supplant them with more sensible approaches, not to take them as a justification to entirely dismiss the underlying issues.

Environmentalist organizations have consistently made grossly exaggerated and often outright bogus claims about coming environmental apocalypses or dangers of various technologies. They are riddled with people who take a religious attitude towards 'Nature' and reject science whenever it conflicts with their preconceptions. Nevertheless, these flaws do not mean that smart people should not work to protect the environment, and to do so in ways that many doctrinaire environmentalists initially find objectionable, e.g. cap-and-trade systems, which many environmentalists saw as abominably 'selling the right to pollute,' or the creation of crops genetically engineered to require less water or chemical fertilizers. In my view, the same logic applies to weird people discussing AI.

James,

What's the point of the Banks quote?

jimf said...

"Utilitarian" asked:

> What's the point of the Banks quote?

What, you mean it isn't obvious? :-/
All right.

For one thing, there's always been a subtext --
among the **unapologetic** superlatives, dontcha know --
that the events surrounding the Singularity on this planet
may well determine the entire remaining course of intelligent
life in the universe (yes, universe).

Second, the quote is an allusion to the fact that Banks's imagination
(unlike Gibson's) is one that one is permitted to appreciate
if one is a proper >Hist (not that I hold that against
Banks -- but then again, the majority of SF authors seem
to be brighter than the majority of their fans,
which isn't surprising, I guess). The best parts of
the >Hist imaginary comprise a pastiche of the scenarios cooked
up by these authors (together with a few less savory elements
that can't be blamed on Banks or Egan).

And finally, it's always tickled me that the error range
in that number encompasses (and, no doubt, the numbers
were chosen so that it should encompass) the current
population of the earth.

Anonymous said...

"that the events surrounding the Singularity on this planet
may well determine the entire remaining course of intelligent
life in the universe (yes, universe)."
Given the Fermi paradox, the fact that we can't reach the very distant galaxies because of the expansion of the universe (barring really weird new physics), and the 2nd Law of Thermodynamics, it seems pretty plausible that our future light-cone (the *accessible* universe) is otherwise uninhabited. Of course, for any multiverse theory or big spatial universe there will be lots of other intelligent life, but we won't have access to it.

ZARZUELAZEN said...

That really takes the cake that does - it's the Robot cultists that are determined to 'crush imagination', not Dale.

Yudkowsky and co. have made it very, very clear that they're not interested in hearing from anyone who is not 'smart' (i.e., thinks exactly like them). Wild ideas from anyone that contradict SIAI views are met with the meanest, nastiest derision. When I finally got sick of being kicked around by those guys and started answering them back the same way they treated me, they just got me kicked off the transhumanist lists.

So they're actually much worse than Dale's parody suggests. It's more like:

'Hey we have all the answers, we're smarter than you and unless you've got an IQ of at least 150 we're not interested in hearing from you. Don't have any independent thoughts at all because you're too stupid.

Oh, but look you little monkey, you still help us by sending me all your money - I await your cheque. Thanks. Yours- Robot Cult'

Anne Corwin said...

Superlative Technocentrics should realize that if what they are looking for are the pleasures of a literary salon or an amateur futurological blue-skying convention

Hmm. I guess my impression from the beginning has been that transhumanism is something basically equivalent to "a literary salon, or an amateur futurological blue-skying convention".

The idea of taking it more seriously than that seems...well, frankly bizarre. And I say this as someone who is another cheerful "bottle washer" (well, technically, an envelope-stuffer, but you know what I mean) for the WTA.

I take a lot of things quite seriously (longevity/healthcare advocacy, neurodiversity, disability rights, etc.) and sometimes I attempt to extrapolate into the future using "transhumanist blue-skying" to make points about these things (e.g., noting how anyone who conceives of a world rife with cognitive modification technology would do well to familiarize himself with existing neurological variation).

That is, transhumanism is a particular lens through which principles and policy can sometimes be looked at, but it's not a principle or a policy in and of itself. And it doesn't make sense to pretend that it is.

I honestly (and I am not trying at all to be sarcastic or snide here) think that people with Very Superlative Ideas ought to devote their energies toward trying to write some good science fiction.

That would allow them to exercise their imaginations in full spectral bloom (and there are some impressive superlative imaginations out there), and it would relieve them of the burden of having to "prove" that people ought to be really concerned about advanced AI, etc. When people read science fiction, they suspend their disbelief for the sake of the story, and if superlative sorts want more people to actually think about their ideas, they might seriously want to think about working on a few good novels.

(and once again, I hope this does not come across as my being arrogant or patronizing -- I don't think I'm "better than" people with more superlative mindsets, or that my own ideas wouldn't sometimes be better suited to the pages of a fiction book. I'm just making a suggestion based on the content of some of the arguments I've been reading lately.)

Dale Carrico said...

Anne: I honestly (and I am not trying at all to be sarcastic or snide here) think that people with Very Superlative Ideas ought to devote their energies toward trying to write some good science fiction.

Hear, hear!

ZARZUELAZEN said...

Great idea AnneC. I have an updated hot-list of some of the best sci-fi sites on the web - including critical analysis - it came in at around 126 links at the last update. Some really great links in there. Check 'em all out:

http://marged.100webspace.net/SciFi_Fantasy_Internet_Directory.htm

---

Science fiction of course is one of the *outputs* of *reflection in the volitional domain* - Art communicates *teleological ideas*, as compared to, for instance, computer modelling languages, which communicate *logical ideas*.

Effective *communication* of teleological ideas conveys a representation of the harmonious interaction of agents to form an integrated whole - this is Beauty - and Beauty is, after all, what it's all about. Art is food for the imagination and the spirit in the Soul!

Now go feast your imagination at some of my hotlinks! Learn about the wonderful card game of 'Magic':

http://en.wikipedia.org/wiki/Magic:_The_Gathering

Make sure you learn about how the gorgeous yet simple graphics of the computer game 'Myst' swept the world:

http://en.wikipedia.org/wiki/Myst

Visit the fantasy art of 'Elfwood':

http://www.elfwood.com/

or learn why millions recently mourned the passing of American fantasy writer Robert Jordan, who died before he could complete his famous fantasy epic 'Wheel Of Time':

http://www.tor.com/jordan/

Ah the rich exquisiteness of beauty and the imagination to be found in science fiction and fantasy!

Giulio Prisco said...

Anne: "I honestly (and I am not trying at all to be sarcastic or snide here) think that people with Very Superlative Ideas ought to devote their energies toward trying to write some good science fiction."

Anne, I don't have what it takes to write like Clarke, Rucker, Egan and Stross! The only thing I can do is to read some good science fiction, and to tell others that it is good.

"transhumanism is something basically equivalent to "a literary salon, or an amateur futurological blue-skying convention""

I can certainly live with this definition. But our friend Dale seems to think that members of this salon are automatically disqualified from participating in "serious" political and social initiatives. Which is what I call chickenshit.

jimf said...

> But our friend Dale seems to think that members of this salon are
> automatically disqualified from participating in "serious"
>political and social initiatives. Which is what I call chickenshit.

Dale is not saying that SF lovers are thereby disqualified from
being responsible citizens. He **is** saying that SF lovers
who have come to believe that their enthusiasm for the tropes
of SF -- aliens, spaceships, God-like or Satan-like "superintelligent"
beings, or magical matter replicators -- **constitute**
"'serious' political and social initiatives", are full of it.

And they are. And dangerously so. Most of the danger
derives not from the content, which could just as well be
Ouija boards and lizard beings from the Sixth Density, but from
the bubble-universe guru/True Believer social structure that's
immune to criticism and the normal controls of common sense
and compromise. The fact that this particular proto-cult
is attractive not to trailer-park inhabitants but (as were
Scientology and Objectivism before it) to "Silicon Valley
wealthoids" (some sporting pretty nasty reactionary politics)
makes it all the more ugly.

VDT said...

Dale is not saying that SF lovers are thereby disqualified from
being responsible citizens. He **is** saying that SF lovers
who have come to believe that their enthusiasm for the tropes
of SF -- aliens, spaceships, God-like or Satan-like "superintelligent"
beings, or magical matter replicators -- **constitute**
"'serious' political and social initiatives", are full of it.


And they are. And dangerously so. Most of the danger
derives not from the content, which could just as well be
Ouija boards and lizard beings from the Sixth Density, but from
the bubble-universe guru/True Believer social structure that's
immune to criticism and the normal controls of common sense
and compromise. The fact that this particular proto-cult
is attractive not to trailer-park inhabitants but (as were
Scientology and Objectivism before it) to "Silicon Valley
wealthoids" (some sporting pretty nasty reactionary politics)
makes it all the more ugly.


EXACTLY!

Why some people are unable to understand this obvious point boggles my mind...

jimf said...

"Utilitarian" wrote (to Dale):

> I don't see a justification for ruling out the development
> of human-level AI in our lifetimes with extraordinarily high confidence. . .
>
> Given economic incentives from various industries, increasing knowledge
> of the brain, greatly increased hardware power, accumulating improvements
> in computer science/software, etc, and the very limitations of our current
> understanding of intelligence (we don't have a theory with which to
> rule out all approaches that may be tried in coming decades) I don't see
> any reasonable way to get 99+% certainty that AI will not be developed
> in our lifetimes. And you do seem to make the substantive claim that there
> is a negligible probability of AI development, in addition to your (sometimes
> penetrating) psychosocial and ad hominem critiques. If the probability
> of AI is non-negligible, then the appropriate response to cultish tendencies
> and flawed organizations pondering the topic is to supplant them with
> more sensible approaches, not to take them as a justification to entirely
> dismiss the underlying issues.

Look, suppose we assume for the sake of argument that human-level AI
is on track for 2040. (I suspect that if you got that RAND Corporation
report on the likelihood you'd be disappointed, but let's put that
aside for the sake of argument).

In order to go on to argue that SIAI (or a similar outfit) should be
funded to the level of, say, the Centers for Disease Control, or
even the Defense Department, and take a similar front-stage role
in public policy, you'd also have to buy their argument that the
achievement of human-level AI would necessarily entail the
"recursive self-improvement" bootstrap that would almost immediately
inflate the AI into God-like (or Satan-like) superintelligence,
and would thereby constitute the "existential threat" that
they go on about. You'd also have to buy the argument (and all
the question-begging it implies) that a mathematical-deductive
approach to "friendliness" is an adequate response to this
"threat".

There's a lot more baggage here than just "not ruling out the possibility
of the development of human-level AI in our lifetime". If you're
**that** attached to the entire package, then you'll have to forgive
me for suspecting that you've already developed a hankering for
that particular flavor of kool-aid.

gp said...

"Dale is not saying that SF lovers are thereby disqualified from
being responsible citizens. He **is** saying that SF lovers
who have come to believe that their enthusiasm for the tropes
of SF -- aliens, spaceships, God-like or Satan-like "superintelligent"
beings, or magical matter replicators -- **constitute**
"'serious' political and social initiatives", are full of it."

Not so. I must have said hundreds of times that I do NOT believe that SF scenarios **constitute** serious political and social initiatives. Others have said similar things. Dale will say that we do not really know what is going on in our minds, discard our arguments because he knows better, and go on with his broken-record talk.

VDT said...

Not so. I must have said hundreds of times that I do NOT believe that SF scenarios **constitute** serious political and social initiatives. Others have said similar things. Dale will say that we do not really know what is going on in our minds, discard our arguments because he knows better, and go on with his broken-record talk.

Actually, you and others can be quoted as saying the exact opposite regardless of what you may have later said in self-defense, so how do you reconcile these embarrassing contradictions?

VDT said...

Not so. I must have said hundreds of times that I do NOT believe that SF scenarios **constitute** serious political and social initiatives. Others have said similar things. Dale will say that we do not really know what is going on in our minds, discard our arguments because he knows better, and go on with his broken-record talk.

gp, let's say for argument's sake that you are right to the extent that you and the people you know have never said that scenarios constitute serious political and social initiatives. You are (again) missing the larger point that was being made which is that:

Most of the danger derives not from the content ... but from the bubble-universe guru/True Believer social structure that's immune to criticism and the normal controls of common sense and compromise.

Giulio Prisco said...

De Thezier, I believe we discussed this ad nauseam on the wta-talk list, which has archives that can be consulted by everyone and records of who said what. OK, you want to play this stupid game again. So please tell me when I said that SF scenarios **constitute** serious political and social initiatives. Quotes please, not overstretched personal interpretations.

Michael Anissimov said...

SIAI works towards Friendly (through whatever means works, something other than mathematical-deductive if necessary) seed AGI because the people in the organization see it as a high moral priority. This is humanity's first experience of stepping beyond the Gaussian curve of ordinary human intelligence distributions. If it is not facilitated by AGI, it will be by enhancing humans: whether through psychopharmacology, neuroengineering, brain-computer interfaces, gene therapy, etc.

The question is not "if" intelligence enhancement technologies will be available, but "when". When they are, it will become possible to "construct intelligence" actively rather than be limited to human generational cycles, birth-rates, and education. Now there's nothing at all wrong with these conventional human patterns, but we have to note that the introduction of enhancement technology is bound to throw the existing order out of whack.

It shouldn't be hard to imagine that enhanced humans or AGIs could get to the point of being substantially smarter than the smartest humans. After all, the hardware differences, in terms of basic components, between a human and a chimp aren't actually all that large. But a machine could process thoughts at greater speeds and with more flexibility than any member of our species.

If intelligence enhancement tech really does produce a superintelligence, then we have a moral duty to maximize the probability that said superintelligence cares about humanity as a whole, not itself or any narrow group of humans. Otherwise the outcome could be grim. A few thousand Europeans enslaved native populations of millions with "only" somewhat more advanced technology -- here we are talking about fundamental differences in substrate and cognitive architecture. To assume that we could keep intelligence-enhanced people or AGIs under our control is foolish.

So, the idea is to "get them while they're young": create superintelligences with altruistic goal systems. SIAI is the only organization pursuing this goal in a structured manner.

Giulio Prisco said...

De Thezier, thanks for conceding the previous point.

Now concerning "Most of the danger derives not from the content ... but from the bubble-universe guru/True Believer social structure that's immune to criticism and the normal controls of common sense and compromise":

Immune to criticism? WTF, I am discussing this here on Dale's blog, or in other words "in the enemy's camp", and I am not doing this because I have nothing else to do, but because I am interested in the points made by Dale even if I disagree with him. Not your typical cultist reaction, I would say.

VDT said...

De Thezier, I believe we discussed this ad nauseam on the wta-talk list, which has archives that can be consulted by everyone and records of who said what. OK, you want to play this stupid game again. So please tell me when I said that SF scenarios **constitute** serious political and social initiatives. Quotes please, not overstretched personal interpretations.

Since my time online is limited, I am not interested in playing games with you, so I won't waste my time finding and quoting statements which you will rationalize to mean something that doesn't embarrass you.

De Thezier, thanks for conceding the previous point.

I was NOT conceding anything to you. I simply don't want this debate to get sidetracked into another one of your "but-I-didn't-mean-to-say-that" dramas.

Immune to criticism? WTF, I am discussing this here on Dale's blog, or in other words "in the enemy's camp", and I am not doing this because I have nothing else to do, but because I am interested in the points made by Dale even if I disagree with him. Not your typical cultist reaction, I would say.

First of all, discussing "in the enemy's camp" doesn't prove that someone is not immune to criticism or doesn't have a cultist impulse. It simply means that, although you cling to your beliefs in the face of all counterarguments or evidence presented to you, you are willing to engage in discussions whose conclusion can already be predicted, much as a debate with a Christian can be.

Second, neither Dale, jfehlinger nor I was talking about you *necessarily* but about many people in the transhumanist and singulatarian subcultures. The problem is that you always take *any* criticism of these people as a personal attack against you. Can you even concede that some or even many transhumanists and singulatarians do exhibit the behavior that Dale, jfehlinger and I are criticizing? Probably not.

Dale Carrico said...

Michael: SIAI works towards Friendly (through whatever means works, something other than mathematical-deductive if necessary) seed AGI because the people in the organization see it as a high moral priority.

Michael, how's about a nice definition of "Friendly Seed AGI" for the kids at home. A nice sentence or two. The terminology isn't widespread. As you know I have my own sense of what this "high moral priority" amounts to, in fact, but I'd like to hear it from you. As extra credit, I'd be curious if you could define "intelligence" (a concept on which "Friendly Seed AGI" depends) in a comparably pithy way.

This is humanity's first experience of stepping beyond...

Cue the music.

The question is not "if" intelligence enhancement technologies will be available, but "when".

Actually, there is still palpably a question of "if." There is also a question of when the question of "when" might as well amount to the question of "if" due to the timescales and complexities involved.

Now there's nothing at all wrong with these conventional human patterns,

Gosh, that's big of you.

but we have to note [are compelled to note, by some unspecified necessity -- believe me, it isn't logic] that the introduction of enhancement technology is bound to [again the conjuration of necessity, certainty -- where from? One wonders.] throw the existing order out of whack.

You say "Out of whack," but my guess is you think you have a pretty clear idea of how it's gonna go down, Michael. What I like about this is all the certainty and necessity of the phrasing, without much in the way of admitting how freighted all of these pronouncements are by caveats, qualifications, unintended consequences, sweeping ignorance of fields of relevant knowledge, indifference to historical vicissitudes, and so on. All that is bracketed, and only the stainless steel trajectory luminously remains.

It shouldn't be hard to imagine

Of course not. We've all read sf, watched sf movies and tv shows, seen the ubiquitous iconography on commercials, etc. No, it isn't hard to "imagine" at all.

that enhanced humans or AGIs will get to the point of being substantially smarter than the smartest humans.

Smarter -- how? But bracketing all that for a moment, why not notice that there is greater "intelligence" in cooperation, non-duressed functional division of labor, digital networked p2p production already? Why not devote yourself to unleashing the intelligence humans already palpably demonstrate a capacity for, a desire for, in the service of shared problems that are all around us? Why do you think I advocate a basic income guarantee? Of course it's the right thing to do, of course it provides a basic stake protecting people from exploitation by elites, but also it would function to subsidize citizen participation in p2p networks, creating, editing, criticizing, organizing in the service of freedom.

All the Robot God bullshit is just a funhouse mirror in which symptomatic hopes and fears are being expressed (fine as far as that goes -- it's like literature in that respect, to the study and teaching of which, you may have noticed, I have devoted no small part of my life).

If intelligence enhancement tech really does produce a superintelligence, then we have a moral duty to maximize the probability that said superintelligence cares about humanity as a whole, not itself or any narrow group of humans. Otherwise the outcome could be grim. A few thousand Europeans enslaved native populations of millions with "only" somewhat more advanced technology

If if if if if if if if if if if if if if if if -- and then conjurations of global devastation and enslavement. A scenario caveated into near irrelevance and then hyperbolized back into pseudo-relevance. You might as well be talking about when Jesus comes or the flying saucers arrive.

Before you inevitably misread the *substantial* force of what I am saying here as my lack of vision or foresight, understand that to the extent that software actually can produce catastrophic social impact (networked malware, infowar utilities, automated weapons systems, asymmetric surveillance and data-manipulation and so on) these are actual problems to be addressed with actual programs on terms none of which are remotely clarified by the Superlative iconography of entitative post-biological superintelligent AI or eugenicized intelligence-"enhancement."

It's not that I utterly "discount" the "5% risk" you guys constantly use to justify passing the collection plate at Robot God meetings (even though, truth be told, you pull that number out of your asses and can't yet define basic terms -- "intelligence" "friendliness" -- on which you depend to the satisfaction of Non-Believers), nor claim certainty that nothing like the scenarios that preoccupy your attention "will" or "can" come to pass. My critique has never taken that form. I don't think you guys are ready for prime time critique in that vein. While you are playing at being scientists, most of your discourse has too much Amway and Heinlein in it to really qualify for that designation by the standards I am familiar with (bad news for you: I have actually taught philosophy of science -- I know that will come as a shock since you guys like to dismiss me as an effete elite aesthete too muzzy-headed to grasp the hard chrome dildo of True Science).

But, anyway, again it's not that I dismiss your likelihood and timeline estimations (considering them more as a line of hype than real efforts at science in the main), it's that I think such risk as one can actually reasonably attribute to networked malware and lethal automation and the like is best addressed by people concerned with present, actually emerging, and palpably proximately upcoming technodevelopmental capacities rather than uncaveated and hyperbolic Superlative idealizations freighted with science fiction iconography and symptomatic of the pathologies of agency very well documented in association with technology discourse in general.

(Some advice: one day, when the mood strikes you, you might read some Adorno and Horkheimer, Heidegger, Arendt, Ellul, Marcuse, Foucault, Winner, Latour, Tenner, Haraway, Hayles, Noble, for some sense of the things people know about you that you don't know about yourselves. You won't agree with all of it, as neither do I, but if you take it seriously you will come out of the experience feeling a bit embarrassed about the unexamined assumptions on which Superlative Technology Discourses always depend.)

So, the idea is to "get them while they're young": create superintelligences with altruistic goal systems. SIAI is the only organization pursuing this goal in a structured manner.

Yes, yes, I know. The idea is that the Singularitarians are the good guy geniuses in the thankless role of saving humanity from the bad guy geniuses who by design or through accident will create the Bad Robot God who will destroy or enslave us, while you want to get there first and create the Good Robot God who will solve all of our problems (cause, he's infinitely "smarter," see, since "smartness" is a reductively instrumental problem-solving capacity and problems are reductively solvable through the implementation of instrumental rationality) and save the world, it's a Robot God arms race, a race for time, urgent, in fact nothing is more urgent once you grasp the stakes, hell, billions of lives are at stake, etc etc etc etc etc. Complete madness. But, indeed, very "serious." Very "serious" indeed.

If SIAI looks to create anything remotely like its Robot God, the secret lab will be closed down and the people involved thrown in jail (and a good job too) as far as I can tell. Otherwise, the corporate-militarists themselves will be the ones to create such a thing. They certainly would use advanced lethal automation and malware and infowar utilities for malign purposes (as they do almost everything already).

If you were really serious about Unfriendly Robot Gods, you would be engaging in education, agitation, and organizing to diminish the role of hierarchical formations like the military in our democratic society -- demanding an end to secret budgets and ops, making war unprofitable, supporting international war crimes and human rights tribunals, and so on. That, coupled with support for international and multilateral projects to monitor and police the propagation of networked malware and for stringent international conventions on automated weapons systems, is what a grownup sounds like on this topic.

Nothing is clarified by the introduction of Superlative Discourse to such deliberation, only the activation of irrational passions in general, and an increased vulnerability to hyperbolic sales-pitches and terror discourse of a kind that incumbent interests use to foist absurdly expensive centralized programs down our throats to nobody's benefit but their own. Sorry, I don't buy any of it.

Giulio Prisco said...

"Can you even concede that some or even many transhumanists and singulatarians do exhibit the behavior that Dale, jfehlinger and I are criticizing? Probably not."

Sure I can. But last time I checked in the English dictionary there was still a difference between "some" and "all".

VDT said...

Sure I can. But last time I checked in the English dictionary there was still a difference between "some" and "all".

I knew you were going to say that. However, what you seem to willfully ignore is that -- in light of the fact that Dale and I have repeatedly said that there are *some* sensible transhumanists who do not exhibit the behavior we are criticizing -- we have never said that *all* transhumanists were guilty of this behavior. So there is no need for you to always jump up and down to remind us of what we already acknowledge!

gp said...

"you cling to your beliefs in the face of all counterarguments or evidence presented to you"

Indeed, I cling to my belief that 2+2=4 in the face of all counterarguments or evidence that 2+2=5 presented to me.

Seriously. From my point of view it is you who "cling to your beliefs in the face of all counterarguments or evidence presented to you". Did anyone say something about diversity, inclusiveness and let 100 flowers bloom? Or is your point of view the only valid one?

Anonymous said...

"In order to go on to argue that SIAI (or a similar outfit) should be
funded to the level of, say, the Centers for Disease Control, or
even the Defense Department, and take a similar front-stage role
in public policy, you'd also have to buy their argument that the
achievement of human-level AI would necessarily entail the
"recursive self-improvement" bootstrap that would almost immediately
inflate the AI into God-like (or Satan-like) superintelligence,
and would thereby constitute the "existential threat" that
they go on about. You'd also have to buy the argument (and all
the question-begging it implies) that a mathematical-deductive
approach to "friendliness" is an adequate response to this
"threat"."
James,

My argument is that if AI is plausible then there will be some appropriate strategies to pursue, most of them very different from SIAI's donor-funded secretive algorithmic research approach.

If AI is first constructed through imitating brain processes, then beneficial outcomes would hinge more on processes of education and understanding messy emotions than on formally represented goal systems. Then the SIAI approach would be irrelevant (except as something to be taken up by the humanlike AI) but the personality and motives of the resulting AI would remain extremely important.

This is so even without a 'hard takeoff': if it takes time to develop and then educate an AI, the first ones created can occupy whole labor markets by rapid self-replication. This would create massive populations with extremely similar motivations, a powerful constituency that would shape the future and provide the base for modification for superior intelligence (even if that process takes several years). Creating the initial AI and allowing it some freedom to communicate and replicate would be practically and politically irrevocable (as AI could quickly spread across borders and make itself economically and militarily indispensable) even without extraordinarily rapid recursive self-improvement.

The shared motivations of those AIs would matter enormously: shared tendencies towards sociopathy and contempt for humans (harmless in positions of political weakness) could be disastrous even to the point of human genocide, while benevolence at or beyond the upper limits of the human range could make for a tremendously better democratic society. And in either case the susceptibility of digital minds to copying, intelligence enhancement without many of the complications of using biotech on brains, and running on faster hardware would mean tremendous increases in economic and technological growth.

If hostile or unfriendly AI is not a danger at all, then billions of dollars should be put into public funding for basic research in AI, allocated through normal processes of peer review (plus potentially better ones like tiered prizes). The ratio of cost to expected benefit would be very favorable relative to funding for nuclear fusion research or much of our biomedical research funding (most of which will fail to produce anything of value). [Incidentally, I think both fusion power and cancer research are very worth funding, despite their track records of hype and failure.]

If hostile AI is a danger -- and we have unfriendly humans, and the fact that power is a convergent goal for lots of different nonhuman motivations, to support the idea that it is -- then other general measures could be appropriate. Theoretical examination of those dangers that can be conceived of now, conditions on government funding or regulation to ensure that precautions are taken, and possibly an increase in research funding to increase the likelihood that AI is developed under the safety framework rather than outside it come to mind.

Regardless, there are many things to try to do to produce beneficial AI other than secretive research aimed at a provably safe algorithmic AI, funded by private donations at an organization like SIAI with staff you dislike for various reasons (some of them very important ones). There are many possibilities under which AI research and AI safety are of critical importance other than the SIAI hard takeoff scenario, and alternative paths to solutions. I would like to see Dale's critique of Superlative discussion of AI supplemented with positive alternatives, rather than throwing out the baby with the bathwater. [And I don't take the advice to join the ACLU as really serious engagement on positive alternatives, given the extent of resources already dedicated to its causes and diminishing marginal returns. Discussion of activism to disclose secret weapons research by DARPA would better suggest engagement rather than just a convenient club for dismissal. I also haven't seen a response to the argument that if AI safety is nothing to worry about then we should just try to increase public funding for basic research into AI by quite a lot.]

Giulio Prisco said...

"Yes, yes, I know, I know, you're all Einstein and Tesla and the Wright Brothers and possibly Ayn Rand, too, all condensed into one radioactively brainy scientastic package."

I must say that I have never cared much for Ayn Rand. I used to find her writing boring, her ideas naive, her logic sloppy, and her politics disgusting.

But if you guys here are so much against her, I am beginning to think that perhaps I should read her again. Maybe I will like her more this time.

jimf said...

Dale wrote:

> If you were really serious about Unfriendly Robot Gods, you would
> be engaging in education, agitation, and organizing to diminish the
> role of hierarchical formations like the military in our democratic
> society -- demanding an end to secret budgets and ops, making war
> unprofitable, supporting international war crimes and human rights
> tribunals, and so on. That, coupled with support for international
> and multilateral projects to monitor and police the propagation of
> networked malware and for stringent international conventions on
> automated weapons systems, is what a grownup sounds like on this topic.

Yes, and while you were at it, I think your theoreticians would do well
to spend less time formulating networks of "subgoals" and "supergoals"
(as though we were programming some kind of 1980s expert system in
OPS5) and look more at what contemporary psychology has to say
about such things as psychopathy, narcissistic personality disorder,
and Machiavellianism. This is where you look to find out what
makes **people** "friendly" (or otherwise). Rather than shrilly dismissing all
psychology as "bunk" in the manner of Scientologists (or Objectivists,
for that matter). I can understand why this kind of thing might
be **a little close to the bone** for a lot of S-ians, but there
it is. Deal with it.

Giulio Prisco said...

"Dale and I have repeatedly said that there are *some* sensible transhumanists that do not exhibit the behavior we are criticizing"

Good, now we are getting somewhere. I must say that I had formed a different impression from your rants against transhumanists (sorry - superlative technocentrics) but I see that I must have misunderstood some words.

But, wait a minute... Dale, do you agree with the sentence quoted above?

VDT said...

Indeed, I cling to my belief that 2+2=4 in the face of all counterarguments or evidence that 2+2=5 presented to me.

Forgive me for being too concise. I meant to say "who not only holds some belief which the vast majority of his contemporaries recognize as pseudoscientific but clings to this belief in the face of all counterarguments or evidence presented to him."

A good example would be creation-science (which I know you reject) and Tiplerian Omega Point "theory" (which I know you hold).

Seriously. From my point of view it is you who "cling to your beliefs in the face of all counterarguments or evidence presented to you".

Really? Please give one example of a *pseudoscientific* belief I cling to in the face of all counterarguments or evidence presented to me.

Did anyone say something about diversity, inclusiveness and let 100 flowers bloom? Or is your point of view the only valid one?

Being for diversity and inclusiveness doesn't mean someone should become gullible. Should we in the name of diversity and inclusiveness let creation-science be taught in schools? Of course not.

For the record, my point of view is one of open-minded skepticism. And in light of the alarming polls showing how many Americans believe in weird things such as ghosts and UFOs, I do think that an open-minded skeptical point of view is not the only valid one but the most urgent.

Anonymous said...

I should recognize that Dale has just mentioned secret DARPA budgets (although in other exchanges his offered alternatives have been more narrowly ACLU-type).

Also, working towards international restrictions on automated weapons could perhaps lay the groundwork for more general future deals on AI development, although the differences between automated aircraft or combat robots that are under the more or less full control of their command structures and self-willed AI capable of rebellion affect the likelihood of agreement (the treaty against biological weapons was greatly enabled by their tendency to get out of control).

VDT said...

Good, now we are getting somewhere. I must say that I had formed a different impression from your rants against transhumanists (sorry - superlative technocentrics) but I see that I must have misunderstood some words.

The reason you formed a different impression is that 1) rather than taking a deep breath and meditating over what is being said, you succumb to a knee-jerk reaction of having to defend the good name of transhumanism because you take any criticism of it as a personal attack even when it clearly is not, and 2) you selectively forget every instance where and when Dale or I have carefully qualified our criticisms with the sentence *although some transhumanists are sensible*.

But, wait a minute... Dale, do you agree with the sentence quoted above?

Of course he does but he is usually referring to people like James Hughes and Anne Corwin.

Dale Carrico said...

Dale, do you agree with the sentence quoted above [that not all transhumanist-identified people are dangerously deluded Superlative Technocentrics]?

Oh, yes. Case in point is my good friend the socialist-feminist Buddhist James Hughes, who describes himself as a "democratic transhumanist." There are others who participate in transhumanist circles almost entirely through their engagement with his work. Although I disagree with these folks on many questions I have no problem at all saying that I enjoy and benefit from much of their writing.

I admit that I am hoping against hope that one day James will drop the whole transhumanist bit for something more promising -- I think he confuses carrying water for Robot Cultists and Libertopians with building a technoprogressive update to the marvelous Fabian project to bring enlightened socialism to the masses.

It's a truly endearing mistake, so lovely that I forgive the error without a qualm. And, hell, who knows, maybe he'll pull it off. Crazier things have happened.

jimf said...

Justice de Thezier wrote:

> "Dale and I have repeatedly said that there are *some* sensible transhumanists
> that do not exhibit the behavior we are criticizing". . .
>
> . . .but he is usually referring to people like James Hughes and Anne Corwin.

I'll add Eugen Leitl as somebody I think of as a "sensible"
transhumanist.

SF author Damien Broderick is, as is the case with most SF authors,
more clued in than the folks he "goes slumming" with on the >Hist
lists. He has been reluctant to challenge the most blatant silliness,
though, except in the most indirect and ironic ways (sallies which
are usually over the heads of most of his readership). I do not
know the source of his reluctance -- it could be that he's already
staked too much of his reputation on the >Hist agenda (e.g., with
books such as _The Spike_); it could be that he regards the >Hist
community as "his folks", warts and all; it could be that he expects
some good to come out of them, whatever their faults; and it
could be that he shares their elitism, to some extent. At any
rate, he seems to be a starry-eyed admirer of self-styled
"supergeniuses".

jimf said...

Giulio Prisco wrote:

> I must say that I have never cared much for Ayn Rand. I used to
> find her writing boring, her ideas naive, her logic sloppy, and
> her politics disgusting.
>
> But if you guys here are so much against her, I am beginning to
> think that perhaps I should read her again. Maybe I will like
> her more this time.

Oh well, she was the greatest mind the human race has produced
since Aristotle. Maybe even including Aristotle. Oh,
except for L. Ron Hubbard. And maybe you could supply a
more modern exception.

gp said...

"A good example would be creation-science (which I know you reject) and Tiplerian Omega Point "theory" (which I know you hold)"

Bullshit. I do not "hold" Tipler's OPT. I do consider it as an interesting scientific speculation.

I think Tipler's detailed mechanism is probably wrong, and in his writings there are far too many overstretched analogies with Xian faith.

At the same time the basic insight - that at some point in the future intelligence may become a dominant force in the evolution of the physical universe - is interesting. Gardner and Kurzweil say more or less the same thing.

Tipler goes one step further and dares to sketch a possible scheme for resurrection. Which is of course what outrages the PC priests of ultra-rationalism.

gp said...

"Really? Please give one example of a *pseudoscientific* belief I cling to in the face of all counterarguments or evidence presented to me?"

You cling to your belief that narrow-minded political correctness is always good and imagination is always bad. You cling to your belief that those who take speculative ideas into consideration are a danger to socially progressive movements.

jimf said...

Giulio Prisco wrote:

> You [Dale] cling to your belief that narrow-minded political
> correctness is always good and imagination is always bad. You cling
> to your belief that those who take speculative ideas into consideration
> are a danger to socially progressive movements.

Dale "clings" (rightly) to his belief that "imagination" and
"speculative ideas" need to be treated with a healthy
dose of skepticism before being co-opted into excuses
for various peoples' agendas ("politically correct" or
otherwise).

--------------------------------
In about 1972, I had the occasion to meet privately with
[50s SF author A. E.] Van [Vogt] in a bar in Washington DC.
One of the things I wanted to talk about was his relationship with Hubbard
and Scientology. (Van is the only celebrity I've actually personally
spoken with on this subject, but I'll guarantee what I say here is a true
and correct recollection of this discussion.)

Anyway, by the time I met with Van, he no longer believed in any of the
stuff put out by Hubbard. Also, he felt he had to remain silent because
of threats against him and his family. Accordingly, he asked me not to
broadcast his story since he still lived a bit in fear of Hubbard and
his minions.

Van had nothing flattering to say about either Hubbard, Dianetics, or
Scientology during that meeting. I'd say he was totally disillusioned
by his experiences.

He had involved himself with Hubbard as part of an exuberance of youth;
a group of people out to "set the world right" and "make a difference."
What that group evolved into, we now know. But in the beginning, all
of the people involved were filled with idealistic visions of what the
world would be like if people were free of the bad motivations inside
themselves.
--------------------------------
http://groups.google.com/group/alt.religion.scientology/msg/4aec1277d08493c1

Did you know that Scientology in the 50s (allegedly) served
as a de facto dirt-collecting arm of the FBI? The "auditing"
process digs up the most intimate and embarrassing details of the
"patient"'s personal life, all of which is recorded and archived
by the organization. This is used for blackmail against an
unruly client, if necessary, but apparently other "friendly"
authorities also find it useful.

VDT said...

Bullshit.

Please refrain from using profanity (which is very different from an insulting personal attack, which I can tolerate).

I do not "hold" Tipler's OPT. I do consider it as an interesting scientific speculation.

Hmmm... I am willing to give you the benefit of the doubt but it seems to me that you only started saying that *after* I and others called you on it. In other words, you had no problem making unambiguous statements of faith like "future magic will resurrect the dead!" based on Tipler's writings.

Tipler goes one step further and dares to sketch a possible scheme for resurrection. Which is of course what outrages the PC priests of ultra-rationalism.

The fact that you say that shows not only that you have a poor understanding of rational skepticism and the academic critique of Tipler's pseudo-scientific ideas but that you have an anti-intellectual impulse which I find very disturbing.

You cling to your belief that narrow-minded political correctness is always good and imagination is always bad. You cling to your belief that those who take speculative ideas into consideration are a danger to socially progressive movements.

First, even if what you said was true, these are not examples of a pseudoscientific belief, which is the issue under discussion.

Second, the fact you got this utterly ridiculous impression from anything I have said on the wta-talk list shows how incapable you are at understanding any criticism that is leveled against you and/or the subculture we belong to.

Third, political correctness is a term used to describe language, ideas, policies, or behavior seen as seeking to minimize offense to racial, cultural, or other identity groups. Although I am concerned that transhumanists often promote language, ideas, policies, or behavior that unintentionally maximize offense to some groups such as the disabled, my criticism of the pseudoscientific nature of some transhumanist ideas has nothing to do with minimizing offense to anyone but with demanding that transhumanism live up to its claim that it has a strong respect for reason and science.

Fourth, in light of the fact that I am working on several science-fiction scripts, the notion that I find imagination to be always bad is laughably absurd.

Fifth, there is nothing wrong with taking speculative ideas into consideration. That's what ethics is often about. However, I do think that those who have come to believe that their enthusiasm for the pseudo-scientific tropes of science fiction constitutes serious political and social initiatives are, at best, a priority distraction and, at worst, a credibility danger to socially and politically progressive movements. That being said, if someone can provide me with a logical counter-argument, I am perfectly willing to revise my position. Therefore, I don't cling to anything.

VDT said...

Dale "clings" (rightly) to his belief that "imagination" and
"speculative ideas" need to be treated with a healthy
dose of skepticism before being co-opted into excuses
for various peoples' agendas ("politically correct" or
otherwise).


Actually, Giulio Prisco was talking about me, not Dale.

By the way, you might want to trim or even delete your signature when posting comments to a blog because, due to its length, it ends up as white noise rather than pertinent insight. Also, the Internet still runs on fossil fuels. The fossil-fuel vice is so deeply embedded that even our email discussions haplessly contribute to the Greenhouse Effect. So let's try to minimize that as much as technically possible. ;)

gp said...

"Bullshit - Please refrain from using profanity (which is very different from an insulting personal attack, which I can tolerate)."

This is Dale's blog. Since he makes abundant use of the same word, I feel free to use it.

Of course, I will stop using it as soon as Dale asks me not to use it. Until that moment, I will keep using it when I think it is appropriate.

Giulio Prisco said...

"However, I do think that those who have come to believe that their enthusiasm for the pseudo-scientific tropes of science-fiction constitutes serious political and social initiatives are, at best, a priority distraction, and, at worst, a credibility danger to socially and politically progressive movements."

This is, I believe, your core point and that of Dale. I understand it perfectly, and accept it as a very valid point.

Two comments though:

Here we really come to a "conceptual impasse no amount of argument can circumvent". I understand and accept the validity of Dale's point, but will not give up my own ideas. I can only say "too bad". We could cooperate to achieve outcomes that we all support, and instead we fight over abstract (yes, abstract) "identity" issues. Why don't we just agree to disagree, let each other keep his "identities", and see if there is something useful that we can do together.

If you guys could limit yourself to stating your point as above (and without bullshit like "pseudo-science"), I and others would still not "comply" but at least we would acknowledge your point of view without responding in kind. Otherwise, we will always answer rudeness with rudeness.

VDT said...

This is, I believe, your core point and that of Dale. I understand it perfectly, and accept it as a very valid point.

Good.

Two comments though: Here we really come to a "conceptual impasse no amount of argument can circumvent". I understand and accept the validity of Dale's point, but will not give up my own ideas. I can only say "too bad". We could cooperate to achieve outcomes that we all support, and instead we fight over abstract (yes, abstract) "identity" issues. Why don't we just agree to disagree, let each other keep his "identities", and see if there is something useful that we can do together.

Well, what Dale is trying to explain to you is that the identity issues are not abstract! As I have explained to you in the past, regardless of the relative success the WTA has achieved or will ever achieve, social and political activists can undermine their credibility and make it harder for themselves to achieve their goals if they make the mistake of identifying themselves as transhumanists, or of being associated with transhumanists when they are not transhumanists themselves.

And I know *from experience* this is a factual obstacle that can prevent many people from working with us.

If you guys could limit yourself to stating your point as above (and without bullshit like "pseudo-science"), I and others would still not "comply" but at least we would acknowledge your point of view without responding in kind. Otherwise, we will always answer rudeness with rudeness.

There is nothing "bullshit"-ic about the ideas we have legitimately described and criticized as pseudo-science. There is nothing rude about calling creation-science or Tiplerian Omega Point theory pseudo-science when it is. It's not the proponents of these ideas we are criticizing but the scientific validity of these ideas. Until you understand this, we will never be able to move on...

ZARZUELAZEN said...

Good points by Dale and Utilitarian:

For myself, the possibility of Singularity/AGI/Hard Take-off etc. is not in dispute.

What *is* in dispute is the idea that one needs to start supporting the SIAI and join an *ism* - transhumanism. I think Dale and Utilitarian are pointing out that there are far more productive things to be doing.

Will 'transhumanism' and SIAI ever be mainstream? Not a hope in hell! And thank goodness for that.

There isn't any 'transhumanist' community. There are a few good people here and there (Giulio, Nick Bostrom, Ben and Bruce seem like good sorts), but for the most part you've just got a small collection of rather nasty high-IQ types on messageboards with over-inflated egos pushing a lot of crackpot ideas.

They're too ego-centric to ever work together effectively, too nasty to ever attract any more than a tiny minority, and just too selfish to ever do much more to help the world than argue on internet messageboards anyway. No, I think 'transhumanism' and SIAI will fade into oblivion.

Giulio Prisco said...

I will have more comments later (hectic day) but I wish to comment on this now: "you had no problem making unambiguous statements of faith like "future magic will resurrect the dead!" based on Tipler's writings".

I do say things like "future magic will resurrect the dead!" on occasions, based on the writings of many thinkers including Tipler.

But such statements are to be interpreted exactly in the same spirit as the "we will win the match" that football players say to each other before a match, or the "we will win the elections" that political activists say to each other before elections.

These are not statements of fact, but rather expressions of hope and declarations of intent.

"We will win the elections" means: "we hope to win the elections, AND we will do our fucking best to actually win the elections".

Since these are linguistic conventions that we all use in everyday language, everyone would understand these sentences as above.

If that makes you happier: I do not BELIEVE that future magic will resurrect the dead (do you really think I am that stupid?). I HOPE that future magic will resurrect the dead, and I INTEND to do my best to contribute.

VDT said...

Giulio Prisco wrote:

I do say things like "future magic will resurrect the dead!" on occasions, based on the writings of many thinkers including Tipler. But such statements are to be interpreted exactly in the same spirit as the "we will win the match" that football players say to each other before a match, or the "we will win the elections" that political activists say to each other before elections. These are not statements of fact, but rather expressions of hope and declarations of intent. "We will win the elections" means: "we hope to win the elections, AND we will do our fucking best to actually win the elections". Since these are linguistic conventions that we all use in everyday language, everyone would understand these sentences as above.

LOL

Rationalization in psychology is the process of constructing a logical justification for a decision, behavior or statement that was originally arrived at through a different mental process. What you just said is probably the best example of a rationalization which should be quoted in an encyclopedia for posterity.

There is absolutely nothing you or I can do that has a high probability of contributing to the likelihood that the collapse of the Universe BILLIONS OF YEARS HENCE could create the conditions for the perpetuation of humanity in a simulated reality within a megacomputer, and thus achieve a form of "posthuman godhood"!

That being said, even if your weird claims were "expressions of hope" rather than statements of fact, these expressions, which you admit are based on the writings of many thinkers including Tipler, *ignore* the fact that most of these writings are pieces of pseudoscience, the product of fertile and creative imaginations unhampered by the normal constraints of scientific and philosophical discipline.

http://www.nature.com/nature/journal/v371/n6493/pdf/371115a0.pdf

If that makes you happier: I do not BELIEVE that future magic will resurrect the dead (do you really think I am that stupid?). I HOPE that future magic will resurrect the dead, and I INTEND to do my best to contribute.

First, please don't ask me what I really think; otherwise you may not like the answer. ;)

Second, it doesn't matter what makes me happy. However, it does concern me when someone who is a spokesperson of a movement I am involved in chooses language that makes him sound like a crank and therefore makes the rest of us look like cranks due to guilt by association...

GP said...

"First, please don't ask me what I really think; otherwise you may not like the answer. ;) Second, it doesn't matter what makes me happy. However, it does concern me when someone who is a spokesperson of a movement I am involved in chooses language that makes him sound like a crank and therefore makes the rest of us look like cranks due to guilt by association..."

And this is what happens when one tries to explain his ideas, calmly and without aggressive or offensive words...

... to an aggressive and intolerant thought-cop like this "gentleman" (irony very much intended).

De Thezier, it is out of respect for Dale that I will refrain from saying what I think.

Over, I have no time to waste with you.

GP said...

Dale, I have really enjoyed our discussion so far and look forward to continuing it, perhaps by private mail.

I am sure you will understand that, after trying to have a calm argument and receiving only more aggressive and rude words from Mr. De Thezier, I do not intend to continue this discussion here.

Best,
Giulio

VDT said...

And this is what happens when one tries to explain his ideas, calmly and without aggressive or offensive words...

Give me a break! You consistently increase the tension of the debate by using profanity like "bullshit" and "chickenshit" to describe the opinions of others and then have the audacity to claim that you are not aggressive or offensive. Even if you didn't use such profanity in your last post, you set the tone of this exchange. Learn to live with the consequences.

... to an aggressive and intolerant thought-cop like this "gentleman" (irony very much intended).

In light of the fact that my alleged "aggressiveness", "intolerance" and "thought-policing" (which is nothing more than harsh yet fair skepticism or personal advice) is quite mild compared to what Dale Carrico dishes out on a daily basis, I take your insults as a compliment.

De Thezier, it is out of respect for Dale that I will refrain from saying what I think. Over, I have no time to waste with you.

Good. I've pretty much said everything I had to say to you on this subject. And you have never been able to provide a coherent reply that deserved my prolonged attention, so I consider this matter closed.

jimf said...

Justice De Thezier wrote:

> [Giulio Prisco wrote:]
>
> > ... to an aggressive and intolerant thought-cop like this "gentleman"
> > (irony very much intended).
>
> In light of the fact that my alleged "aggressiveness", "intolerance"
> and "thought-policing" (which is nothing more than harsh yet fair
> skepticism or personal advice) is quite mild compared to what Dale Carrico
> dishes out on a daily basis, I take your insults as a compliment

I find the appeal to Orwell ("thought-cop") and the claim of intolerance
rather ironic in light of the fact that a discussion like this **simply
could not** take place on any >Hist mailing list (the Extropians', WTA-talk,
or SL4) or >Hist-identified blog (Accelerating Future? You're joking!)

I was myself moderated off the WTA-talk list just a few months ago.
Giulio Prisco was one of the moderators who publicly said, in
effect, "I vote we dump him."

If casual, thoughtless censorship (no more troubling, it would
seem, than scratching an itch) isn't "thought-policing",
then I suppose we need the telescreens, black uniforms, truncheons,
and Ministry of Love to make it count.

VDT said...

I forgot to address something Giulio Prisco wrote:

I do say things like "future magic will resurrect the dead!" on occasions, based on the writings of many thinkers including Tipler. But such statements are to be interpreted exactly in the same spirit as the "we will win the match" that football players say to each other before a match, or the "we will win the elections" that political activists say to each other before elections. These are not statements of fact, but rather expressions of hope and declarations of intent. "We will win the elections" mean: "we hope to win the elections, AND we will do our fucking best to actually win the elections". Since these are linguistic conventions that we all use in everyday language, everyone would understand these sentences as above.

If everyone looks carefully at what I wrote earlier, I never argued that Prisco made statements of fact. I said "statements of faith", which I find inappropriate for reasons I explained earlier.

The End.

VDT said...

jfehlinger wrote:

I was myself moderated off the WTA-talk list just a few months ago. Giulio Prisco was one of the moderators who publicly said, in
effect, "I vote we dump him."


Although this is a bit off-topic, could you remind me what you said and did which triggered your expulsion from the wta-talk list, since I wasn't paying attention at the time?

jimf said...

Justice De Thezier wrote:

> Although this is a bit off-topic, could you remind me
> what you said and did which triggered your expulsion from
> the wta-talk list since I wasn't paying attention at
> the time.

Oh dear. That could end up doubling the length of this
comment thread. Well, let's risk it. ;->

I need to consult my e-mail archive for this.

My first post was on May 25, 2007. Why did I join --
I hadn't read any WTA-talk since the list went
registration-only. Can't remember exactly, but it
must have been I'd heard there was a burst of juicy
snark from Dale. It appears my subscription request
was accepted on May 10, so I must have just lurked
for a couple of weeks. Eventually, though, the
temptation to stir the pot must've been too great.
I did not, um, bow and scrape like a newcomer,
because I assumed I'd be known, at least vaguely,
to a lot of the participants (though not to James
Hughes or, for that matter, Giulio Prisco).

Anyway, this was how I burst on the scene (I am, of
course, sharing the contents of a private mailing list
without permission. Make of that what you will.)

===============================================
Subject: Programmers and politics and the Norman Einstein of the Second Tier
Date: 5/25/2007 1:05 PM

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-May/018231.html
Eugen Leitl wrote:

> On Wed, May 23, 2007 at 07:15:29PM -0700, Eliezer S. Yudkowsky wrote:
>
> > Although the following may sound insulting, it is an actual hypothesis. . .
> > James, did you ever try to learn how to program a computer? If so,
> > did you fail?
>
> It sounds less insulting than idiotic. But it wouldn't be your first.

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-May/018232.html
James J. Hughes wrote:

> > Although the following may sound insulting,
>
> Oh no Eli. Your insinuations that people who disagree are stupid are
> never insulting.

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-May/018251.html
Jef Allbright wrote:

> Eliezer -
>
> I remember trying to talk with you about this concept almost two years
> ago on the outside patio at SAP the night before the Accelerating
> Change 2006 conference. It was both frustrating and amusing. You
> ignored my words, and suggested that I might consider the Way of
> "Beizu-tsukai" and then condescendingly explained the term, without
> any receptivity for the fact that I speak conversational Japanese and
> am quite comfortable with Bayes.
>
> I tried again, saying I would like to share some thinking with you
> about the value of archetypes for building a framework for acquiring
> new concepts. Again you seemed not to hear my words, and
> condescendingly referred me to Robin Dawes' work on heuristics and
> biases -- without any receptivity for the fact that I had already read
> Dawes and actually had an electronic version in my pocket at that
> moment (along with Kahneman and Tversky), whose work I prefer.
>
> Despite the lack of meaningful communication, that conversation was
> most enlightening. ;-) I share some of your concern for cognitive
> gaps, but would suggest that you are quite dangerously blind to some
> of your own.

This exchange reminds me of nothing so much as a spectacle currently
taking place in a parallel universe -- the sniping between
Ken Wilber (the New Age celebrity and guy with a Theory of Everything)
and his fans, ex-fans, and, er, un-fans. (Wilber founded
something called the Integral Institute with the help of a cool
million dollars from Silicon Valley CEO Joe Firmage -- the guy who
had to quit his job in '98 when he went public with his belief
in UFOs).
( http://in.integralinstitute.org/contributor.aspx?id=30
http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/1999/01/09/MN19158.DTL )

The following extracts may shed some light on what's going on here:

http://www.integralworld.net/index.html?overyourhead.html
------------------------------------------------------------------
"Sorry, it's just over your head"
Wilber's response to recent criticism
Jim Chamberlain
. . .
[W]hat [Ken Wilber has] done is play “three cards”, well known
to anyone who has studied cults which sprung up with the mixing
of eastern religion and western psychology in the sixties
and seventies, e.g. Adi Da. These three cards are:

1. The Higher Level Card (i.e. Sorry, it's just over your head).
You're just not smart enough to realize I am smarter than you,
because you're on a lower (less divine) level.

2. The Projection Card (i.e., I know you are, but what am I?).
By criticizing me, you are really just criticizing yourself,
because any problem you see in me is just a projection of a
problem in yourself.

3. The Skillful Means Card (i.e., it was only a test, dickhead).
The most potent card of all! It's not abuse; it's not pathetic or
ridiculous or wrong; it's a crazy-wise teaching. You know, like
Zen stuff. So when I call you a dickhead, it's not because
I'm a dickhead, it's because you have a dickhead-complex that
you need to evolve past, and I'm here to help you see that.

Folks who work in deprogramming poor souls trapped within a
cultic mindset find these three cards the final barriers and
defense mechanisms they must break through to get somebody out
(mentally and physically speaking). Note also, and this is
important, that explicitly playing these cards rules one out,
automatically, from any serious academic or spiritual/religious
circles.

http://www.integralworld.net/index.html?visser12.html
------------------------------------------------------------------
Games Pandits Play
A Reply to Ken Wilber's Raging Rant
Frank Visser
. . .
I have spent long enough in the world of psychospiritual movements
and literature to know when people start playing games. These are
among the most favorite, although there are many more. All of them
are evidenced in Wilber's recent blog posting, and the curious
excitement it generates in the Integral Scene.

-- What the guru does is always good.
Wilber's posting was phrased rudely on purpose: to separate the
green from the yellow.

-- Making fun of critics.
Portray them as loopy, nuts, deranged. They are the worst amateurs,
"non quoting perverts".

-- Feeling persecuted or mistreated. Wilber presents himself as
the lonesome cowboy who gets shot by bandits.

-- Getting rude and offensive before one's own audience.
Here's where the posting gets deranged. But it's way cool! Kick ass!

-- Making ingroup / outgroup distinctions.
Some are included in the fold of I[ntegral]I[nstitute], others,
especially me, are "left out".

-- Praising and blaming/punishing people.
Those who get it are great, those who don't get it are bad,
VERY bad – real morons.

-- Overreacting to critics.
Mild criticism from responsible writers is taken to be a vicious
attack, that requires retaliation.

-- Turn opponents into caricatures.
With minor exceptions, writers who publish at Integral World are
presented as empty heads.

-- Create a common enemy.
The Green Monster Meme is out there to get us. Only the Yellow Castle
is safe.

-- Act out your emotions.
Both Wilber's and Annie MacQuade's postings
are an example of this. Rationalize this as authenticity.

Is this outrageous? Consider it as a test ;-) Those who are "in" will
think so; those who are "out" will recognize these features immediately.

http://www.normaneinsteinbook.com/nechapters/baldnarcissism.php
------------------------------------------------------------------
Geoffrey D. Falk, _"Norman Einstein": The Dis-Integration of Ken Wilber_
Chapter IX, "Bald Narcissism"

On June 8 of 2006, Ken Wilber posted a very revealing entry
on his blog, exhibiting something of a “Wyatt Earp” complex.
(That is, as an underappreciated gunslinger/sheriff/savior,
out to save the Wild West according to his own version of the
Kosmic Law.) From that embarrassing rant:

"In short, it’s just ridiculous to say that I try to hide from
this criticism, I live on it!.... This is what second tier does
automatically anyway, it takes new truths wherever it finds them
and weaves them into larger tapestries. It can’t help doing so!
If I find one, I am ecstatic! So mark this well: Only a first-tier
mentality would even think that one would run away from good
criticism."

Wilber, however, does indeed run away from competent, thorough
criticism like vampires flee from the sunlight. Mark that well.
You do not need to be first-, second-, or nth-tier to see that;
all you need to be able to do is recognize competent research
when you see it, and then note kw’s derogatory response to
(or freezing-out of) that.
. . .
Len Oakes wrote an entire book (_Prophetic Charisma_)
on the typically narcissistic personality structure of cult leaders.
What we are seeing with kw is just par for the course and would,
as Bauwens has noted, have happened eventually even without any
“critical” provocation: Wilber was always an “institute” waiting
to happen.

http://www.geoffreyfalk.com/blog/August2005.asp
------------------------------------------------------------------
Subject: Henry the Eighth I Am, I Am
August 11, 2005
. . .
What never seems to be mentioned in discussions of why people
join cults is the psychological need to belong to a saved
"in" group, and to have the social support of others who are
equally "saved." Personally, I am convinced that the average human
being can talk himself into believing almost anything if doing
so will get him into the "in" group, even without any
"cultish brainwashing." After all, when even "love bombing" and
peer pressure are quoted as techniques of "brainwashing" or
mind control, it is painfully obvious that very little persuasion
is actually needed in order to push the average person
"over the edge," into believing whatever will satisfy his
"belongingness needs."

Subject: Rainy Day People
August 7, 2005
. . .
As critics of the Ayn Rand cult — the former haunt of Wilber's good friend,
Nathaniel Branden — have noted, "when people identify too closely with
their system of beliefs, they have no choice but defend them tooth
and nail from any hint of cognitive dissonance." That applies to integral
beliefs and heroes just as surely as it does to Rand's Objectivist ones.
It applies to groups of skeptics and scientists, too, except that the
proper application of the scientific method works to eventually sort
fact from fiction, limiting the length of time through which one can
fool oneself.

"...Although, perhaps I should mention that I am at the center of
the vanguard of the greatest social transformation in the history
of humankind ... using [my] Zen sword of prajna to cut off the
heads of critics so staggeringly **little** that [I have] to
slow down about 10-fold just to see them."
-- Ken Wilber
( http://kheper.net/topics/Wilber/Wilbers_rant.html )

Oh, BTW, I'm afraid Mr. Wilber is somewhat scornful of
at least one of the transhumanists' ambitions:
http://www.strippingthegurus.com/stgsamplechapters/dalai.asp
------------------------------------------------------------------
Geoffrey D. Falk, _Stripping the Gurus: Sex, Violence,
Abuse and Enlightenment_
Chapter XXII, "Hello, Dalai!"
. . .
Interestingly, Ken Wilber (2001b) offered his own opinion on a
very closely related subject to the above reincarnational
suggestions:

"[T]his whole notion that consciousness can be downloaded into
microchips comes mostly from geeky adolescent males who can’t
get laid and stay up all hours of the night staring into a
computer screen, dissociating, abstracting, dissolved in
disembodied thinking."

;->
===============================================

This was, I'll admit, provocative.

I seem to have made approximately 60 posts over the next
three weeks or so, and to have been bounced around June 15.

In the meantime, there was a fair amount of sniping with some
of the, um, more defensive folks; e.g.,

===============================================
Subject: Enturbulated MEST (Re: Waltzing with Matilda)
Date: 6/6/2007 9:56 PM

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018691.html
Michael Anissimov wrote:

> [James J. Hughes wrote:]
>
> > Yes, the idea that the messianic leader is so wise he can't be taught
> > anything by mortal men (especially hide-bound scholars and experts too
> > conventional to understand the startling nature of the revelation) is a
> > common feature of religious, and expecially millennial, movements. We
> > need to be on guard against that as well.
>
> Not-so-veiled reference to Eliezer Yudkowsky, but considering that no
> one believes he "can't be taught by mortal men" your gesturings are
> towards no one in reality, unless you had someone else in mind. If
> so, please spell out their name into your email window.

Many penny-ante (and not so penny-ante) gurus and cult leaders
have paraded such affectations.

L. Ron Hubbard, for one.

"Every Why has a Who, or a group of Whos. They become the target
of the handlings. If it is an in-the-organization eval, the Who
might be the top executive of a section. (When in-the-org, the why
will usually be some non-application of Hubbard technology. It is
NEVER NEVER NEVER due to the application of Hubbard. Never!
Not in the org or outside of it.)"


A more modern example:

"Keith Raniere says he conceptualized a practice called 'Rational Inquiry'
at the age of 12 while reading _Second Foundation_ by Isaac Asimov.

The premise of the science fiction series is that a mathematician forecasts
the end of civilization and devises a plan to shorten the period of barbarity
before a new civilization is established.

Rational Inquiry, a formula for analyzing and optimizing how the mind
handles data, as Raniere describes it, is the basis for NXIVM
(pronounced NEX-ee-um), a multimillion-dollar international company. . ."

http://www.rickross.com/reference/esp/esp32.html


"Keith Raniere's devoted followers say he is one of the
smartest and most ethical people alive. They describe
him as a soft-spoken, humble genius who can diagnose
societal ills with remarkable clarity. . .

His teachings are mysterious, filled with self-serving
and impenetrable jargon about ethics and values, and defined
by a blind-ambition ethos akin to that of the driven
characters in an Ayn Rand novel. His shtick: Make your own
self-interest paramount, don't be motivated by what other
people want and avoid 'parasites' (his label for people
who need help); only by doing this can you be true to
yourself and truly 'ethical.' The flip side, of course,
is that this worldview discredits virtues like charity,
teamwork and compassion--but maybe we just don't get it."

http://www.rickross.com/reference/esp/esp31.html


It turns out that one of Raniere's claims to fame is his (self-allegedly)
high IQ and prodigious childhood accomplishments:

--------------

He spoke in full sentences by the age of one [and]
was reading by the age of two

At the age of eleven, he was an Eastern Coast Judo
Champion.

At age 12, he taught himself high school mathematics
in less than a day and taught himself three years
of college mathematics by age 13.

He plays many musical instruments and taught himself
to play piano at a concert level by age 12

He was entered in the Guinness Book of World Records
for “Highest IQ” in 1989.

He has been noted as one of top three problem-solvers
in the world.

He was a millionaire by the age of 30 and worth
$50 million by the age of 32.

--------------

http://keithraniere.com/
http://espian.net/topiq.html


> As a youth, I sometimes aspired to be a messianic leader, but it
> turned out I wasn't quite smart or charismatic enough. Here's to
> youthful ambitions, in any case.

You need to learn The Voice. ;->


Tone 40:
Intention without reservation or limit. . .

The top of the emotional tone scale, seen as a godlike state of
command and control of others.

Tone Forty Command, an order given at Tone Forty, and therefore
filled with mystical OT, Operating Thetan intention that must be
instantly obeyed. [who came first, L. Ron Hubbard, or
Frank Herbert? Bene Gesserit?]


===============================================
Subject: Open those eyes, little brother! (Re: Enturbulated MEST)
Date: 6/7/2007 11:15 AM

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018706.html
Michael Anissimov wrote:

> I understand all these messianic leader examples, but in my opinion,
> transhumanists are, in fact, too smart and critical of such things to
> ever let a messianic leader of any type emerge among us. . .
> I don't think we have a problem of being entranced by an actual human being.
> Transhumanists are *so* paranoid about such things, I just don't see
> us taking anyone who bills themselves as a messiah very seriously.

Bagheera replies:

Mowgli, sweetheart, I hate to have to be the one to
break it to you. . .

"You're soaking in it!"

-- Madge the Manicurist, in the old Palmolive TV commercials


===============================================
Subject: Rodney Dangerfield sez (Re: Open those eyes, little brother!)
Date: 6/7/2007 12:37 PM

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018710.html
Michael Anissimov wrote:

> This is totally disrespectful, from the subject line to the quotes to
> the tone. . .

Tone 40, my boy, Tone 40!

> Disrespecting a fellow human being in this way only brings shame to
> yourself.

Ah, unless, presumably, it's done for a Higher Purpose.
Or when der Fuhrer becomes justifiably angry with Lesser Minds (TM).

Geoffrey Falk, _Stripping the Gurus_
Chapter XXI, "Sometimes I Feel Like a God"
(Andrew Cohen)

--------------------------------------------------
It is easy to show, via the same contextual comparison method
which we have utilized for previous “crazy wisdom” practitioners,
that [Andrew] Cohen’s reported rude behavior, like Adi Da’s
and Trungpa’s, apparently lacks any wise or noble basis.

For example, consider that in 1997 an Amsterdam newspaper printed
a generally complimentary review of a lecture there by Cohen.
The piece ended with the ironic but nevertheless fairly innocent
observation that, although the guru had his students shave their
heads, Cohen’s own hair was well coiffed.

When that article was read to Andrew in English, Cohen reportedly
“shows no response until those last lines. Then he pulls a face”:

“What a bastard, that interviewer. He seemed like such a
nice guy. Call him up Harry! Tell him he’s a jerk.”

When Harry sensibly resists burning that PR bridge, Cohen apparently
shoots back:

"He’s an incompetent journalist. Then just tell him he’s
no good at his profession (in van der Braak,
[_Enlightenment Blues_,] 2003)."

If the journalist in question had been a formal disciple of Andrew’s,
everyone involved would have had no difficulty at all in rationalizing
Cohen’s reported temper as being a “skillful means.” That is, his
rumored outburst would have been meant only to awaken the scribe from
his egoic sleep. That hypothetical situation, however, is not at all
the case. We should therefore not credit Cohen’s reported response,
at such absolutely minimal provocation, as being anything more than
infantile. Further, we must take alleged eruptions such as that as
forming the “baseline” for the man’s behavior, against which all
other potentially “skillful means” are to be judged.

My own considered opinion is that when the baseline of such “noise”
is subtracted from Cohen’s reported behaviors in the guru-disciple
context, there is nothing at all left to be regarded as a
“skillful means” of awakening others in that.

===============================================

And some back channel, with the list owner:

===============================================
Subject: Chilling over and out
Date: 6/7/2007 2:53 PM

Dr. Hughes,
You wrote:

> Jfehlinger - I'm in sympathy with a lot of your posts, and the material
> you quote is fascinating, but having a conversation via quotes is a
> little off-putting and prone to misunderstanding.

If you're saying (gently) that my posting style is unsuitable
for the list, then I might as well stop posting, because I can't
change it and still say what I want to say, and if I can't
say what I want to say, I might as well not say anything.

You needn't feel too bad -- while I got away with it for a couple
of years back in 2000 and 2001 on the predecessor of what is now
Extropy-chat, I got bounced off the Yahoo Classic Extropians
last summer for exactly the same reason -- the length of my posts.
I had been invited to join months earlier by Eugen Leitl, and
lurked for a long time, but got unplugged about a week after
my first post. It was actually a cascade -- Perry Metzger
complained about the quotes, which "woke up" list owner Russell
Whitaker who said, basically, "Who the hell are you, and what
are you doing here, and vat are your principles =Ayn Rand accent=?"
So I told him my principles (and took a swipe at Ayn Rand along
the way, and also mentioned Dale Carrico) which caused Samantha
Atkins to **screech** in protest, which led to my being unplugged.
Actually, I got a brief reprieve when Whitaker repented of
acting hastily and gave me "another chance", but I managed to
irritate him almost instantly, and he **ordered** me to stop
using emoticons in my posts (;-> ;->). So I replied, basically,
"f*ck you", and **then** he pulled the plug for good. ;->

Well, if you want to know what axes I'm grinding -- here's a link
you can follow (I see you're on Orkut, so all you have to do is
sign in):
http://www.orkut.com/Community.aspx?cmm=38810

That was a core dump, mostly from 2004. I squirrelled it away
deliberately on Orkut because I wasn't interested in getting
hate mail. It's been, as I expected, largely ignored.

Here are a few updated highlights:

I think the legacy of Ayn Rand exerts a malevolent influence on
the >H community. Hell, she even distorts the discourse
surrounding AI (among the >Hists, that is, and some of their
SF fellow-travellers, **not** any more among serious researchers).

I believe Eliezer Yudkowsky is what Sam Vaknin calls a "malignant
narcissist" (though I've never dared to come out and say so
publicly, I should **hope** that people know bloody well who
I'm talking about when I mention gurus and narcissists). He
alone has exerted a baleful influence on the >H community
since he burst on the scene in '96 (though he is, ironically,
the reason I'm here). I don't base my "diagnosis" on his
public writing alone (though I think it's pretty obvious even
there) -- I've had some unsettling personal (via e-mail) interactions
with Yudkowsky, going back to 2001 - 2003, and he is one nasty dude.
Mike LaTorra has some more detailed information about that (he
backchannelled me when I started posting to WTA-talk); if you're interested,
tell him I said it's OK to forward anything I sent him (and you
can forward him this e-mail if you want).

I believe that Dale Carrico's concerns about the authoritarian
and right-wing influence of the >Hists at large are absolutely
correct. It has a lot to do with the guru-True Believer dynamic
(which is always inherently authoritarian), with the apocalyptic
religious overtones of the movement, **and** with the narcissism
characteristic of the "congregation" (as it were) as well as
the leaders.

Something I find utterly remarkable is how much nastiness and
attitude Eliezer gets away with on public mailing lists **without
ever being moderated**. His sock-puppet Anissimov never
acknowledges publicly that his Fuhrer is a bully, but Anissimov's
own strategy of aggrievedly accusing a critic of "disrespect" worked
to silence **me**, on this occasion. Well, good for him -- that's
what PR directors are for, I guess (and lawyers, when SIAI has
more money).

I don't think, BTW, that Michael Anissimov is ever **not** wearing
his SIAI or ImmInst PR/fund-raiser hats when he's posting on
mailing lists (or even replying to strangers via e-mail).
His outrage just now, I believe (I might be wrong, but I strongly
suspect) was simply an attitude he struck in order to get
a desired result. And he got it!

So, anyway -- it was fun, and ta ta!

Jim F.

===============================================

Subject: RE: Chilling over and out
Date: 6/7/2007 3:32 PM

I completely agree with your posts, and found the historical and
literary references excellent.

I've also had unpleasant interactions with Yudkowsky and his acolytes
going back five years now.

I keep getting assured that they are maturing and broadening beyond
their insular nonsense, and see no evidence of it. The fact that they
have been trumpeted for their relative fundraising success (basically
based on one philanthropic donor) impresses me as much as the
fund-raising success of any religious group.

Which is to say, don't leave the list on my account. I just need to jump
in and remind everyone not to flame on a regular basis, and we had
definitely hit the sensitive buttons on the FAI devotees which suggested
another ramp-up in heat.

J.

===============================================

Subject: Thanks!
Date: 6/7/2007 3:37 PM

> Which is to say, don't leave the list on my account. I just need to jump
> in and remind everyone not to flame on a regular basis, and we had
> definitely hit the sensitive buttons on the FAI devotees which suggested
> another ramp-up in heat.

That's cool. Thanks for the vote of confidence!

Jim F.

===============================================

Subject: Ratcheting back on length, volume
Date: 6/14/2007

Again, love the posts.

But find them pretty long, and still the literary quotes are probably
annoyingly oblique for most.

Can we ratchet back a little?

J.

===============================================

Subject: Re: Ratcheting back on length, volume
Date: 6/14/2007 5:09 PM

> Again, love the posts.
>
> But find them pretty long, and still the literary
> quotes are probably annoyingly oblique for most.
>
> Can we ratchet back a little?

Dr. Hughes,

Warning (number two) acknowledged.

It is not my intention to be rude to you, but neither
do I intend to be diplomatic.

You are, I presume, the owner of this list and have the
final say over who may subscribe and post to it.

If I continue to post, I do not intend to count characters
or worry whether or not I might be stepping over some
ill-defined limit. Therefore, I say this in advance:
if the frequency and/or content (subject matter, length,
or included quotes) of my posts as exemplified by my
contributions over the last two weeks is not, in your
opinion, suitable to the WTA-talk list, then since I
have no plans to change my posting habits and/or style,
you might as well save yourself any more trouble and
pull the plug on me now.

I suspect a number of list participants will thank you
for it.

One other thing -- when you do ban me, please make a public
announcement explaining why. You can even include this
letter, if you like (please include **all** of it if you
do).

Thanks.

Jim F.

===============================================

Subject: GCU 'Grey Area' -- a.k.a. The Meat F*cker (Re: Throttling Jim)
Date: 6/15/2007 1:20 AM

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018931.html
James J. Hughes wrote:

> I find off-putting Jim's refusal to acknowledge my friendly
> request to throttle back a little. This is one
> of those grey areas.

Yes, well, I suppose I can put up with existing in a
"grey area" for my tenure on this list, if nobody has
the authority (or the backbone) to issue a verdict of
White. Or Black.

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018940.html
Giulio Prisco wrote:

> I, and many others, do not agree with the content of most JF's posts.

"Many others" as in "one, two, three, many?" You got a list?
I suppose I should be flattered that you bother to **read** my
posts. Quite frankly, I do not bother to read yours.

> But I would not support banning him just for that.

Oh, well, let's hear it for diversity of opinion!
We no have cult, here.

> However many list members find annoying his habit of filling his posts
> with endless quotes. If he does not wish to stop doing so, I would
> support banning him.

As I indicated to Dr. J., I have **no** intention to alter my posting
style in any way, or to worry about the frequency of my posts.

Take it or leave it, folks!

My ass-kissing days with this crowd are **long** over.

===============================================

Subject: Re: GCU 'Grey Area' -- a.k.a. The Meat F*cker (Re: Throttling Jim)
Date: 6/15/2007

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018948.html
Giulio Prisco wrote:

> It may be relevant or not, but you have just added some more useless
> kilobites to my mailbox.

Interesting characterization. How can kilobites, or trilobites,
be relevant (as in "pertinent") and "useless" at the same time?

> The correct and respectful way to point readers to long texts is
> to give a URL.

Not necessarily. The correct and respectful way to maintain long
texts in the context of an archived discussion is to reproduce
the text. Then there are no URLs to go stale.

Curious that you're making such a fuss about a few kbytes of **text**,
in this day and age. This isn't 1987, in case you hadn't noticed.
How many gig is **your** porn collection, I wonder?

> I will propose banning you at the next long post with
> useless quotes.

Useless to whom, exactly? Oh, right -- you're a moderator, ergo
useless to **you** means useless to **everybody**.

> Please consider making your points
> precisely and concisely, and we will get along just fine.

Please consider not reading my posts, and I'll continue not to read yours,
and we'll get along just fine.

In fact, why don't you ask Adrian-Bogdan Morut how mail filters
work, and then you won't get **any** useless bites from me.

> I can see you are serious about maintaining your posting style. As I
> am serious about banning you if you do. So I will formally propose
> banning you next time.

All right, let's get this over with. I've got plenty in my
archive that's both relevant and "useless" (to you). Be
back to you later.

In
http://www.transhumanism.org/mailman/private/wta-talk/2007-June/018946.html
BillK wrote:

> The only way to deal with trolls is to limit your reaction to
> reminding others not to respond to trolls.
>
> When you try to reason with a troll, he wins. When you insult a troll,
> he wins. When you scream at a troll, he wins. The only thing that
> trolls can't handle is being ignored.

That's what I would have thought. On the other hand, I'm not
**exactly** a troll, am I? And I **won't** be ignored by
everybody. I have no doubt that some people are enraged by
that.

===============================================

Subject: META: Moderating Jim
Date: 6/15/2007 8:19 AM

> My ass-kissing days with this crowd are **long** over.

If you can't distinguish my friendly polite back-channel requests from
strong-armed censorship, AND you insist on lobbing in hostile
encyclopedias, then I think you need to be moderated for a while (with
the agreement of my co-moderators of course, on whom this imposes a
burden). Or you could just take a break.

By contrast, although Michael A. is sometimes aggrieved, he is
unfailingly polite, and comparatively concise. I think you would make
your case better if you took his example.

J.

===============================================

Subject: You have been unsubscribed from the wta-talk mailing list
Date: 6/15/2007 8:44 AM

ZARZUELAZEN said...

When are you Singularitarian guys going to realize that whilst Yudkowsky doesn't have all the answers, I (Geddes) do? ;)

If it's robot god-hood you guys want, you need to start funnelling some cash my way so I can develop my MCRT (Mathematico-Cognition Reality Theory)

Cheers

ZARZUELAZEN said...

Jim F wrote:

>I believe Eliezer Yudkowsky is what Sam Vaknin calls a "malignant
narcissist" (though I've never dared to come out and say so
publicly, I should **hope** that people know bloody well who
I'm talking about when I mention gurus and narcissists). He
alone has exerted a baleful influence on the >H community
since he burst on the scene in '96 (though he is, ironically,
the reason I'm here). I don't base my "diagnosis" on his
public writing alone (though I think it's pretty obvious even
there) -- I've had some unsettling personal (via e-mail) interactions
with Yudkowsky, going back to 2001 - 2003, and he is one nasty dude.


No question Jim, I have to agree. I kept trying to give Yudkowsky the benefit of the doubt for as long as I could (because he is undeniably very smart and working on some important things) but it just wasn't worth it. I had to cut the SIAI and co loose, because E.Y is undoubtedly 'one nasty dude'. Unfortunately the lights are on, but no one is home. These people seem to me to be utterly devoid of 'emotional affect'.


But don't take my word for it. Let the SIAI 'true believers' speak for themselves. Have a look at this video of M.Anissimov:

http://www.acceleratingfuture.com/michael/blog/

(Click on the video 'Me At Singularity Summit'.)

Sorry, but something about E. Yudkowsky (and M. Anissimov too) just really really rubs me the wrong way.

jimf said...

Marc Geddes wrote:

> Unfortunately the lights are on, but no one is home. These
> people seem to me to be utterly devoid of 'emotional affect'.

http://health.groups.yahoo.com/group/narcissisticabuse/message/5158
--------------------------------------------------
The Diagnostic and Statistical Manual IV-TR (2000), published by the
American Psychiatric Association (APA), has this to say about narcissists on
pages 714-6 (this is a very partial list):

"(Narcissists are) furious (when not catered to) ... overwork (people)
without regard for the impact on their lives ... (are) contemptuous and
impatient with others ... oblivious to the hurt their remarks may inflict
... the needs, desires, and feelings of others are likely to be viewed (by
the narcissist) disparagingly as signs of weakness or vulnerability ...
(Narcissists display) emotional coldness and lack of reciprocal interest ...
Harshly devalue the contributions of others ... (Narcissists) are arrogant,
haughty, snobbish, disdainful, or patronizing ... React with disdain, rage,
or defiant counterattack ... Interpersonal relationships are typically
impaired due to problems derived from entitlement ... and the relative
disregard for the sensitivities of others ..."

And so on for 3 pages. Sounds like abusive behavior to me. And this is the
DSM - the bible of the mental health profession.

The vast majority of scholars, in numerous studies, have repeatedly reached
the following conclusions:

Not all abusers are narcissists - but ALL narcissists are abusers.

Often, the abuse that narcissists mete out is not intentional. It is merely
owing to their lack of empathy, grandiose fantasies, tendency to
confabulate, etc. The very diagnostic criteria for Narcissistic Personality
Disorder amount to a description of an abuser. Both narcissists - and
abusers - lack empathy, have a low threshold of frustration due to a sense
of entitlement (rage attacks), are "interpersonally exploitative", arrogant
and haughty, and envious.
--------------------------------------------------

BTW, I see **no** evidence that Michael Anissimov fits the above
description. He just works for one.

Roko said...

"Yes, yes, I know, I know, you're all Einstein and Tesla and the Wright Brothers and possibly Ayn Rand, too, all condensed into one radioactively brainy scientastic package"

Ok, so you're putting words into my mouth, which annoys me. At no point did I identify with Objectivism or Ayn Rand.

What I don't think you really get is that imagination is not purely an artistic exercise. Transhuman thinkers speculate. They make inferences which are uncertain. They play with ideas. There is a point to all of this, which is that some of these ideas may be close to the truth. To be honest, I don't think you've really grasped this point, and most of your response to my post is outright mockery, which I expected to leave behind when I left high school.