Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, April 15, 2009

Let's Talk About Cultishness

On the one hand, I find the organizational forms of superlative futurology so ridiculous that I often judge that they demand nothing but ridicule in return. But, on the other hand, I think that the discourses of superlative futurology represent a symptom and reductio of prevailing neoliberal developmental discourse that repays our more serious scrutiny, and I also think that the hyperbolic rhetoric arising out of the sub(cult)ures of superlative futurology is congenial to sensationalist mass media and contributes in ways we should take seriously to the derangement of sensible deliberation on technodevelopmental questions at an important historical moment of disruptive change. So, I regard superlativity as ridiculous but I take it seriously, too. In the moments in which I am impressed most by its ridiculousness I find myself referring to organized sub(cult)ural formations of superlative futurology as "The Robot Cult" and its representatives as "Robot Cultists." How apt is that charge when all is said and done, and just how glib am I being in making it? Let's talk about that a little bit, shall we?

I will take another comment by "Hjalte" that I've upgraded and adapted from the Moot, this time one in which she takes umbrage at some of the insinuations arising from the charge of Robot Cultism, as the occasion for some scattered speculations on the relations of superlative futurology, its organized forms, the sub(cult)ures associated with these, and finally the derisive designation of Robot Cultism itself.

"Hjalte" protests: It is not like I worship the man as if he was the guru in some sort of robot cult. I said particularly: not that he is the first to come up with such ideas. And those other people I refer to is not (just) the rest of the incrowd at SIAI. It is people like Sam Harris and Daniel Dennett, and likely countless other philosophers and neuroscientists of whom I have not heard. (maybe even some of the old Greek philosophers as well, they had moments of good insight). Also I don’t say that anyone possess full knowledge of these issues, though the state of the art may be a little above ”various reactions going on in the brain”.

The "man" in question is would-be guru Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence, which one might describe, together with would-be guru Ray Kurzweil's Singularity U, as something like Robot Cult Ground Zero. Sam Harris, needless to say, isn't a neuroscientist. Daniel Dennet is a philosopher. I liked his book Elbow Room very much, and like him I am a champion of Darwin and enjoy some of the things he writes in championing Darwin himself. It's nice that you like some of the Greeks, as well. Me too. I must say that there is a strange mushy amalgam of bestselling popular science authors and "the new atheism" polemicists with a broad family resemblance, rather than an explicit program exactly, holding them together, mostly involving a rather pointless and hysterical assertion in my view of technical triumphalism through reductionism. I always find myself wishing secularists would go back to reading James and Dewey rather than all this facile reductionism misconstrued as respect for science. This is a brutal oversimplification, but it seems to me, roughly speaking, that in mis-identifying fundamentalism with the humanities, they tend to advocate a reductionism that re-writes science itself in the image of a priestly authoritarianism with too much in common with the very fundamentalisms they claim to disdain (and rightly so).

Anyway, it's easy to see why you would connect the Robot Cultists you cherish to this popular science assembly (some of the authors in which I personally find more or less appealing myself in their proper precinct), and probably in a loose sort of way with the Edge.org folks (I tend to gravitate predictably enough more toward the more progressive and capacious Seed Scienceblogs set myself). This amalgam of insistent scientism -- again, I'm painting with too broad a brush, but you take my point, I'm hoping -- is more or less what the American Ayn Rand enthusiasts of the 60s (also something of a cult, mind you) mutated into, by way of the L5 Society, by the time of the irrational exuberance of the 90s. Wired's libertechian "digerati" and Extropian transhumanism were very much a part of that moment -- to their everlasting embarrassment, one would think. Vinge, Kurzweil, and Yudkowsky either originated together with it or arose out of it (of these three, only Vinge is a figure of lasting significance in my opinion). This fandom-cum-sub(cult)ure hasn't really changed all that much in broad outline over the years, apart from occasional terminological refurbishments and fumigations in the name of organizational PR, since Ed Regis offered his arch ethnography Great Mambo Chicken way back. Brian Alexander's Rapture, written years and years later, is most extraordinary in my view for the lack of change in the futurological cast of characters he discovers, the claims they make, the (lack of) influence they exert in their marginality, and so on.

Be all that as it may, let's have something of a reality check here, shall we?

If you are concerned about software and network security issues (and there are plenty of good reasons to be), you certainly need not join a Robot Cult to work on them, nor need you think of yourself as a "member" of a "movement" that publishes more online manifestos than actually cited scientific papers.

Why would efforts to address software and network security issues impel one into a marginal sub(cult)ure in which one finds a personal identity radically at odds with most of one's peers and what is taken to be a perspective on and place within a highly idiosyncratic version of human history freighted with the tonalities of transcendence and apocalypse?

I don't doubt you when you say that you do not literally worship would-be Robot Cult gurus like Yudkowsky or Kurzweil or Max More or whoever (depending on the particular flavor of superlativity you most invest in personally), but the fact remains that these figures are incredibly marginal to scientific consensus, and you locate yourself very insistently outside that mainstream yourself when theirs are the terms you take up to understand what is possible and important and problematic in the fields of your greatest interest.

The fact that this self-marginalization is typically coupled, among superlative futurologists, with the defensive assertion that you in fact represent a vanguard championing a super-scientificity, even while you actively disdain consensus-scientificity, suggests there are other things afoot in this identity you have assumed, for whatever reasons, than simply a desire to solve software and network security problems.

There are, after all, thousands upon thousands of serious, credentialized, published professionals and students working to solve such problems who have never heard of any of the people you take most seriously and who, upon hearing of them, would laugh their asses off. This possibly should matter to you.

Transhumanism, singularitarianism, techno-immortalism, extropianism, and all the rest might seem to differ a bit from classic cult formations in that they do tolerate and even celebrate dissenting views on the questions that preoccupy their attention. What one notices, however, is that the constellation of problems at issue for them is highly marginal and idiosyncratic yet remains unusually stable, and the disputatious positions assumed in respect to these issues are fairly stable as well.

The "party line" for the Robot Cult is not so much a matter of memorizing a Creed and observing Commandments, but of taking seriously as nobody else on earth does (sometimes by going through the ritual motions of dispute itself) a set of idealized outcomes -- outcomes that would just happen to confer personal "transcendence" on those who are preoccupied with them, namely, superintelligence, superlongevity, and superabundance -- and fixating on a set of "technical" problems (not accepted as priorities in the consensus scientific fields on which these "technical" vocabularies parasitically depend) standing in the way of the realization of those idealized outcomes and the promise of that transcendence.

It is not so much a hard party-line that is policed by the Robot Cult, but a circumscription of debate onto an idiosyncratic set of marginal problems and marginal "technical" vocabularies in the service of superlative transcendentalizing aspirations rather than conventional progressive technodevelopmental aspirations.

This marginality is compensated by the fraught pleasures of a highly defensive sub(cult)ural identification, the sense of being a vanguard rather than an ignoramus or a crank, the sense of gaining a highly simplified explanatory narrative and a location within it as against the ignorance and confusion that likely preceded the conversion experience (or, to be more generous about it, for some, the assumption of the futurological enthusiasm that impelled them into this particular fandom), not to mention the offering up of a tantalizing glimpse and promise of superlative aspirations, however conceptually confused, however technically implausible.

For some, superlativity functions as a straightforward faith-based initiative, and mobilizes the conventional authoritarian organizational circuit of True Believers and would-be Priestly Authorities, while for others it is a self-marginalizing sub(cult)ural enthusiasm more like a fandom. The fandom may be less psychologically damaging and less fundamentalist and less prone to authoritarianism (or not), but it nurtures and mobilizes the worst extremes in organized superlative futurology all the same.

The True Believers and the Fans will all refer just the same to "the movement" and to themselves as "transhumanists" or "singularitarians" or what have you, imagining themselves different sorts of people in consequence of their identification with that movement and with the Movement of History in which it is imagined uniquely to participate along a path to transcendence or apocalypse.

Beyond all that, as I said, superlative futurology also continues to provide an illuminating symptom and clarifyingly extreme variation on prevailing neoliberal developmental discourse as such, which is saturated with reductionisms, determinisms, utopianisms, eugenicisms, and libertopianisms very much like the ones that find their extreme correlates in superlative futurology. It is as both symptom and reductio of neoliberal developmentalism that superlative futurology probably best repays our considered attention.

On their own, the Robot Cultists are a rather clownish collection, even if one should also pay close attention to the ways in which sensationalist media take up their facile and deranging framings of technodevelopmental quandaries to the cost of sense at the worst possible historical moment, and also one should remain vigilant about the organizational life of superlative futurology since even absurd marginal groups of boys with toys who say useful things to incumbent interests while fancying themselves the smartest people in the room and Holders of the Keys of History can do enormous damage if they connect to good funding sources however palpably idiotic their actual views (as witness Nazis and Neocons and all the usual suspects in this dumb dreary disastrous vein).

14 comments:

jimf said...

Dale wrote:

> The "man" in question is [a] would-be guru. . . I always find
> myself wishing secularists would go back to reading James and Dewey
> rather than all this facile reductionism misconstrued as respect
> for science.

And speaking of William James (and cranks) -- from my e-mail archive:

I picked up a copy of William James' _The Varieties of Religious
Experience_ (1902) the other day, and in Lecture I ("Religion and
Neurology"), I found the following quote.

From H. Maudsley, _Natural Causes and Supernatural Seemings_,
1886 (pp. 157, 256):

"What right have we to believe Nature under any obligation to
do her work by means of complete minds only? She may
find an incomplete mind a more suitable instrument for
a particular purpose. It is the work that is done, and the
quality in the worker by which it was done, that is alone
of moment; and it may be no great matter from a cosmical
standpoint, if in other qualities of character he was
singularly defective -- if indeed he were hypocrite,
adulterer, eccentric, or lunatic. . . . Home we come
again, then, to the old and last resort of certitude --
namely the common assent of mankind, or of the
competent by instruction and training among mankind."

A posteriori, of course, it makes no difference whatsoever,
though a priori, if you're trying to pick a horse to bet on,
weeellll.....

James adds, later on in the same chapter:

"Similarly, the nature of genius has been illuminated by
the attempts, of which I already made mention, to
class it with psychopathical phenomena. Borderland
insanity, crankiness, insane temperament, loss of mental
balance, psychopathic degeneration (to use a few
of the many synonyms by which it has been called),
has certain peculiarities and liabilities which, when
combined with a superior quality of intellect in an
individual, make it more probable that he will make
his mark and affect his age, than if his temperament
were less neurotic. There is, of course, no special
affinity between crankiness as such and superior
intellect [footnote: Superior intellect, as Professor Bain
has admirably shown, seems to consist in nothing
so much as in a large development of the faculty
of association by similarity.], for most psychopaths
have feeble intellects, and superior intellects more
commonly have normal nervous systems. But
the psychopathic temperament [*] whatever be the
intellect with which it finds itself paired, often brings
with it ardor and excitability of character. The cranky
person has extraordinary emotional susceptibility.
He is liable to fixed ideas and obsessions. His
conceptions tend to pass immediately into belief
and action; and when he gets a new idea, he has
no rest till he proclaims it, or in some way 'works it off.'
"What shall I think of it?" a common person says
to himself about a vexed question; but in a 'cranky'
mind "What must I do about it?" is the form the question
tends to take. In the autobiography of that high-souled
woman, Mrs. Annie Besant, I read the following passage:
"Plenty of people wish well to any good cause, but
very few care to exert themselves to help it, and still
fewer will risk anything in its support. 'Some one ought
to do it, but why should I?' is the ever re-echoed phrase
of weak-kneed amiability. 'Some one ought to do it, so
why not I?' is the cry of some earnest servant of man,
eagerly forward springing to face some perilous duty.
Between these two sentences lie whole centuries of
moral evolution." True enough! and between these
two sentences lie also the different destinies of the
ordinary sluggard and the psychopathic man. Thus,
when a superior intellect and a psychopathic temperament
coalesce -- as in the endless permutations and combinations
of human faculty, they are bound to coalesce often enough --
in the same individual, we have the best possible
condition for the kind of effective genius that gets into
the biographical dictionaries. Such men do not remain
mere critics and understanders with their intellect."


[*] I don't suppose James is using "psychopathic" in
its modern sense; he no doubt means something more
general, what we would call "neurotic" or "mentally
disturbed".

jimf said...

Dale wrote:

> [O]nly Vinge is a figure of lasting significance in my opinion. . .

He writes damned good SF. _A Fire Upon the Deep_ is terrific
(the interstellar Usenet is extremely entertaining, and the
first few chapters where the Straumli Blight takes off are
as terrifying as anything in SF), and his latest, _Rainbows End_,
shows that he hasn't lost his touch (and indeed, is keeping
one step ahead of the folks who take him [too] seriously).

Eliezer Yudkowsky was living in the "Low Beyond" when I first ran into
him in 1997, and he still lives there. :-/

A singleton star, reddish and dim. . . [A] single planet,
more like a moon. . . The structures on the surface were gone from normal
view, pulverized into regolith across a span of aeons. The treasure was far
underground, beneath a network of passages, in a single room. . .
Information at the quantum density, undamaged. Maybe five billion
years had passed since the archive was lost to the nets.
The curse of the mummy's tomb, a comic image from mankind's own
prehistory, lost before time. They had laughed when they said it, laughed
with joy at the treasure ... and determined to be cautious just the same.
They would live here a year or five, the little company from Straum, the
archaeologist programmers, their families and schools. A year or five would
be enough to handmake the protocols, to skim the top and identify the
treasure's origin in time and space, to learn a secret or two that would
make Straumli Realm rich. . .

But the local net at the High Lab had transcended -- almost without the
humans realizing. The processes that circulated through its nodes were
complex, beyond anything that could live on the computers the humans had
brought. Those feeble devices were now simply front ends to the devices the
recipes suggested. The processes had the potential for self-awareness ...
and occasionally the need. . .

Days passed. For the evil that was growing in the new machines, each
hour was longer than all the time before. Now the newborn was less than an
hour from its great flowering, its safe spread across interstellar spaces.
The local humans could be dispensed with soon. Even now they were an
inconvenience, though an amusing one. Some of them actually thought to
escape. For days they had been packing their children away into coldsleep
and putting them aboard the freighter. "Preparations for departure," was how
they described the move in their planner programs. For days, they had been
refitting the frigate -- behind a mask of transparent lies. Some of the
humans understood that what they had wakened could be the end of them, that
it might be the end of their Straumli Realm. There was precedent for such
disasters, stories of races that had played with fire and had burned for it.
None of them guessed the truth. None of them guessed the honor that had
fallen upon them, that they had changed the future of a thousand million
star systems.

The hours came to minutes, the minutes to seconds. And now each second
was as long as all the time before. The flowering was so close now, so
close. The dominion of five billion years before would be regained, and
**this** time held. Only one thing was missing. . . Thousands of microseconds
were spent (wasted) . . . sorting the trivia... finally spotting one
incredible item:

Inventory: quantum data container, quantity (1), loaded to the frigate
one hundred hours before!

And all the newborn's attention turned upon the fleeing vessels.
Microbes, but suddenly pernicious. . . But it would be another three
seconds before it could make its first ultradrive hop. The new Power
had no weapons on the ground, nothing but a comm laser. . .
[T]he laser was aimed, tuned civilly on the retreating warship's receiver. No
acknowledgment. The humans knew what communication would bring. The laser
light flickered here and there across the hull, lighting smoothness and
inactive sensors. . . Searching, probing. The Power had never
bothered to sabotage the external hull, but that was no problem.
Even this crude machine had thousands of robot sensors
scattered across its surface, reporting status and danger, driving utility
programs. Most were shut down now, the ship fleeing nearly blind. They
thought by not looking that they could be safe.

One more second and the frigate would attain interstellar safety.
The laser flickered on a failure sensor. . . Its interrupts could not
be ignored if the star jump were to succeed. Interrupt honored. Interrupt
handler running, looking out, receiving more light from the laser far
below.... a backdoor into the ship's code, installed when the newborn had
subverted the humans' groundside equipment....

.... and the Power was aboard, with milliseconds to spare. Its agents
-- not even human equivalent on this primitive hardware -- raced through the
ship's automation, shutting down, aborting. There would be no jump. Cameras
in the ship's bridge showed widening of eyes, the beginning of a scream. The
humans knew, to the extent that horror can live in a fraction of a second.
There would be no jump. Yet the ultradrive was already committed. There
would be a jump attempt, without automatic control a doomed one. Less than
five milliseconds till the jump discharge, a mechanical cascade that no
software could finesse. The newborn's agents flitted everywhere across the
ship's computers, futilely attempting a shutdown. Nearly a light-second
away, under the gray rubble at the High Lab, the Power could only watch.
So. The frigate would be destroyed. . .

Frustration. . . Something of significance had died with the frigate,
something from this archive. Memories were dredged from the context,
reconstructed: What was lost might have made the newborn still more
powerful ... but more likely was deadly poison. After all, this Power had
lived once before, then been reduced to nothing. What was lost might have been
the reason. Suspicion. The newborn should not have been so fooled. Not by mere
humans. The newborn convulsed into self-inspection and panic. Yes, there
were blindspots, carefully installed from the beginning, and not by the
humans. Two had been born here. Itself ... and the poison, the reason for
its fall of old. The newborn inspected itself as never before, knowing now
just what to seek. Destroying, purifying, rechecking, searching for copies
of the poison, and destroying again.

Relief. Defeat had been so close, but now ...

. . .

Crypto: 0
As received by: Transceiver Relay03 at Relay
Language path: Samnorsk->Triskweline, SjK:Relay units
From: Straumli Main
Subject: Archive opened in the Low Transcend!
Summary: Our links to the Known Net will be down temporarily
Key phrases: transcend, good news, business opportunities, new archive,
communications problems
Distribution:
Where Are They Now Interest Group, Homo Sapiens Interest Group,
Motley Hatch Administration Group, Transceiver Relay03 at Relay,
Transceiver Windsong at Debley Down, Transceiver Not-for-Long at Shortstop

Date: 11:45:20 Docks Time, 01/09 of Org year 52089
Text of message:

We are proud to announce that a human exploration company from Straumli
Realm has discovered an accessible archive in the Low Transcend. This is not
an announcement of Transcendence or the creation of a new Power. We have in
fact postponed this announcement until we were sure of our property rights
and the safety of the archive. We have installed interfaces which should
make the archive interoperable with standard syntax queries from the Net. In
a few days this access will be made commercially available. (See discussion
of scheduling problems below.)

Because of its safety, intelligibility, and age, this Archive is
remarkable. We believe there is otherwise lost information here about
arbitration management and interrace coordination. We'll send details to the
appropriate news groups. We're very excited about this. Note that no
interaction with the Powers was necessary; no part of Straumli Realm has
transcended.

Now for the bad news: Arbitration and translation schemes have had
unfortunate clenirations[?] with the ridgeway armiphlage[?]. The details
should be amusing to the people in the Communication Threats news group, and
we will report them there later. But for at least the next hundred hours,
all our links (main and minor) to the Known Net will be down. Incoming
messages may be buffered, but no guarantees. No messages can be forwarded.
We regret this inconvenience, and will make up for it very soon!
Physical commerce is in no way affected by these problems. Straumli
Realm continues to welcome tourists and trade.

. . .


Crypto: 0
As received by: Transceiver Relay03 at Relay
Language path: Firetongue->Cloudmark->Triskweline, SjK units
[Firetongue and Cloudmark are High Beyond trade languages.
Only core meaning is rendered by this translation.]
From: Arbitration Arts Corporation at Firecloud Nebula [A High Beyond
military[?] organization. Known age ~100 years]
Subject: Reason for concern
Summary: Three single-system civilizations are apparently destroyed
Key phrases: scale interstellar disasters, scale interstellar warfare?,
Straumli Realm Perversion
Distribution:
War Trackers Interest Group, Threats Interest Group, Homo Sapiens Interest Group

Date: 53.57 days since the fall of Straumli Realm
Text of message:

Recently an obscure civilization announced it had created a new Power
in the Transcend. It then dropped "temporarily" off the Known Net. Since
that time, there have been about a million messages in Threats about the
incident -- plenty of speculations that a Class Two Perversion had been born
-- but no evidence of effects beyond the boundaries of the former "Straumli
Realm".

Arbitration Arts specializes in treckle lansing disputes. As such, we
have few common business interests with natural races or Threats Group. That
may have to change: sixty-five hours ago, we noticed the apparent extinction
of three isolated civilizations in the High Beyond near Straumli Realm. Two
of these were Eye-in-the-U religious probes, and the third was a Pentragian
factory. Previously their main Net link had been Straumli Realm. As such,
they have been off the Net since Straumli dropped, except for occasional
pinging from us.

We diverted three missions to perform fly-throughs. Signal
reconnaissance revealed wideband communication that was more like neural
control than local net traffic. Several new large structures were noted. All
our vessels were destroyed before detailed information could be returned.
Given the background of these settlements, we conclude that this is not the
normal aftermath of a transcending.

These observations are consistent with a Class Two attack from the
Transcend (albeit a secretive one). The most obvious source would be the new
Power constructed by Straumli Realm. We urge special vigilance to all High
Beyond civilizations in this part of the Beyond. We larger ones have little
to fear, but the threat is very clear.

. . .

"[T]he rumors in the Threats newsgroup are true. The Straumers had
a laboratory in the Low Transcend. They were playing with recipes from some
lost archive, and they created a new Power. It appears to be a Class Two
perversion."

The Known Net recorded a Class Two perversion about once a century.
Such Powers had a normal "lifespan" -- about ten years. But they were
explicitly malevolent, and in ten years could do enormous damage. **Poor
Straum**. . .

"In the Transcend, truly sophisticated equipment can operate, devices
substantially smarter than anyone down here. Of course,
almost any economic or military competition can be won by the side with
superior computing resources. Such can be had at the Top of the Beyond and
in the Transcend. Races are always migrating there, hoping to build their
utopias. But what do you do when your new creations may be smarter than you
are? It happens that there are limitless possibilities for disaster, even if
an existing Power does not cause harm. So there are unnumbered recipes for
safely taking advantage of the Transcend. Of course they can't be
effectively examined except in the Transcend. And run on devices of their
own description, the recipes themselves become sentient." . . .

"There are complex things in the archives. None of them is sentient, but some
have the potential, if only some naive young race will believe their promises.
We think that's what happened to Straumli Realm. They were tricked by
documentation that claimed miracles, tricked into building a transcendent
being, a Power -- but one that victimizes sophonts in the Beyond." She
didn't mention how rare such perversion was. The Powers were variously
malevolent, playful, indifferent -- but virtually all of them had better
uses for their time than exterminating cockroaches in the wild.

"Okay, I guess I see. But I get the feeling this is common knowledge.
If it's this deadly, how did the Straumli bunch get taken in?"

"Bad luck and criminal incompetence," the words popped out of her with
surprising force. . . . "Look. Operations in the High Beyond and
in the Transcend are dangerous. Civilizations up there don't last
long, but there will always be people who try. Very few of
the threats are actively evil. What happened to the
Straumers.... They ran across this recipe advertising wondrous treasure.
Quite possibly it had been lying around for millions of years, a little too
risky for other folks to try. You're right, the Straumers knew the dangers."

But it was a classic situation of balancing risks and choosing wrong.
Perhaps a third of Applied Theology was about how to dance near the flame
without getting incinerated. No one knew the details of the Straumli
debacle, but she could guess them from a hundred similar cases:

"So they set up a base in the Transcend at this lost archive -- if
that's what it was. They began implementing the schemes they found. You can
be sure they spent most of their time watching it for signs of deception. No
doubt the recipe was a series of more or less intelligible steps with a
clear takeoff point. The early stages would involve computers and programs
more effective than anything in the Beyond -- but apparently well-behaved."

"And some of these would be near or beyond human complexity. Of course,
the Straumers would know this and try to isolate their creations.
But given a malign and clever design ... it should be no surprise
if the devices leaked onto the lab's local net and distorted the
information there. From then on, the Straumers wouldn't have a chance. The
most cautious staffers would be framed as incompetent. Phantom threats would
be detected, emergency responses demanded. More sophisticated devices would
be built, and with fewer safeguards. Conceivably, the humans were killed or
rewritten before the Perversion even achieved transsapience."

. . .

The Emissary Device shook its head. "Vrinimi Org is very busy right
now, trying to convince me to get off their equipment, trying to screw up
their courage and force me off. They don't believe what I'm telling them" . . .
"See, the Blight is not a Class Two perversion. In the time I have left,
I can only guess what it is.... Something very old, very big. Whatever it
is, I'm being eaten alive." . . . Some thousands of light-years away,
well into the Transcend, a Power was fighting for its life. And all they
saw of it was one man turned into a slobbering lunatic.

-- Vernor Vinge, _A Fire Upon The Deep_

jimf said...

Dale wrote (down below in "Core Breach"):

> [Hjalte wrote:]
>
> > It is, as a wise man once said, simplified humanism.
>
> I don’t happen to agree that the man who said “transhumanism
> is simplified humanism” was the least bit wise, in fact I
> think he is something of a charlatan. . .

Oh, OK, I didn't know who we were talking about here.

> Hjalte says:
>
> Mener du, at transhumanisme er en idé man bør gøre modstand imod?
> Do you think transhumanism is an idea one ought to resist?
>
> Det er jo bare forsimplet humanisme:
> It is just simplified humanism:
>
> http://yudkowsky.net/singularity/simplified

http://deleet.dk/2009/03/29/transhumanisme-eksempel/
(Translated from Danish by Google.)

jimf said...

> Let's Talk About Cultishness

http://leitl.org/docs/public_html/postbiota/sl4/0205/3541.html

The Revolution Refuses To Form a Clique
From: Eliezer S. Yudkowsky
Date: Fri May 03 2002

. . .


"Every now someone on SL4 accuses us of groupthink or accuses me of being
the local guru. Usually this person is also incapable of correct spelling
or structured thinking and gets kicked out for that reason; the ones who can
write a decent post stay and usually learn better after a while."

ZARZUELAZEN said...

New Press Release from Robot Cult Ground Zero!

Michael Anissimov has just published on his blog an interview with the new 'President' of SIAI, whose photo looks like that of a 15-year-old teenage boy.

http://www.acceleratingfuture.com/michael/blog/

Apparently Tyler Emerson (the older, experienced, relatively sensible 'Executive Director') has left SIAI and been replaced by this new baby-faced 'President'.

Meanwhile, old experienced SIAI researchers such as M. Wilson and Marcello Herreshoff have suddenly vanished without a trace, and the new potential employees are described as 'summer undergraduates'.

The new 'President' does say one sensible thing (try not to laugh out loud):

---

AF: Why should someone regard SIAI as a serious contender in AGI?

Vassar: The single biggest reason is that so few people are even working towards AGI. Of those who are, most are cranks of one sort or another.

---

Indeed.

Hjalte said...

There are, after all, thousands upon thousands of serious, credentialized, published professionals and students working to solve such problems who have never heard of any of the people you take most seriously and who, upon hearing of them, would laugh their asses off. This possibly should matter to you.

The fact that very few people take these matters seriously does matter to me, and it worries me. In two ways.
It worries me for the reason you think of, namely that it indeed is evidence for something wrong in the transhumanist agenda, just like the very fact that extremely few scientists take ID seriously is evidence that ID is not good science.

But the reason why these established scientists would laugh their asses off upon hearing what a transhumanist has to say is that the h+ agenda sounds so incredibly silly (unlike the ID case, which may sound reasonable when first encountered, but turns out to be silly upon further thought, and I think most scientists in the field have given it such further thought before dismissing it). The fact that something sounds silly is not proof that it also is silly. And in order to stop taking the h+ guys seriously, I want such a proof (well, not a proof, but at least some more powerful evidence than “most other people think X, therefore X”). In the ID case such a proof exists; if it does in the h+ case, I have not encountered it. The fact that most people, including established scientists, would tend to dismiss such issues out of hand and laugh their asses off without even giving the subject a second thought is my second worry.

The "party line" for the Robot Cult is not so much a matter of memorizing a Creed and observing Commandments, but of taking seriously as nobody else on earth does […] a set of idealized outcomes -- outcomes that would just happen to confer personal "transcendence" on those who are preoccupied with them, namely, superintelligence, superlongevity, and superabundance -- and fixating on a set of "technical" problems […] standing in the way of the realization of those idealized outcomes and the promise of that transcendence.
It is not so much a hard party-line that is policed by the Robot Cult, but a circumscription of debate onto an idiosyncratic set of marginal problems and marginal "technical" vocabularies in the service of superlative transcendentalizing aspirations rather than conventional progressive technodevelopmental aspirations.
Some transhumanists take the subject very seriously, and some of them take it more seriously than is healthy. There I agree. Other transhumanists occasionally read the various h+ websites and blogs, and maybe go so far as to discuss the subject on the web when they have been given an extended Easter holiday, but they do not give the subject much thought in their daily life.
Some of the transhumanists of the first kind speak a little like cult leaders. Others of the first kind speak as they would if they were members of a cult.

Does any of this qualify transhumanism for the status of a cult? Not according to me.
I have never seen a transhumanist being forced by the “gurus” to give up their family and friends because they are of the wrong faith. Nor have I seen a transhumanist work eighteen hours a day in order to gather more money for the “cult”. No shining posthumans from beyond the singularity have granted any of the “gurus” any special knowledge. Etc. etc. (Maybe co-commenter jimf can extend the list with some quotes from that book of his.)
You need not join a cult in order to have a slightly exaggerated and limited sphere of interests; just look at stamp collectors.

even absurd marginal groups of boys with toys who say useful things to incumbent interests while fancying themselves the smartest people in the room and Holders of the Keys of History can do enormous damage if they connect to good funding sources however palpably idiotic their actual views (as witness Nazis and Neocons and all the usual suspects in this dumb dreary disastrous vein).

When the first transhumanist enters the White House (no, I don’t think that will ever happen) the very first thing he (considering the gender distribution among presidents and among transhumanists, it is likely to be a he) will do will definitely be to build a huge army of cyborgs and to put “The Singularity Is Near” at the top of the school curriculum. Then he will name anyone he dislikes “bioconservatives” and “neoluddites”, and hunt them down using the cyborg army. When they are finally in jail, he will torture them using nanobots chewing on the nerve-endings.
When the time for the midterm election arrives, he will reveal that he, in fact, is a Friendly AI, and therefore deserves a lifetime dictatorship, and that lifetime is quite long given that he’ll live forever.
Or maybe not. No, really: what kind of damage do you think a transhumanist would do, in the very unlikely case that such a person entered a position of power?

Jimf writes:
Oh, OK, I didn't know who we were talking about here [...] (Translated from Danish by Google.)

Oh, your google-fu is mighty… I just happen to like that essay…

jimf said...

> What kind of damage do you think a transhumanist would do,
> in the very unlikely case that such a person entered a position
> of power?

Well, 5 years ago (when I took these things a bit
more seriously than I take them now) I wrote:

The "Singularitarian" circus may just be getting started! But
seriously -- if you extrapolate this sort of hysteria
to the worst imaginable cases (something the
Singularitarians seem fond of doing)
then we might expect that:

1. The Singularitarian Party actually turns
into a bastion of anti-technology. The approaches
to AI that -- IMH non-expert opinion -- are likeliest to succeed
(evolutionary, selectionist, emergent) are frantically
demonized as too dangerous to pursue. The most
**plausible** approaches to AI are to be regulated
the way plutonium and anthrax are regulated today, or
at least shouted down among politically-correct
Singularitarians. IOW, the Singularitarian Party arrogates
to itself a role as a sort of proto-Turing Police out
of William Gibson. Move over, Bill Joy! It's very
Vingean too, for that matter -- sounds like the first book
in the "Realtime" trilogy (_The Peace War_).

2. The **approved** approach to AI -- an SIAI-sanctioned
"guaranteed Friendly", "socially responsible" framework
(that seems to be based, in so far as it's coherent at all,
on a Good-Old-Fashioned mechanistic AI faith in
"goals" -- as if we were programming an expert system
in OPS5), which some (more sophisticated?) folks have already
given up on as a dead end and waste of time, is to suck up all
of the money and brainpower that the SL4 "attractor" can
pull in -- for the sake of the human race's safe
survival of the Singularity.

3. Inevitably, there will be heretics and schisms in the
Church of the Singularity. The Pope of Friendliness will
not yield his throne willingly, and the emergence of someone
(Michael Wilson?) bright enough and crazy enough
to become a plausible successor will **undoubtedly**
result in quarrels over the technical fine points of
Friendliness that will escalate into religious wars.

4. In the **absolute worst case** scenario I can imagine,
a genuine lunatic FAI-ite will take up the Unabomber's
tactics, sending packages like the one David Gelernter
got in the mail.
---------------------------------

I was reacting to the following remark by one Michael Wilson,
who was, once upon a time, an insider at the "Singularity
Institute for Artificial Intelligence".


"To my knowledge Eliezer Yudkowsky is the only person that has tackled
these issues [of "Friendliness"] head on and actually made progress in producing
engineering solutions (I've done some very limited original work on low-level
Friendliness structure). Note that Friendliness is a class of advanced
cognitive engineering; not science, not philosophy. We still don't know
that these problems are actually solvable, but recent progress has been
encouraging and we literally have nothing to lose by trying.
I sincerely hope that we can solve these problems, stop Ben Goertzel
and his army of evil clones (I mean emergence-advocating AI researchers :) and
engineer the apotheosis. The universe doesn't care about hope though, so I will
spend the rest of my life doing everything I can to make Friendly AI a
reality. Once you /see/, once you have even an inkling of understanding
the issues involved, you realise that one way or another these are the
Final Days of the human era and if you want yourself or anything else you
care about to survive you'd better get off your ass and start helping.
The only escapes from the inexorable logic of the Singularity are death,
insanity and transcendence."

http://sl4.org/bin/wiki.pl?Starglider
---------------------------------


And seriously, don't you feel the slightest bit queasy in the presence
of somebody who can claim, with a straight face, to be a "perfect
altruist"?

From the archive:

From: [jimf] on 26/04/2006
Subject: Clarity & control ("Yes, dammit, I'm a complete strategic
altruist; you can insert all the little qualifiers you want. . .")

Speaking of explicit and consciously-held goals --
you'll **never**, in contemporary "transhumanist"
circles, escape from the distracting ideological
commitment to the perfectly responsible, perfectly
self-directed, perfectly self-aware mind
(the Ayn Randian baggage that seems to permeate
Extropian and transhumanist discourse).

http://www.nytimes.com/books/first/h/horgan-mind.html
---------------------
Another high-profile Freudophile is Gerald Edelman,
who won a Nobel prize for his work in immunology,
switched later to neuroscience, and now directs
the Neurosciences Institute in La Jolla, California.
Edelman dedicated _Bright Air, Brilliant Fire_, a
popular account of his theory of the mind, to
"two intellectual pioneers, Charles Darwin and
Sigmund Freud. In much wisdom, much sadness."
Edelman remarked in a chapter on the unconscious:

"My late friend, the molecular biologist Jacques Monod,
used to argue vehemently with me about Freud, insisting
that he was unscientific and quite possibly a charlatan.
I took the side that, while perhaps not a scientist in
our sense, Freud was a great intellectual pioneer,
particularly in his views on the unconscious and its
role in behavior. Monod, of stern Huguenot stock, replied,
'I am entirely aware of my motives and entirely responsible
for my actions. They are all conscious.' In exasperation
I once said, 'Jacques, let's put it this way. Everything
Freud said applies to me and none of it to you.'
He replied, 'Exactly, my dear fellow.'"
---------------------


---------------------
When Ayn [Rand] announced proudly, as she often did, 'I can
account for every emotion I have' -- she meant, astonishingly,
that the total contents of her subconscious mind were
instantly available to her conscious mind, that all of her
emotions had resulted from deliberate acts of rational
thought, and that she could name the thinking that
had led her to each feeling. And she maintained that
every human being is able, if he chooses to work at the
job of identifying the source of his emotions, ultimately
to arrive at the same clarity and control.
---------------------
Barbara Branden, _The Passion of Ayn Rand_
pp. 193 - 195


From a transhumanist acquaintance I once
corresponded with:

> Jim, dammit, I really wish you'd start with
> the assumption that I have a superhuman
> self-awareness and understanding of ethics,
> because, dammit, I do.


An interesting (if depressing) exchange between
Eliezer Yudkowsky and Ben Goertzel on SL4 back in January, 2002
(the links are no longer valid):

------------------
http://www.sl4.org/archive/0201/2638.html
Re: Ethical basics
From: ben goertzel (ben@goertzel.org)
Date: Wed Jan 23 2002 - 15:56:16 MST

Realistically, however, there's always going to be a mix
of altruistic and individualistic motivations, in any
one case -- yes, even yours...
------------------
http://www.sl4.org/archive/0201/2639.html
Re: Ethical basics
From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jan 23 2002 - 16:16:57 MST

Sorry, not mine. I make this statement fully understanding the size of
the claim. But if you believe you can provide a counterexample - any case
in, say, the last year, where I acted from a non-altruistic motivation -
then please demonstrate it.
------------------
http://www.sl4.org/archive/0201/2640.html
RE: Ethical basics
From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jan 23 2002 - 19:14:47 MST

Eliezer, given the immense capacity of the human mind
for self-delusion, it is entirely possible for someone
to genuinely believe they're being 100% altruistic even
when it's not the case. Since you know this, how then can
you be so sure that you're being entirely altruistic?

It seems to me that you take a certain pleasure in being
more altruistic than most others. Doesn't this mean that
your apparent altruism is actually partially ego gratification ;>
And if you think you don't take this pleasure, how do you
know you don't do it unconsciously? Unlike a superhuman AI,
"you" (i.e. the conscious, reasoning component of Eli) don't
have anywhere near complete knowledge of your own mind-state...

Yes, this is a silly topic of conversation...
------------------
http://www.sl4.org/archive/0201/2646.html
Re: Ethical basics
From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jan 23 2002 - 21:29:18 MST

> Yes, this is a silly topic of conversation...

Rational altruism? Why would it be? I've often considered
starting a third mailing list devoted solely to that. . .

No offense, Ben, but this is very simple stuff - in fact,
it's right there in the Zen definition of altruism I quoted.
This is a very straightforward trap by comparison with any
of the political-emotion mindtwisters, much less the subtle
emergent phenomena that show up in a pleasure-pain architecture.

I don't take pleasure in being more altruistic than others.
I do take a certain amount of pleasure in the possession and
exercise of my skills; it took an extended effort to acquire them,
I acquired them successfully, and now that I have them,
they're really cool.

As for my incomplete knowledge of my mind-state, I have a lot
of practice dealing with incomplete knowledge of my mind-state -
enough that I have a feel for how incomplete it is, where,
and why. There is a difference between having incomplete knowledge
of something and being completely clueless. . .

I didn't wake up one morning and decide "Gee, I'm entirely
altruistic", or follow any of the other patterns that are the
straightforward and knowable paths into delusive self-overestimation, nor
do I currently exhibit any of the straightforward external signs which are
the distinguishing marks of such a pattern. I know a lot about the way
that the human mind tends to overestimate its own altruism.

I took a couple of years of effort to clean up the major
emotions (ego gratification and so on), after which I was pretty
much entirely altruistic in terms of raw motivations, although
if you'd asked me I would have said something along the lines of:
"Well, of course I'm still learning... there's still probably
all this undiscovered stuff to clean up..." - which there was,
of course; just a different kind of stuff. Anyway, after I in
*retrospect* reached the point of effectively complete
strategic altruism, it took me another couple of years after
that to accumulate enough skill that I could begin to admit
to myself that maybe, just maybe, I'd actually managed to clean
up most of the debris in this particular area.

This started to happen when I learned to describe the reasons why
altruists tend to be honestly self-deprecating about their own altruism,
such as the Bayesian puzzle you describe above. After that, when I
understood not just motivations but also the intuitions used to reason
about motivations, was when I started saying openly that yes, dammit, I'm
a complete strategic altruist; you can insert all the little qualifiers
you want, but at the end of the day I'm still a complete strategic
altruist. . .
------------------
http://www.sl4.org/archive/0201/2649.html
RE: Ethical basics
From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jan 24 2002 - 07:02:42 MST

> > Yes, this is a silly topic of conversation...
>
> Rational altruism? Why would it be? I've often considered
> starting a third mailing list devoted solely to that.

Not rational altruism, but the extended discussion of *your
own personal psyche*, struck me as mildly (yet, I must admit, mildly pleasantly) absurd...

> No offense, Ben, but this is very simple stuff

Of course it is... the simple traps are the hardest to avoid,
even if you think you're avoiding them.

Anyway, there isn't much point to argue on & on about how
altruistic Eli really is, in the depth of his mind. . .

The tricks the mind plays on itself are numerous, deep and
fascinating. And yet all sorts of wonderful people do emerge,
including some fairly (though in my view never completely) altruistic ones...
------------------

Hjalte said...

The "Singularitarian" circus may just be getting started! But seriously -- if you extrapolate this sort of hysteria to the worst imaginable cases (something the
Singularitarians seem fond of doing) then we might expect that:[...]
Hmm. I haven’t thought this through enough, it seems. This scenario is more plausible (and serious) than mine, not that it says a lot.

And seriously, don't you feel the slightest bit queasy in the presence
of somebody who can claim, with a straight face, to be a "perfect
altruist"?
Strictly speaking the answer is yes. But it is not far from madman to genius, and from one who intends to save the world a few personality quirks must be expected.

jimf said...

> But there is not far from madman to genius, and from one who
> intends to save the world a few personality-quirks must be
> expected.

You can certainly expect "personality quirks" from someone who
"intends to save the world." The one thing you might not want
to hold your breath waiting for such a person to do is -- to
actually save the world.

From _Feet of Clay_, by Anthony Storr
http://www.amazon.com/exec/obidos/tg/detail/-/0684834952
-----------------
"The ideas that gurus have, unlike those of scientists
or mathematicians, are not exposed to critical scrutiny,
or subjected to the authority of an established church. They
then seek disciples. Acquiring disciples who wholeheartedly
embrace the guru's system of ideas is the final proof of
his superiority, the confirmation of his phantasies about himself.
Confidence tricksters are convincing because they
have come to believe in their own fictions. Gurus are
convincing because they appear sure that they are right.
They have to believe in their own revelation or else their
whole world collapses. The certainty shown by gurus should,
paradoxically, be the aspect of their behaviour which most
arouses suspicion. There is a reason to think that all gurus
harbour secret doubts as well as convictions, and that is
why they are driven to seek disciples."

"Gurus...offer faiths which are entirely dependent on belief
in the guru himself. Self-surrender to something or someone
who appears more powerful than the individual's weak ego
or will is an essential feature of conversion. People who give
up their independence to a guru's direction feel a similar
sense of relief, but put themselves at greater risk."

"Gurus are isolated people, dependent upon their disciples,
with no possibility of being disciplined by a church or of
being criticized by contemporaries. They are above the law.
The guru usurps the place of God. Whether gurus have suffered from
manic-depressive illness, schizophrenia, or any other form
of recognized, diagnosable mental illness is interesting but
unimportant. What distinguishes gurus from more orthodox
teachers is not their manic-depressive mood swings, not their
thought disorders, not their delusional beliefs, not their
hallucinatory visions, not their mystical states of ecstasy:
it is their narcissism."

"Those who remain narcissistic in adult life retain this
(child's) need to be loved and to be the centre of attention
together with the grandiosity which accompanies it. This is
characteristic of gurus...The need to recruit disciples is
an expression of the guru's need to be loved and his need
to have his beliefs validated; but, although he may seduce
his followers, he remains an isolated figure who does not
usually have any close friends who might criticize him on
equal terms. His status as a guru demands that all his
relationships are de haut en bas, and this is why gurus
have feet of clay."

"The charisma of certainty is a snare which entraps the
child who is latent in us all."

"The majority of mankind want or need some all-embracing
belief system which purports to provide an answer to
life's mysteries...their belief system, which they proclaim
as 'the truth', is (often) incompatible with the beliefs of other
people. One man's faith is another man's delusion."

"Delusions...preserve self-esteem by blaming others; interpret
anomalies of perceptual experience in ways which diminish the
threat of mental chaos; and, when grandiose, give a much needed
injection of self-confidence to a person who might feel isolated
and insignificant. Religious faiths serve similar functions
in the economy of the psyche."

Delusions have been defined as abnormal beliefs held with
absolute conviction; experienced as self-evident truths
usually of great personal significance; not amenable to
reason or modification by experience; whose content is often
fantastic or at best inherently unlikely; and which are not
shared by those of common social and cultural background."

"Faiths are no more amenable to reason than are delusions."

"It is because of this holistic, all-embracing characteristic
that it is just as difficult to argue with religious faith
as it is to argue with paranoid delusions."

"Both sets of beliefs are connected to some extent with the
preservation of self-esteem, with the conviction of being 'special'.
The self-esteem of the ordinary person is closely bound up with
personal relationships...but faith is even more important to
those in whose lives, for whatever reason, affectionate
relationships play little part. Gurus have often been isolated
as children, and tend to be introverted, narcissistic, and more
interested in what goes on in their own minds than in relationships
with others."

"If self-esteem entirely depends upon a private faith or upon
a delusional system, that faith or system is so precious that
it must not be shaken. No one can afford a total loss of self-esteem,
and those who come close to doing so when in the throes of
severe depression often commit suicide."
-----------------

Hjalte said...

I am not here in order to defend everything that man has said and done.
I just try to judge his ideas as ideas independently of rhetoric, personality-quirks, and stuff he wrote on the internet when he was a teenager.
If one tries this on the ideas proposed by someone like L. Ron Hubbard, they will fail miserably (there are no body-thetans). It is not obvious to me that the same is the case with the various ideas proposed by EY, and I do think they have some potential, though I don’t think his project will succeed. Enough about this.

jimf said...

> If one tries this on the ideas proposed by someone like
> L. Ron Hubbard, they will fail miserably (there are no
> body-thetans).

Sez you (not that I'd disagree with you).

Nevertheless, thousands of Scientologists (including Tom Cruise
and John Travolta) see him as the savior of the human race.

Just as the Objectivists saw Ayn Rand as the savior of the human
race.

Just as the Singularitarians (not to mention the man himself)
see the gentleman under discussion as the savior of the
human race.

Yes, we have no body-thetans. We have no Artificial Intelligences
(let alone Superintelligences) either.

> I am not here in order to defend everything that man has said
> and done. I just try to judge his ideas as ideas independently
> of rhetoric, personality-quirks, and stuff he wrote on the internet
> when he was a teenager.

I'm sorry to have to point this out to you, but there isn't much
left when you take away the rhetoric, personality quirks, and, er,
stuff.

> Enough about this.

Why does it keep falling to me to remind other commenters on this
blog that it's not **their** blog?

If you want to rejoin the chorus of Hosannas, take it back to
Accelerating Future.

jimf said...

Speaking of body-thetans, here's something cool.

"Endless celestial sex" eh?

http://www.youtube.com/watch?v=zy0d1HbItOo

jimf said...

http://en.wikipedia.org/wiki/Kolob

In the Latter Day Saint movement, Kolob is a star or planet mentioned. . .
as being nearest to the throne or residence of God. . .

In modern Mormonism, Kolob is a rare topic of discussion within
religious contexts. However, it is periodically a topic of discussion
with Mormon apologetics. The idea also appears within Mormon culture,
including as the subject of a Mormon hymn, and the inspiration for
the planet Kobol within the Battlestar Galactica universe, scripted
by Glen A. Larson, a Mormon.
--------------------------------


And here I thought "Kobol" was an homage to a programming
language. ;->

Unknown said...

"Every now someone on SL4 accuses us of groupthink or accuses me of being
the local guru. Usually this person is also incapable of correct spelling
or structured thinking and gets kicked out for that reason; the ones who can
write a decent post stay and usually learn better after a while."

Yeah? But how about those who are capable of writing catchy, hooky pop songs about you, Big Yud? Hmmmmmmm?