Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, October 26, 2007

A Superlative Schema

In the first piece I wrote critiquing Superlative Technology Discourse a few years ago, "Transformation Not Transcendence," I argued that
It pays to recall that theologians never have been able comfortably to manage the reconciliation of the so-called omnipredicates of an infinite God. Just when they got a handle on the notion of omnipotence, they would find it impinging on omniscience. If nothing else, the capacity to do anything would seem to preclude the knowledge of everything in advance. And of course omnibenevolence never played well with the other predicates. How to reconcile the awful with the knowledge of it and the power to make things otherwise is far from an easy thing, after all… As with God, so too with a humanity become Godlike. Any “posthuman” conditions we should find ourselves in will certainly be, no less than the human ones we find ourselves in now, defined by their finitude. This matters, if for no other reason, because it reminds us that we will never transcend our need of one another.
My point in saying this was to highlight the incoherence in principle of the superlative imaginary, to spotlight what looks to me like the deep fear of finitude and contingency (exacerbated, no doubt, by the general sense that we are all of us caught up in an especially unsettling and unpredictable technoscientific storm-churn) that drives this sort of hysterical transcendental turn, and to propose in its stead a deeper awareness and celebration of our social, political, and cultural inter-dependence with one another to cope with and find meaning in the midst of this change.

Of course, there is no question that no technology, however superlative, could deliver literally omni-predicated capacities, nor is it immediately clear even how these omni-predicates might function as regulative ideals given their basic incoherence (although this sort of incoherence hasn't seemed to keep "realists" from claiming interminably that vacuous word-world correspondences function as regulative ideals governing warranted assertions concerning instrumental truth, so who knows?). Rather like the facile faith of a child who seeks to reconcile belief with sense by imagining an unimaginable God as an old man with a long beard in a stone chair, Superlativity would reconcile the impossible omni-predicated ends to which it aspires with the terms of actual possibility through a comparable domestication: of Omniscience into "Superintelligence," of Omnipotence into "Supercapacitation" (especially in its "super-longevity" or techno-immortalizing variations), of Omnibenevolence into "Superabundance."

In such Superlative Technology Discourses, it will always be the disavowed discourse of the omni-predicated term that mobilizes the passion of Superlative Techno-fixations and Techno-transcendentalisms and organizes the shared identifications at the heart of Sub(cult)ural Futurisms and Futurists. Meanwhile, it will be the disavowed terms of worldly and practical discourses that provide all the substance on which these Superlative discourses finally depend for their actual sense: Superintelligence will have no actual substance apart from Consensus Science and other forms of warranted knowledge and belief, Supercapacitation (especially the superlongevity that is the eventual focus of so much "enhancement" talk) will have no actual substance apart from Consensual Healthcare and other forms of public policy administered by harm-reduction norms, Superabundance will have no actual substance apart from Commonwealth and other forms of public investment and private entrepreneurship in the context of general welfare. In each case a worldly substantial reality -- and a reality substantiated consensually, peer-to-peer, at that -- is instrumentalized, hyper-individualized, de-politicized via Superlativity in the service of a transcendental project re-activating the omni-predicates of the theological imaginary.

As with most fundamentalisms -- that is to say, as with all transcendental projects that redirect their energies to political ends to which they are categorically unsuited -- whenever Superlativity shows the world its Sub(cult)ural "organizational" face, it will be the face of moralizing it shows, driven by the confusion of the work of morals/mores with that of ethics/politics, a misbegotten effort to impose the terms of private-parochial moral or aesthetic perfection onto the terms of public ethics (which formally solicits universal assent to normative prescriptions), politics (which seeks to reconcile the incompatible aspirations of a diversity of peers who share the world), and science (which provisionally attracts consensus to instrumental descriptions).

Very schematically, I am proposing these correlations:

OMNI-PREDICATED THEOLOGICAL / TRANSCENDENTAL DISCOURSE

Omniscience
Omnipotence
Omnibenevolence

SUPER-PREDICATED SUPERLATIVE DISCOURSE

Superintelligence
Supercapacitation (often amounting to Superlongevity)
Superabundance

WORLDLY SUBSTANTIAL (democratizing/p2p) DISCOURSE

Reasonableness -- that is to say, the work and accomplishments of Warranted Beliefs applied in their proper plural precincts, scientific, moral, aesthetic, ethical, political, legal, commercial, etc.
Civitas -- that is to say, the work and accomplishments of Consensual Culture, where culture is presumed to be coextensive with the prosthetic, and health and harm-reduction policy are construed as artifice.
Commonwealth -- that is to say, the work and accomplishments of collaborative problem-solving, public investment, and private entrepreneurship in the context of consensual civitas.

On one hand the Super-Predicated term in a Superlative Technology Discourse always deranges and usually disavows altogether -- while, crucially, nonetheless depending on -- the collaboratively substantiated term in a Worldly Discourse correlated with it, while on the other hand it activates the archive of figures, frames, irrational passions, and idealizations of the Omni-Predicated term in a Transcendental Discourse (usually religious or pan-ideological) correlated with it. The pernicious effects of these shifts are instrumental, ethical, and political in the main, but quite various in their specificities.

That complexity accounts for all the ramifying dimensions of the Superlativity Critique one finds in the texts collected in my Superlative Summary at this point. I would like to think one discerns in my own formulations some sense of what more technoscientifically literate and democratically invested worldly alternatives to Superlativity might look like. In these writings, I try to delineate a perspective organized by a belief in technoethical pluralism, by an insistence on a substantiated rather than vacuous scene of informed, nonduressed consent, by the consensualization of non-normative experimental medicine (as an elaboration of the commitment to a politics of Choice) and the diversity of lifeways arising from these consensual practices, by the ongoing implementation of sustainable, resilient, experimentalist, open, multicultural, cosmopolitan models of civilization, by the celebration and subsidization of peer-to-peer formations of expressivity, criticism, credentialization, and the collaborative solution of shared problems, and, through these values and for them, by a deep commitment to the ongoing democratization of technodevelopmental social struggle -- using technology (including techniques of education, agitation, organization, legislation) to deepen democracy, while using democracy (the nonviolent adjudication of disputes, good accountable representative governance, legible consent to the terms of everyday commerce, collective problem-solving, peer-to-peer, ongoing criticism and creative expressivity) to ensure that technology benefits us all, as Amor Mundi's signature slogan more pithily puts the point.

It should go without saying that there simply is no need to join a marginal Robot Cult as either a True Believer or would-be guru to participate in technodevelopmental social struggle peer-to-peer, nor to indulge in the popular consumer fandoms, digital plutocratic financial and developmental frauds, or pseudo-scientific pop-tech infomercial entertainment of more mainstream futurology. There is no need to assume the perspective of a would-be technocratic elite. There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history." There is nothing gained in claiming to be "pro-technology" or "anti-technology" at a level of generality at which no technologies actually exist. There is nothing gained in forswearing the urgencies of today for an idealized and foreclosed "The Future," nor in dis-identifying with your human peers so as to better identify with imaginary post-human or transhuman ones. There is nothing gained in the consolations of faith when there is so much valuable, actual work to do, when there are so many basic needs to fulfill, when there is so much pleasure and danger in the world of our peers at hand. There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents are exposed now as having no power left but the power to destroy the world and the open futurity altogether.

The Superlative Technology Critique is not finally a critique about technology, after all, because it recognizes that "technology" is functioning as a surrogate term in the discourses it critiques; the evocation of "technology" functions symptomatically in these discourses and sub(cult)ures. The critique of Superlativity is driven first of all by commitments to democracy, diversity, equity, sustainability, and substantiated consent. I venture to add, it is driven by a commitment to basic sanity, sanity understood as a collectively substantiated worldly and present concern itself. The criticisms I seem to be getting come largely from people who would either deny the relevance of my political, social, and cultural emphasis altogether (a denial that likely marks them as unserious as far as I'm concerned) or who disapprove of my political commitment to democracy, my social commitment to commons, and my cultural commitment to planetary polyculture (a disapproval that likely marks them as reactionaries as far as I'm concerned). There is much more for me to say in this vein, and of course I will continue to do so as best I can, and everyone is certainly free and welcome to contribute to or to disdain my project as you will, but I am quite content with the focus my Critique has assumed so far and especially with the enormously revealing responses it seems to generate.

26 comments:

jfehlinger said...

Dale wrote:

> "Any 'posthuman' conditions we should find ourselves in
> will certainly be, no less than the human ones we find
> ourselves in now, defined by their finitude. This matters,
> if for no other reason, because it reminds us that we
> will never transcend our need of one another."
>
> My point in saying this was to highlight the incoherence
> in principle of the superlative imaginary, to spotlight
> what looks to me like the deep fear of finitude and
> contingency. . .


The fact that we can die, that we can be
ill at all, is what perplexes us; the fact
that we now for a moment live and are well
is irrelevant to that perplexity. We need
a life not correlated with death, a health
not liable to illness, a kind of good that
will not perish, a good in fact that flies
beyond the Goods of nature...

This sadness lies at the heart of every
merely positivistic, agnostic, or naturalistic
scheme of philosophy. Let sanguine
healthy-mindedness do its best with its
strange power of living in the moment and
ignoring and forgetting, still the evil
background is really there to be thought
of, and the skull will grin in at the banquet.
In the practical life of the individual,
we know how his whole gloom or glee about
any present fact depends on the remoter
schemes and hopes with which it stands
related. Its significance and framing
give it the chief part of its value. Let
it be known to lead nowhere, and however
agreeable it may be in its immediacy,
its glow and gilding vanish...

The lustre of the present hour is always
borrowed from the background of possibilities
it goes with. Let our common experiences
be enveloped in an eternal moral order; let
our suffering have an immortal significance;
let Heaven smile upon the earth, and deities
pay their visits; let faith and hope be
the atmosphere which man breathes in; -- and
his days pass by with zest; they stir with
prospects, they thrill with remoter values.
Place round them on the contrary the
curdling cold and gloom and absence of all
permanent meaning which for pure naturalism
and the popular science evolutionism of our
time are all that is visible ultimately,
and the thrill stops short, or turns rather
to anxious trembling.

-- William James, _The Varieties of Religious Experience_,
Lectures VI and VII
"The Sick Soul"
( http://www.psywww.com/psyrelig/james/james6.htm )

------------------------

Oh, for Pete's sake. The crypto-religious manner in
which you've defined `immortality' (`can't die, won't die')
makes that `hypothesis' a contradiction in terms for
anything made of parts. In the real world -- any
real world we can conceive, I'd venture to say -- any
conscious being has to be built out of parts, ... [and] ...
that complex organization must be prey to
disruption...

-- Damien Broderick, on the Extropians'
mailing list.

------------------------

"'Thus far, then, I perceive that the great difference between
Elves and Men is in the speed of the end. In this only.
For if you deem that for the Eldar there is no death
ineluctable, you err.

'Now none of us know, though the Valar may know, the future
of Arda, or how long it is ordained to endure. But it
will not endure for ever. It was made by Eru, but He is
not in it. The One only has no limits. Arda, and
Ea itself, must therefore be bounded. You see us,
the Eldar, still in the first ages of our being, and the
end is far off. As maybe among you death may seem to a young
man in his strength; save that we have long years of life
and thought already behind us. But the end will come. That
we all know. And then we must die; we must perish utterly,
it seems, for we belong to Arda (in hroa [body] and fea [soul]).
And beyond that what? "The going out to no return," as you
say; "the uttermost end, the irremediable loss"?

...

'And yet at least ours is slow-footed, you would say?' said
Finrod. 'True. But it is not clear that a foreseen doom
long delayed is in all ways a lighter burden than one that
comes soon'"

-- J. R. R. Tolkien, "Athrabeth Finrod ah Andreth",
in _Morgoth's Ring_, Vol. 10 of _The History of Middle-earth_

------------------------

"What had seemed to us at first the irresistible march of
god-like world-spirits, with all the resources of the universe
in their hands and all eternity before them, was now
gradually revealed in very different guise. The great advance
in mental calibre, and the attainment of communal mentality
throughout the cosmos, had brought a change in the experience
of time. The temporal reach of the mind had been very
greatly extended. The awakened worlds experienced an aeon
as a mere crowded day. They were aware of time's passage
as a man in a canoe might have cognizance of a river which in
its upper reaches is sluggish but subsequently breaks into
rapids and becomes swifter and swifter, till, at no great
distance ahead, it must plunge in a final cataract down
to the sea... Comparing the little respite that remained with
the great work which they passionately desired to accomplish,
namely the full awakening of the cosmical spirit, they saw
that at best there was no time to spare, and that, more
probably, it was already too late to accomplish the task...

The sense of the fated incompleteness of all creatures and
of all their achievements gave... a charm, a sanctity,
as of some short-lived and delicate flower."

-- Olaf Stapledon, _Star Maker_
Chapter X, "A Vision of the Galaxy"

Utilitarian said...

Some thoughts on the 'no X' paragraph:

"There simply is no need to join a marginal Robot Cult as either a True Believer or would-be guru to participate in technodevelopmental social struggle peer-to-peer."
Yep. Although I would be wary of loosening the extension of the term 'Robot Cultist' to apply to anyone (including those who do not show psychosocial indicators of 'cultishness') who holds a view that strong AI is possible, likely to be developed within the century, and extremely relevant for the long-term well-being of humanity.

"There is no need to assume the perspective of a would-be technocratic elite."
Quibble: how else do you understand and combat would-be technocratic elites?

"There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history."
In one sense, but in another there is everything to be gained. Vegetarians think that they make real moral gains by refraining from consuming the flesh of suffering creatures and causing that suffering. The technologically aware ones also hope that their ideas will sweep the world, particularly as in vitro meat reduces the costs of joining them.

"There is nothing gained in claiming to be "pro-technology" or "anti-technology" at a level of generality at which no technologies actually exist."
I strongly agree.


"There is nothing gained in foreswearing the urgencies of today for an idealized and foreclosed future"
This is true but trivial if defined broadly, i.e. to include current investments in basic science that will not pay off for decades as an 'urgency of today.'

One problem I have with some of your critique is the fact that the expected benefits of outcomes in different technodevelopmental struggles today depend critically on our models of the future. For instance, a model of future life expectancy gains is essential to weigh efforts directed towards averting deaths among 5 year olds against efforts protecting 50 year olds. If there is a substantial chance of massive longevity gains (1000+ year healthspans) for the 5 year olds but not the 50 year olds, then that is reason to prefer to avert deaths among the younger group at the margin. Would you reject this reasoning as an instance of allowing the uncertain future to dominate the urgencies of today?

"nor in dis-identifying with your human peers so as to better identify with imaginary post-human ones."
Sure.

"There is nothing gained in the consolations of faith"
I agree that faith doesn't pay off on net, but this does sound contradictory: you gain the consolation! If you can get high on the opiate of the masses, then that's a benefit to be weighed. How about saying that it offers nothing that cannot be gained from engagement with the present?

"when there is so much valuable, actual work to do, when there are so many basic needs to fulfill, when there is so much pleasure and danger in the world of our peers at hand. There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents are exposed now as having no power left but the power to destroy the world and the future altogether."
Umm...that's a pretty relevant power! When the U.S.S.R.'s economic model stood revealed as broken, and it could no longer offer a positive challenge, its nuclear arsenal constituted a damn good reason to talk to it and consider its interests.

Dale Carrico said...

"There is no need to assume the perspective of a would-be technocratic elite."

Quibble: how else do you understand and combat would-be technocratic elites?


I don't agree that one has to be or aspire to be or pretend to be a member of a technocratic elite to combat such elites. The open access of peer-to-peer formations and then the peer-to-peer modes of ongoing substantiation, editing, credentialization, and so on seem to me to be at once more practical, more resilient, and more appealingly democratic than inter-implicated "industrial," "broadcast," "professional" models of knowledge/authority creation and dissemination. I'm resigned to being accused of being an elitist because of my love of words, but the truth is I'm done with elitism, it seems tired and dumb to me.

"There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history."
In one sense, but in another there is everything to be gained. Vegetarians think that they make real moral gains by refraining from consuming the flesh of suffering creatures and causing that suffering.


There are vegetarians who are such for ethical reasons, others for health reasons, others for who knows what reasons -- hankerings after purity, asceticism, whatever. There are vegans, lacto-ovo vegetarians, raw food vegetarians, even a few silly pesco or pollo vegetarians (so-called). I don't know that I agree that there is a vegetarian ideology, and I don't know that I would approve of it were one to successfully coalesce, nor would I be well pleased for just one vegetarian formation to sweep the world at the cost of the others -- formations which people assume for plenty of good reasons, even when their reasons differ from my own reasons for being vegetarian -- and I certainly don't approve of those intellectual vegetarians I have met who do seem to think vegetarianism is a sort of movement with the keys to history in the ideological sense I mean. Keys fit in keyholes, and history doesn't have a keyhole as far as I'm concerned; history is open, unpredictable, dynamic, prone to outbreaks of novelty, unencompassable in singular visions without missing stuff that ends up mattering. Ideologies arise out of the recognition that politics is a struggle among contending stakeholders, but dream of overcoming, synthesizing, hierarchizing that plurality into a singular vision. I think this effort is doomed to failure, ugly in impulse, and stained historically in blood.

One problem I have with some of your critique is the fact that the expected benefits of outcomes in different technodevelopmental struggles today depend critically on our models of the future.

To some extent this is probably an artifact of the accident that so much of my analysis is being delineated in the form of a response to what seems to me an extreme version of futurism. I am not opposed to foresight or deliberation, obviously, but I do insist that the archive of knowledge to which we make recourse grows out of our ongoing engagements in technodevelopmental social struggle (an ongoingness that isn't actually monomaniacally contemporary in the least, but ongoing in a way that foregrounds the present, the emerging, and the proximately upcoming rather than the distant, the ideal, an ongoingness that registers the actual diversity of present stakeholders over the unilateral implementation of some abstract pre-ordained end). Yes, models, yes, imagination, yes, foresight -- but the form must reflect the future as a present arising out of the present, a contingency arising out of the ineradicability of stakeholder diversity, a fragmentation or multiplicity of scenarios reflecting the limits of perspective (which is not to deny the strengths of perspective), and so on.

"There is nothing gained in the consolations of faith"
I agree that faith doesn't pay off on net, but this does sound contradictory: you gain the consolation! If you can get high on the opiate of the masses, then that's a benefit to be weighed.


Yeah, I didn't put that point so well. I'm a crusty atheist myself, but the truth is I'm pretty cheerfully nonjudgmental about religion, spirituality, faithful practice and so on as a general affair. I just translate the religious claims people make into aesthetic terms and they usually make sense to me. But I do strongly disapprove of the substitution of the proper aesthetic and moral work of essentially religious outlooks for instrumental, ethical, or political work to which they are categorically unsuited and which tends to produce terrible effects. That's really the sort of claim I was imagining I was getting at there.

There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents are exposed now as having no power left but the power to destroy the world and the future altogether."

Umm...that's a pretty relevant power! When the U.S.S.R.'s economic model stood revealed as broken, and it could no longer offer a positive challenge, its nuclear arsenal constituted a damn good reason to talk to it and consider its interests.


I'm not advocating sticking our heads in the sand, I'm saying two wrongs don't make a right. Incumbency will either be defeated or it will destroy the world, in my view -- there is nothing to be gained in anything but the short term even in cynically playing up to the corporate-militarists at this point.

The point about the Soviet Union is well taken -- I worry enormously about loose nukes and wish the United States were devoted to multilateral treaties and well-funded international regulatory/monitoring regimes to check proliferation rather than making things worse as usual. By the way, I happen to think the economic model of the USSR was sufficiently continuous with our own in key respects (big extractive industrialism, funded by wasteful militarism, centralized, hierarchical, and bureaucratic, a broadcast-model mass culture too content with the semblance rather than the substance of consent, and so on) that it revealed more than one broken society. They just happened to go broke, as well as being broken, before we did. Now it's our turn.

Au fond, I'm pinning my hopes on p2p democratization and the magnificent planetary carrot of consensual rejuvenation and the stick of catastrophic climate change to nudge us into saving ourselves just in the nick of time.

AnneC said...

Weird! I swear, today while walking to the bathroom at work, it suddenly dawned on me that part of the reason I'm so bothered by "optimality" language is because it reminds me so much of the arguments I've heard so many fundamentalists make for the existence of the "omni god". And then I get home and read this. Makes a lot of sense.

However, I'd not necessarily parallel "superlongevity" with "omnipotence" -- longevity is certainly a kind of power, but I get the impression that the unapologetically superlative are interested in something more like -- well, like actual omnipotence, or like invulnerability. Someone can, after all, end up being very long-lived, but still quite vulnerable to things like being crushed by a truck.

Another comment: you say ...There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history."

I think I'm somewhat semantically confused as to what comprises "identifying with an ideology".

Sometimes I find that my views naturally align with particular strains of subcultural politics that pop up now and again, but such an alignment is always coincidental rather than deliberate.

When I came across the term "transhumanism", I pretty much just saw it as, "sort of like secular humanism, but with life extension". And then I came across a bunch of other people who called themselves "transhumanists", who seemed to be interested in discussing a lot of the same things I was interested in discussing. So I just figured, "Ah, I guess I'm a transhumanist, then." In a very casual, offhand, manner.

Now, a few years later, I do sometimes find myself thinking that the term "transhumanist" is too baggage-laden and incoherent to be all that useful in the grand scheme of the real world. But at the same time, the term has so little power over what I actually think that I'm compelled to keep associating with it for now -- it's not like I would somehow have different opinions on longevity and biotech stuff if the word "transhumanism" had never been coined, after all!

If I find a particular position to be consistent with the principles I hold, I don't bother running it through a "subcultural filter" before adopting it, nor do I hesitate to reject things I'm told are consistent with my supposed affiliations if I don't happen to agree with those things.

If not being emotionally invested in the subculture makes me "not a real transhumanist" (who gets to decide that, anyway?), so be it. But enough people have gotten a "transhumanist" flavor from my writing that I'm guessing there's still some room there for some degree of ideological diversity.

Either that, or my social oblivion is showing.

jfehlinger said...

"Utilitarian" wrote:

> I would be wary of loosening the extension of the term
> 'Robot Cultist' to apply to anyone (including those who do
> not show psychosocial indicators of 'cultishness') who
> holds a view that strong AI is possible, likely to be
> developed within the century, and extremely relevant for
> the long-term well-being of humanity.

You know, there have been three distinct phases in the conceptualization
of that relevance in the case of the "strong AI" advocate whose
voice has been the most powerful siren-call to "activism"
among the on-line >Hists (as Michael Anissimov can well
attest) over the past decade.

The first stage, 10 years ago, portrayed the relevance of AI
not in terms of the "long-term well-being of humanity" but
in terms of the long-term development of intelligence in our
corner of the universe. In this characterization, humanity's lease
was seen as likely to be coming to an end, one way or
another, and sooner rather than later. Out of the chaos
of technological transformation, and the death-agony of
the human race, there was the potential for greater-than-human
(and perhaps better-than-human, in some moral sense)
intelligence to be born -- the next stage in the evolution
of intelligence on this planet. Our duty to the future
of sentience, in this scenario, was to keep things going
long enough to accomplish that birth. "It is the goal of
evolution", as Gwyllm says to Cathy at the end of
"The Sixth Finger". It was seen as a race against time.

I was **very** attracted by this mythos. It had a noble
impartiality and a kind of Stapledonian grandeur. And I
found it plausible enough.

The second stage, a few years later, seemed to me to have
lost its nobility, its grandeur, and any claim to
plausibility it may once have had. In this scenario, AI was
seen as the deus-ex-machina that would **solve** the problems
threatening the extinction of the human race. Not only
that, but there was a sudden shift in emphasis toward the
personal immortality hoped for by the cryonicists (whom I
hadn't paid much attention to up to that point). Suddenly
the moral imperative became: every second the Singularity
(i.e., strong AI) is delayed equates to the death
(the **murder**, if we don't do our duty and create that
software) of X human lives. This marked the shift to
the Twilight Zone for me. Also, I was reminded of something
that the irascible Bobynin said in Solzhenitsyn's _The
First Circle_: "What d’you think science is - a magic wand that
you just have to wave to get what you want?"

The third stage, it seemed to me, has moved even further
away from reality, if that's possible. Now the primary threat
(the "existential" threat) to the human race isn't anything mundane
like nuclear weapons or climate change, it is AI **itself**
(along with grey goo and whatever else might come in Nanosanta's
stocking). Now the imperative becomes how to **mathematically
prove** the "Friendliness" of AI, all the while discouraging
"irresponsible" AI researchers (like Michael Wilson, before
he saw the light) from unleashing the apocalypse. By this
point, my disappointment had turned to outright scorn.

It also seems a little too convenient to me that the claim
"I know how to do it but I can't tell you until I can be
sure it's safe" relieves the pressure of actually
having to produce anything tangible.

In addition, the trajectory may share something in common with
the following description in Kramer & Alstad's _The Guru
Papers_:

"A time inevitably comes when the popularity and power of
the group plateaus and then begins to wane. Eventually it
becomes obvious that the guru is not going to take over the
world, at least not in the immediate future. When the
realization comes that humanity is too stupid or blind to
acknowledge the higher authority and wisdom of the guru, the
apocalyptic phase enters and the party is over. Then
one of two things generally happens: The first is that
the guru's message turns pessimistic or doomsday, voicing
something like this: 'Soon civilization is going to break
down and face amazing disasters -- except for us, who are
wisely withdrawing to protect ourselves and retain our
purity. This group will survive as a pocket of light
amidst the darkness; then afterwards we will lead forth
a new age.'

The other possibility is that in order to attract more people,
the guru makes increasingly extreme promises and bizarre
claims that offer occult powers, quick enlightenment,
or even wish-fulfillment in the mundane sphere around wealth,
love, and power. One guru went so far as to promise
levitation and invisibility; another group claims that
through proper daily chanting, people can achieve their
every desire, getting anything they want -- anything.
They justify such pandering to greed by saying that realizing
desire is the fastest path to detachment from desire.
Either of these tacks -- predicting disaster or making
grand promises -- is counter-productive in the long run,
since most people would prefer to align with an optimistic
viewpoint and are taken aback by outrageous claims."

Most people. ;->

jfehlinger said...

> I was **very** attracted by this mythos. It had a noble
> impartiality and a kind of Stapledonian grandeur.

And indeed, it's essentially what happens in the Kubrick/Spielberg
movie _AI_ from 2001, by which time the on-line AI advocates
had declined from their earlier compelling (to me) vision.

jfehlinger said...

Anne Corwin wrote:

> I get the impression that the unapologetically superlative
> are interested in something more like -- well, like actual
> omnipotence, or like invulnerability. Someone can, after all,
> end up being very long-lived, but still quite vulnerable
> to things like being crushed by a truck.

Indeed, the **unapologetically** superlative expect to
outlast the heat-death of the universe. They'll make a new
one, or figure out a way to reverse the entropy of this
one. They think. Frank Tipler will show them the way.

Utilitarian said...

James,

I do know about this transformation in Yudkowsky's views. I have always opposed the idea that superintelligence would eliminate the importance of values (and initially thought Yudkowsky's writings appalling for that reason). Yudkowsky's apparent past over-emphasis on near-term fatalities relative to total well-being over time also drew my criticism at the time.

On the other hand, it would be unfair not to note that Yudkowsky has tended to improve over time (on this, on libertopianism, and on admitting that others have superior relevant ability). At least, this is an improvement from my perspective, although I'm fascinated that you take the misanthropic position.

"Indeed, the **unapologetically** superlative expect to
outlast the heat-death of the universe."
EXPECT rather than hope? Wow, that's a lot of conjuncts to all go right, many of them in contravention of our current understanding of physics.

Utilitarian said...

"I don't agree that one has to be or aspire to be or pretend to be a member of a technocratic elite to combat such elites."
I meant that to understand and predict an opponent one has to occasionally empathize and imagine oneself in their position.

Dale Carrico said...

I meant that to understand and predict an opponent one has to occasionally empathize and imagine oneself in their position.

I definitely agree with you about that -- teaching "inhabited critique" to undergraduates in critical thinking courses is often a revelation for them. It is a matter of grasping just how much more one understands the real force even of a view with which one disagrees only when one realizes what it is that makes other people agree with it, rather than merely halting the inquiry into a view when one understands enough to know **that** one disagrees with it.

Dale Carrico said...

I'd not necessarily parallel "superlongevity" with "omnipotence" -- longevity is certainly a kind of power, but I get the impression that the unapologetically superlative are interested in something more like -- well, like actual omnipotence, or like invulnerability. Someone can, after all, end up being very long-lived, but still quite vulnerable to things like being crushed by a truck.

These are correlations, definitely not perfect parallelisms, so tracing all the entailments and complexities is pretty dense actually. But I do think superlongevity (which differs from longevity, which, as your blog has been elaborating nicely lately, ends up looking quite a bit like healthcare from a rhetorical standpoint) is a placeholder for what feels on the ground like the substance of an aspiration to omnipotence -- what is derided as the infantile denial of death by people who are not thereby declaring themselves in love with death as some techno-immortalist charges of "deathism" mistakenly assume. It's not a parallel, as you say, but one can trace the displacements that give these correlations their intuitive force and emotional satisfactions. I mean, consider the example I gave of the child who tries to pretend to imagine the unimaginable God by treating the old guy in the stone chair as a placeholder for the Divinity he presumably has faith in -- it isn't right to say that this correlation of "Infinite Creator" with "Old Bearded Guy" is a parallelism, but neither would it be right to deny the correlation has material traces one can follow: the monotheistic judeochrislamic religions of the Book are all decisively Patriarchal, for example; one might sometimes find in the docility and resentment of the faithful to their God a trace of the originary familial trauma of the father figure, and so on. Again, no inescapable parallel, no strict analogy, no one to one mapping providing a secret key, but definitely material fits one can trace to get a sense of how these correlations might be operating discursively. The healthcare --> superlongevity --> omnipotence correlations are usefully susceptible to such analyses, I would say. 
But I'm still tinkering with the details -- I do like the look of the overall schema of correlations in general so far, transcendental/superlative/collective, and the tendencies of superlative forms to depend and disavow in one direction, and to mobilize subcultural identification and depoliticizing idealization in the other direction. There's clearly much more to say, but I'm liking the look of the analysis as a first approximation weaving together lots of the observations that I've been exploring in the other pieces. It feels like a more general case is emerging, one that helps me make better sense of the positive programmatic alternative I find appealing, even while it helps me hold together the strands of negative critique I keep returning to as well. (Sorry if this was a rambling response, by the way -- I'm really thinking out loud here.)

Marc_Geddes said...

>"Now the imperative becomes how to **mathematically
prove** the "Friendliness" of AI, all the while discouraging
"irresponsible" AI researchers (like Michael Wilson, before
he saw the light) from unleashing the apocalypse. By this
point, my disappointment had turned to outright scorn."

Poor old Wilson. He joined SIAI expecting to soon be running the world, then conquering the universe, surfing ten thousand galaxies, transhuman glory and so on and so forth.

Poor man ended up writing optimization routines for "Dunkin doughnut" businesses.

Ah well. Let that be a lesson to us all.

jfehlinger said...

Dale wrote:

> ["Utilitarian" wrote:]
>
> > I meant that to understand and predict an opponent one has to
> > occasionally empathize and imagine oneself in their position.
>
> I definitely agree with you about that -- teaching "inhabited critique"
> to undergraduates in critical thinking courses is often a revelation
> for them. It is a matter of grasping just how much more one understands
> the real force even of a view with which one disagrees only when one
> realizes what it is that makes other people agree with it, rather than
> merely halting the inquiry into a view when one understands enough
> to know **that** one disagrees with it.

http://michaelprescott.typepad.com/michael_prescotts_blog/2005/07/index.html
----------------
The importance of being earnest

One of the most useful intellectual skills to
cultivate is the ability to enter into sympathetic
engagement with any idea or argument you are considering.
The only way to really understand what another person
is saying is to listen closely, and the only way to
listen closely is first to find, or at least pretend
to find, some common ground between the other person
and yourself. You need not maintain this sympathetic
engagement, this provisional or illusionary agreement,
for very long -- just long enough to absorb and
grasp the points at issue.

On the other hand, an inability or an unwillingness
to drop your guard and make room, even temporarily,
for an idea that you may find distasteful is the main
impediment to really understanding what other people
are saying and, therefore, to being able to effectively
refute what they say.

I thought of this today when flipping through a book
that I admit to having bought in the expectation of
a cheap laugh, and not for any intellectual merit
that it may possess: Ayn Rand's Marginalia. That's
right, her marginalia. In their continuing effort
to publish every word that Ayn Rand ever committed
to paper during the course of her 77 years, those
in charge of her estate have published her private
letters, her private journals, and yes, even the
scribbled notes in the margins of books she was
reading.

Supposedly, these notes give us an insight into Rand's
brilliant mind at work. No doubt this was editor
Robert Mayhew's intention, and no doubt this is how
the collection of jottings will be received by her
more uncritical admirers. Not being an admirer of
Ayn Rand myself, I had a rather different reaction.
I was simply amazed -- and amused -- at how consistently
she failed to understand the most basic points
of the books in question.

In his introduction, Mayhew says he did not include many
of Rand's positive comments because they were generally
insubstantial. This collection, then, is not a representative
sample of her reactions to her reading material. Even
bearing this in mind, I found the fury and frustrated
rage exhibited by Rand in these remarks to be extraordinary.
Hardly a page goes by without encountering angry
exclamation points, and even double and triple exclamation
points, sometimes augmented by question marks in comic-book
fashion. ("!!?!") The terms "God-damn" and "bastard" are
unimaginatively and gratingly repeated. Repeatedly I came
across another burst of venom to the effect that whatever
sentence or paragraph Rand had just read is the worst,
most horrible, most abysmal, most corrupt, most despicable
thing she has ever, ever, ever encountered!!! The woman
lived in a simmering stew of her own bile.

She came at the books she read, it would seem, not from the
perspective of honestly and conscientiously trying to
understand the author's position, but instead by assuming
an adversarial and combative stance from the very start
and then finding the most negative and malicious spin to
put on the author's formulations. This approach enabled
her to vent a considerable amount of rage. It does not
seem to have aided her comprehension of the material in
front of her.

To me this is most obvious in her treatment of [C. S. Lewis's]
The Abolition of Man, which, other than John Herman Randall's
Aristotle and Ludwig von Mises's Bureaucracy, is the only book
in this collection that I've read. (I suppose someday I should
get around to reading Friedrich Hayek's The Road to Serfdom,
which is considered a classic of free-market polemic -- though
Rand of course finds it poisonously wrongheaded. The rest
of the books, except for von Mises's Human Action and two books
by Henry Hazlitt and John Hospers, are largely forgotten today.)

Lewis's book is hardly a difficult read. It was aimed at an
educated but not highbrow segment of the public, and his
cautions on the potential misuse of science seem chillingly
prescient in these days of genetic engineering, animal cloning,
and embryonic stem cell research. He develops his case
methodically, building on the premise that man's power over
nature translates into the power of some men over others.
Rand furiously contests this idea, though she makes precious
little argument against it, relying mainly on personal
invective against Lewis himself, who is variously characterized
as an "abysmal bastard ... monster ... mediocrity ... bastard ...
old fool ... incredible, medieval monstrosity ... lousy bastard ...
drivelling non-entity ... God-damn, beaten mystic ...
abysmal caricature ... bastard ... abysmal scum." (These
quotes give you the tenor of the master philosopher's coolly
analytical mind.)

In one marginal note Rand scrawls, "This monster literally
thinks that to give men new knowledge is to gain power (!)
over them." Of course what Lewis says is that it is the holders
and utilizers of new knowledge, who do not "give" it to
others but use it for themselves, who gain de facto power
over their fellow human beings. He is fearful of the emerging
possibilities of "eugenics ... prenatal conditioning [and]
education and propaganda based on a perfect applied psychology,"
which may someday be wielded by an elite he calls the
Conditioners. "Man's conquest of Nature, if the dreams of
some scientific planners are realized, means the rule of a
few hundreds of men over billions upon billions of men."
And "the power of Man to make himself what he pleases
means ... the power of some men to make other men what
they please." Should this come to pass, "the man-moulders
of the new age will be armed with the power of an omnicompetent
state and an irresistible scientific technique ...
They [will] know how to produce conscience and [will] decide
what kind of conscience they will produce."

Lewis was clearly arguing against one possible vision
of the future, the dystopia best fictionalized in Aldous Huxley's
Brave New World. I find his points compelling, but of course
they are debatable. In order to be properly debated, however,
they must first be understood. Rand shows no interest in
even trying to understand what Lewis is saying -- which is
unfortunate, since recent headlines have made his concerns
more relevant than ever.

Earlier, Lewis develops the argument that basic moral values
cannot be rationally defended but must be accepted as given,
as part of the fabric of human nature, common to all
communities and societies, though not always equally
well-developed or implemented. This view, known as
moral intuitionism, is a serious ethical position and
one that has been defended by many prominent philosophers,
especially in the late 19th and early 20th centuries.
(It is enjoying something of a resurgence today.)
Rand was vehemently opposed to this view, believing that
it smacked of faith, which was, as she understood it,
the archenemy of reason.

Lewis argues that in the realm of values, as in other
realms of thought, you must begin with certain fundamental
assumptions; "you cannot go on 'explaining away' forever:
you will find that you have explained explanation itself.
You cannot go on 'seeing through' things forever."
Rand furiously rejects this idea, and you can practically
hear her pen stabbing at the page as she writes,
"By 'seeing through,' he means: 'rational understanding!'
Oh, BS! -- and total BS!" But Lewis's entire point is that
"rational understanding" must start somewhere, just as
geometry or set theory must begin with certain axioms
that cannot themselves be proven by the system in question.
It takes more than declarations of "BS!" to vanquish
this argument -- or, for that matter, any argument.

Rand is always telling the authors she reads what they
"actually" are saying. Most of the time what she thinks
they are "actually" saying bears no relationship whatsoever
to anything they have written or even implied. With
regard to Lewis, she says that his view boils down to the
claim that the more we know, the more we are bound by
reality: "Science shrinks the realm of his whim. (!!)"
This is a thorough misunderstanding of Lewis's essay --
an essay, let me repeat, aimed at the intelligent
general reader and not requiring any special expertise
to decipher.

Thus, although Ayn Rand's Marginalia hardly demonstrates
the genius that Rand's admirers believe she possessed,
it does unintentionally serve an instructional purpose.
It shows how important it is to enter into a temporary
but sincere sympathy with an author whose view you are
trying to understand -- that is, if you are trying to
understand it at all. To put it another way, in reading,
it's important to be earnest -- to embrace a spirit of
respect, honest consideration, and goodwill. You'll find
those qualities in most serious thinkers. You will not
find them, I'm afraid, in Ayn Rand's marginal notes.
----------------

jfehlinger said...

Marc Geddes wrote:

> Poor old Wilson. . . Poor man ended up writing optimization routines
> for "Dunkin doughnut" businesses.

Well, if he's the one making it possible for my local Dunkin' Donuts
to stay open late at night, then my hat's off to him! ;->

(I have no contempt, BTW, for the mundane bit-fiddling of data
processing. It's how the world justifies **my** existence,
after all.)

gp said...

Anne: "part of the reason I'm so bothered by "optimality" language is because it reminds me so much of the arguments I've heard so many fundamentalists make for the existence of the "omni god""

Perhaps you guys are so scared of "superlative technology discourse" because you are afraid of falling back into the old religious patterns of thought, that perhaps you found difficult to shed.

Some of us, yours truly included, never gave much importance to religion. So we feel free to consider interesting ideas for their own sake, regardless of possible religious analogies.

G.

gp said...

Dale: "The criticisms I seem to be getting are largely from people who would either deny the relevance of my own political, social, and cultural emphasis altogether (a denial that likely marks them as unserious as far as I'm concerned) or who disapprove of my political commitment to democracy, my social commitment to commons, and my cultural commitment to planetary multiculture (a disapproval that likely marks them as reactionaries as far as I'm concerned)."

Not my case, as I do not deny the relevance of your own political, social, and cultural emphasis, and approve of your political commitment to democracy, your social commitment to commons, and your cultural commitment to planetary multiculture.

I criticize your intolerance for those who, while basically agreeing with you on the points above, have ideas different from yours on other, unrelated things, and affirm their right to think with their own head.

Because, my friend, you will never persuade me that one who finds intellectual or spiritual pleasure in contemplating nanosanta-robot god-superlative technology-etc. cannot be a worthy political, social and cultural activist.

I can believe in Santa Claus and Eastern Bunny if I like, and still agree with you on political issues. Unless, of course, you persuade me that the two things are really incompatible. I will gladly take the Robot God and Easter Bunny then.

G.

jfehlinger said...

> Perhaps you guys are so scared of "superlative technology discourse"
> because you are afraid of falling back into the old religious patterns
> of thought, that perhaps you found difficult to shed.
>
> Some of us, yours truly included, never gave much importance to religion.
> So we feel free to consider interesting ideas for their own sake,
> regardless of possible religious analogies.

Mr. Prisco, I'm afraid you just don't get it. You really, really
don't. "Analysis by eggbeater", indeed.

As Dale has endlessly reiterated, he's not "scared" of superlative
technology discourse. Concerned, certainly. Contemptuous,
very likely. Nor am I "scared", if that means being scared
by the portrayals of superlative technology themselves.
On the contrary -- as I've said more than once, if I could
step into the pages of a Greg Egan novel, and then step
through an Introdus portal with the swipe of a credit card,
I'd almost certainly do it.

What **does** scare me is how easy it
is for some people to get sucked into cults -- identity
movements that encourage the suspension of independent
thought and criticism, and how easy it is for people
with more "attitude" than sense to set themselves up as
oracles. This does not bode well for the human race, IMHO, but
after all, it's nothing new. And if the internet gives the
guru-wannabes a new megaphone through which to spread their
"Ya Gotta Believe Me!" memes, then it also provides a means
for the "little people", like me, to at least make visible their
tiny sparks of skepticism.

And as far as "religion" goes -- you've got it exactly
backward. I had an extremely tepid Protestant upbringing
(although my parents were conservative Republican types),
and the only reason I could sit through Sunday School with
a straight face (I **had** to go) by the time I was
in 9th grade was by amalgamating it with the
World Civilizations course I was taking in school at
the same time (fortunately, the teacher, a savvy lady,
was performing the same transformation -- what **would**
Reverend Ludlow have thought? ;-> ).

Since then, I've discovered that I like authors such as
J. R. R. Tolkien, C. S. Lewis, and G. K. Chesterton, and I
find the emotional tone of Tolkien's fictional world
**extremely** moving (C. S. Lewis's somewhat less so).
Nevertheless, I see gaping holes in Lewis's logic when
he's in apologist mode (though I simultaneously appreciate
some of his insights into human psychology).
Afraid of being tempted to slide into religious
belief? I sometimes wish!

**You**, on the other hand, and many of the >Hists --
well, as Madge the Manicurist used to say,
"You're soaking in it!"

Dale Carrico said...

I criticize your intolerance for those who, while basically agreeing with you on the points above, have ideas different from yours on other, unrelated things, and affirm their right to think with their own head.

I distinguish instrumental, moral, esthetic, ethical, and political modes of belief. Rationality, for me, consists not only in asserting beliefs that comport with the criteria of warrant appropriate to each mode, but also in applying to different ends the mode actually appropriate to it. I'm perfectly tolerant of estheticized or moralized expressions of religiosity, but I keep making the point that religiosity misapplied to domains, ends, situations for which it is categorically unsuited creates endless mischief. Superlativity is an essentially moral and esthetic discourse mistaking itself for or ambitious to encompass other modes of belief. This sort of thing is quite commonplace in fundamentalist formations.

Because, my friend, you will never persuade me that one who finds intellectual or spiritual pleasure in contemplating nanosanta-robot god-superlative technology-etc. cannot be a worthy political, social and cultural activist.

This line is total bullshit, and I'm growing quite impatient with it. Look, I'm a big promiscuous fag, a theoryhead esthete, and an experimentalist in matters of, well, experiences available at the extremes as these things are timidly imagined among the bourgeoisie. Take your pleasures where you will. Laissez les bons temps rouler. I'm a champion of multiculture, experimentalism, and visionary imagination, and that isn't exactly a secret given what I write about endlessly here and elsewhere. But -- now read this carefully, think about what I am saying before you reply -- if you pretend your religious ritual makes you a policy wonk expect me to call bullshit; if you demand that people mistake your aesthetic preferences and preoccupations for scientific truths expect me to call bullshit; if you go from pleasure in to proselytizing for your cultural and subcultural enthusiasms expect me to call bullshit; if you seek legitimacy for authoritarian circumventions of democracy in a marginal defensive hierarchical sub(cult)ural organization or as a way to address risks you think your cronies see more clearly than the other people in the world who share those risks and would be impacted by your decisions, all in the name of "tolerance," expect me to call bullshit.

"I can believe in Santa Claus and Eastern Bunny if I like, and still agree with you on political issues."

No shit Sherlock. I've never said otherwise. If you form a Santa cult and claim Santa Science needs to be taught in schools instead of Darwin, or if you become a Santa True Believer who wants to impose his Santa worldview across the globe as the solution to all the world's problems, or you try to legitimize the Santalogy Cult by offering up "serious" policy papers on elf toymaking as the real solution to global poverty and then complain that those who expose this as made up bullshit are denying the vital role of visionaries and imagination and so on, well, then that's a problem. Please don't lose yourself in the details of this off-the-cuff analogy drawn from your own comment, by the way, I'm sure there are plenty of disanalogies here, I'm just making a broad point here that anybody with a brain can understand.

Unless, of course, you persuade me that the two things are really incompatible.

I despair of the possibility of ever managing such a feat with you.

I will gladly take the Robot God and Easter Bunny then.

Take Thor for all I care. None of them exist, and any priesthood that tries to shore up political authority by claiming to represent them in the world I will fight as a democrat opposed to elites -- whether aristocratic, priestly, technocratic, oligarchic, military, "meritocratic" or what have you. I can appreciate the pleasures and provocations of a path of private perfection organized through the gesture of affirming faith in a Robot God, Thor, or the Easter Bunny. I guess. I have no trouble with spirituality, faith, estheticism, moralism in their proper place. I've said that so many times that your obliviousness to the point is starting to look like the kind of conceptual impasse no amount of argument can circumvent between us.

Perhaps you guys are so scared of "superlative technology discourse" because you are afraid of falling back into the old religious patterns of thought, that perhaps you found difficult to shed.

I've been a cheerful nonjudgmental atheist for twenty-four years. It wasn't a difficult transition for me, as it happens. And I'm not exactly sure what frame of mind you imagine I'm in when I delineate my Superlative Discourse Critiques when you say I'm "so scared." I think Superlativity is wrong, I think it is reckless, I think it comports well with a politics of incumbency I abhor, I think it produces frames and formulations that derange technodevelopmental discourse at an historical moment when public deliberation on technoscientific questions urgently needs to be clear. But "so scared"? Don't flatter yourself.

Some of us, yours truly included, never gave much importance to religion. So we feel free to consider interesting ideas for their own sake, regardless of possible religious analogies.

You are constantly claiming to have a level of mastery over your conscious intentions that seems to me almost flabbergastingly naïve or even deluded. It's very nice that you feel you have attained a level of enlightenment that places you in a position to consider ideas "for their own sake," unencumbered by the context of unconscious motives, unintended consequences, historical complexities, etymological sedimentations, figural entailments, and so on. I would propose, oh so modestly, that no one deserves to imagine themselves enlightened in any useful construal of the term who can't see the implausibility of the very idea of the state you seem so sure you have attained.

jfehlinger said...

Giulio Prisco wrote:

> I can believe in. . . [the] Eastern Bunny if I like. . .

Is that something like the Dalai L(l)ama? ;->

jfehlinger said...

> Laissez les bons temps rouler.

Clams on the half-shell, and roller skates! roller skates!

jfehlinger said...

"Utilitarian" wrote:

> [I]t would be unfair not to note that Yudkowsky has tended
> to improve over time (on this, on libertopianism, and on
> admitting that others have superior relevant ability [Oh??]).
> At least, this is an improvement from my perspective, although
> I'm fascinated that you take the misanthropic position.

From my e-mail archive:

03/11/2005 02:27 PM
Subject: How did it happen?

I was just reading Hugo de Garis' latest blurb on his
Utah State Web site:
http://www.cs.usu.edu/~degaris/artilectwar2.html .

[I hope, BTW, that his editors catch stuff like the following before
the book comes out:

"Thus to the Terrans, the Cosmists are monsters incarnate,
far worse than the regimes of Hitler, Stalin, Mao, the Japs,
^^^^^^^^
or any other regime that murdered tens of millions of
people in the 20th century, because the scale of the
monstrosity would be far larger."

The **who**? Not only is it politically insensitive, it's
a non-parallel series. But that's not what I'm here to complain
about.]

You know, the two temperamental/philosophical/religious/political
positions that de Garis characterizes as "Terran" and "Cosmist"
seem quite realistic and compelling to me. de Garis comes clean
and admits that he is himself a bit "schizophrenic" about it --
able to partake of the night-terrors of the Terran position while
remaining a Cosmist at heart. But I appreciate de Garis' honesty
in admitting the existence of both points of view and putting
everything fully above board.

How come it's not that way with the Extropians and their
spin-off groups? One of the things that attracted me to
Eliezer's "Staring into the Singularity" in 1997 was its frank
Cosmist take on things. Then suddenly (or it seemed suddenly
to me, though I probably just wasn't watching very carefully)
in 2001, I discovered that the Cosmist line had been ruled
altogether out of court, not even a permissible topic of
discussion, not even permissible to acknowledge that it
ever **had** been a valid position (shades of _1984_),
and that everybody who was anybody was suddenly
a bleeding-heart "Terran". And that the **necessity** of being a
Terran had warped and distorted all the discourse surrounding AI.
Suddenly things **had** to be top-down, and morality **had**
to be derivable from first principles, and all that jazz, or else,
or else it was curtains for the human race (so what? a Cosmist
would say. But we're not allowed to say that anymore.)
And the reversal had been spearheaded by Eliezer himself (it
seemed to me).

So what's your take on all this?

Did it happen in a smoke-filled room? Did ___ and _______
take him aside and say "Look, son, you're gonna scare folks
with all this talk about the machines taking over. Here's the
line we want you to take. . .". Or did his rabbi sit down
with him and have a heart-to-heart? Or [was it simply]
impossible for him to accept that **he** might
be superseded? Or was everybody spooked by Bill Joy
being spooked by Ray Kurzweil?

I really, really wonder about this, you know. It's what,
more than anything else, caused me to lose respect for the
bulk of the on-line >H community. The shift went almost entirely
unremarked, as far as I can tell (unless the **real** discourse
isn't visible -- goes on in private e-mail, or at conferences,
or whatever). It's not **just** Eliezer, of course -- he's now
insulated and defended by a claque of groupies who **screech**
in outrage (like ________ ______) whenever the party
line is crossed.

Of course, not everybody buys it. Eugen Leitl doesn't buy it,
as far as I can tell. . .

Ah, well. I have my own theory about this, and it's (naturally)
a psychological one. I think it's nearly impossible for the
>Hists who are "in it" for their own personal gain -- immortality,
IQ boosts, bionic bodies, and all that -- the N's, in other
words -- to be sufficiently dispassionate to be Cosmists.
What a gaping blind spot!

It seems utterly ironic and contemptible to me to see
the self-congratulatory crowing about "shock levels" when
the reality of the SL4 list is "don't scare the Terrans".
Meaning "let's don't scare **ourselves** because **we're**
the Terrans". :-/


--------------------------------------

05/10/2005 12:52 PM
Subject: Huey, Dewey, and LUI

Another thing I find refreshing about [John] Smart's
[Acceleration Watch] site is the absence of pandering
to the cryonicists, life-extensionists, and hankerers
after immortality. In fact, Google finds no
references to either "cryonics" or "immortality" on the
site. One of the **weirdest** things that happened to
Eliezer is that he got caught up in this business of "it's
a moral outrage if even one more person dies before
the Singularity!" shtick.

[And that happened **before** the unfortunate death of
his 19-year-old brother three years ago -- almost exactly
three years ago, in fact.
http://yudkowsky.net/yehuda.html .]

--------------------------------------

03/30/2007 08:45 PM
Subject: Re: I can't overemphasize the little inconsistencies

> They want to say they are concerned about Friendliness because it looks
> good politically, and they are afraid of drawing hostile fire.

Yes, the appearance of the "Friendliness" business in 2001, or
whenever it was, certainly took me by surprise.

In an early version of "Staring into the Singularity", that
I first read in July, 1997 (and which I almost regret
not having saved), Eliezer made it perfectly clear
that he believed the cosmic purpose of the human
race is simply to give birth to its superintelligent successor.
Whether or not humanity itself survived those birth pangs was
secondary -- again, sort of like in Arthur C. Clarke's
_Childhood's End_. He was willing to take the SI's word,
whatever it turned out to be, as the standard of morality
on the question of whether, or how many, human beings
should continue to exist.

He also emphasized that there would be a race against time
to create a superintelligence before humanity destroyed itself
by less noble means -- war, ecological collapse, climate change,
whatever.

I rather enjoyed that frankly ice-cold, splash-in-the-face
detachment at the prospect of the (possibly tragic) transformation
of intelligence -- it had a certain Stapledonian grandeur
(OK, so call **me** a sociopath, if you like! ;-> ).

So when he started talking about FAI after the turn of the
century, I too at first assumed he'd simply caved in to political
pressure. I imagined that perhaps ___ and _______ had
taken him aside and said -- look, kid, you can't go around
scaring folks like that, not if you really want to get
the job done. Or maybe it was _____ and ______ ______,
when SIAI was first cooked up.

And that may still be part of what's going on -- it may account
for the "tension" that Kip Werking perceives.

Of course, it also lets Eliezer pose not just as the
"father" of superintelligence (he once actually named his
"seed AI" -- wait for it -- Elisson. With a straight face!),
but as the literal savior of the whole human race.

It also gives him a dodge for not getting much done --
he can simply claim, as he often has, that he isn't going to
unleash AI on the world until he's **sure** it can
be done safely, which is something only **he** can
judge adequately. And of course the latter claim gives
him a platform from which, like an Old Testament prophet,
to rail against competing views of AI -- . . . Goertzel's,
whoever's -- as criminally irresponsible.

But Eliezer **loves** to play the game of "you don't know
what my latest thinking is because I haven't **told**
anybody what my latest thinking is". And he never willingly
talks about changes in his views. For that matter, there sometimes
seems to be damned little **continuity** in his views.
Of course, water isn't wet until **he's** realized it is,
and then he gets to "instruct" lesser minds on the
wetness of water while implying that they never would
have figured it out on their own. It gets downright
Orwellian at times -- "he who controls the present
controls the past".

> They are also power crazed. . .

[P]ower-crazed to the extent that he **must**
maintain the window dressing and enough of a star-struck
audience to lend sufficient plausibility to the image
he's taken up -- as Messiah, no less, it would seem.

> and sycophantic to a fault (The Rest).

Yes, that frightens me. What's the purpose of education
(or the World Wide Web, or Wikipedia, for that matter) if
not to give folks the backbone to challenge wannabe
gurus? Doesn't anybody read Bertrand Russell anymore?

Michael Anissimov actually wrote to me (in reply to a
note I sent him about his appearance in this month's
_Psychology Today_) "The reason why Eliezer was whining
[about his less-than-hagiographic treatment by Declan McCullagh
in the _Wired_ on-line article "Making HAL Your Pal" six
years ago] is because he was young and not yet familiar with
the way the media worked... also he was used to being
worshipped [sic!] so it was a big surprise to have someone
talk about him in that negative way." It crossed my mind
to reply to him "being worshipped -- do you think that's
**healthy**?" but there wouldn't be much point. He's one
of the 12 to Eliezer's Jesus.

> Thus, this tasty nugget from. . . ______ ______. . .
> Translation: "I, ______, am a Bear of Little Brain, so
> I let my pal Eliezer think for me instead."

Yep, he's another True Believer. As is, e.g., _______ ______.
Anissimov I really feel bad about, though. I wish he
could be -- deprogrammed, or something.

Nick Tarleton said...

In addition, the trajectory may share something in common with
the following description in Kramer & Alstad's _The Guru
Papers_:


This is a pretty weak analogy. Nowhere have I seen any Singularitarians saying that the Robot God will purify the world for them alone - rather, they expect unFriendly AI to kill everyone, or FAI to help everyone. Nor has anyone promised near-term enlightenment, unless writing about Bayesian rationality counts.

Did it happen in a smoke-filled room? Did ___ and _______
take him aside and say "Look, son, you're gonna scare folks
with all this talk about the machines taking over. Here's the
line we want you to take. . .". Or did his rabbi sit down
with him and have a heart-to-heart? Or [was it simply]
impossible for him to accept that **he** might
be superseded? Or was everybody spooked by Bill Joy
being spooked by Ray Kurzweil?


It's more prosaic: Eliezer realized that a universal, objective morality independent of moral agents is incoherent. See e.g. http://www.sl4.org/archive/0409/9856.html or http://www.spaceandgames.com/?p=5 .

I'm curious as to what you find so attractive about the "Cosmist" position over the "Terran" one, beyond "Stapledonian grandeur". (I have a really hard time understanding how "second/third-stage" Friendliness-concerned Singularitarianism could possibly be more Twilight Zone than "first-stage" ahuman Singularitarianism.) My wild stab at a psychological theory is that you regard any attachment to humanity as hopelessly partial and provincial.

jfehlinger said...

Nick Tarleton wrote:

> I'm curious as to what you find so attractive about the
> "Cosmist" position over the "Terran" one, beyond "Stapledonian grandeur".
> (I have a really hard time understanding how "second/third-stage"
> Friendliness-concerned Singularitarianism could possibly be more
> Twilight Zone than "first-stage" ahuman Singularitarianism.)

Is it so surprising? It has a respected tradition in SF.
Stapledon's _Odd John_, _Last & First Men_, _Star Maker_.
Clarke's _Childhood's End_. Kubrick's _2001: A Space Odyssey_.

Whereas "second-stage" and later S-ism smacks so much of this:

----------------------------------------
Wednesday, May 10, 2006

. . .


Transhumanism seems to have a particular appeal to the wealthy –
look at the Silicon Valley millionaires on the board of the
Foresight Nanotech Institute, for instance – and I think this
follows. A narcissistic rich person can control a great many things,
but there’s one threat that won’t go away: you’re going to die,
no matter how rich you are. Get rid of that one fly in the ointment,
and you’ve got it made: a static, timeless self-image of a rich guy.
(Failing that, freezing your head comes in as a valid second choice.)

-- In the Shadow of Mt. Hollywood
John Bruce's Observations on Education, Epistemology,
Writing, Work, and Religion
http://mthollywood.blogspot.com/2006_05_01_mthollywood_archive.html

> My wild stab at a psychological theory is that you regard any
> attachment to humanity as hopelessly partial and provincial.

Could be. I'm certainly aware of a misanthropic streak
in myself.

In any case, "first-stage" S-ism was, for me at least,
what Dale would call an "aesthetic" identification.

I certainly, as he does, call "bullshit" at both the
ethical and the scientific claims to seriousness of the
second and third stages.

jfehlinger said...

Nick Tarleton wrote:

> Eliezer realized that a universal, objective morality
> independent of moral agents is incoherent.

The idea of a "universal, objective" morality is pretty incoherent
anyway, IMHO.

And as far as my (and all >Hists') **aesthetic** prejudices
in favor of "intelligence" (loosely conceptualized) go -- I'm
perfectly aware of their arbitrariness.

I wrote on the Extropians' list once:

> Another thing that crosses my mind from time to time...
> is... the **peculiarity** of the human prejudice that
> intelligence is desirable and important [*]. Sure, I can
> appreciate Kurzweil's curves of exponentially-increasing
> complexity, and all that -- life on the edge of the
> envelope -- but look at all the greenery still lying about,
> grooving on Our Mr. Sun, without benefit of nervous systems
> at all, thank you very much! Only heterotrophs -- the
> organisms that **cheated** by eating other organisms instead
> of soaking up the sun like good citizens (how's **that** for
> a definition of original sin!) needed nervous systems, to get
> away from other heterotrophs.
>
> And **carnivores** -- the heterotrophs that prey on other
> heterotrophs, instead of just gobbling up the autotrophs, are
> the smartest (and most admired) of all [**]. I had a chance
> to herd cows back in the '70's, and they're as dumb as posts!
> Get them all lined up on the beaten path, like boxcars on a
> rail siding, and they're controllable, but if they stray off
> this one-dimensional path, you'll spend an hour chasing them.
> They **deserve** to be eaten, damn it! ;-> No, it's the
> fang-bearers, like Rin Tin Tin, Lassie, Shere Khan, Bagheera,
> and Tony the Tiger that humans are crazy about. Elephants are
> herbivores, and are still supposed to be smart, but I've never
> met an elephant in person, and I'm very skeptical!
>
> This is the compost that gave rise to the flower of
> intelligence!

I'm certainly not the only one who's noticed the oddness
of the human mania for intelligence. There was, for example,
that wicked, wicked 1982 story by Bruce Sterling called
"Swarm" (it's one of the Shaper/Mechanist cycle,
included in the 1990 collection _Crystal Express_).

TERRIBLE SPOILERS FOR "SWARM"!!!!

The story concerns human contact with a mysterious,
seemingly non-sentient galactic hive organism called
the Swarm, comprised of castes of individual organisms
who are devolved descendants of ancient intelligent
races throughout the galaxy. The surprise ending of the
story reveals that the swarm only wakens into intelligence
when it senses threat from outside interference (which
is not necessarily contact itself, but rather any sign
that the Swarm's placid existence is about to be co-opted
to serve the contacting race's own purposes).

"'What are you?'

'I am the Swarm. That is, I am one of its castes. I am
a tool, an adaptation; my specialty is intelligence.
I am not often needed...

Your companion's memories tell me that this is one of
those uncomfortable periods when galactic intelligence
is rife. Intelligence is a great bother. It makes
all kinds of trouble for us...

You are a young race and lay great stock by your own
cleverness,' Swarm said, 'As usual, you fail to see
that intelligence is not a survival trait...

This urge to expand, to explore, to develop, is just
what will make you extinct. You naively suppose that you
can continue to feed your curiosity indefinitely. It
is an old story, pursued by countless races before you...

In a thousand years you will not even be a memory.
Your race will go the same way as a thousand others.'

'And what way is that?'

'I do not know... They have passed beyond my ken.
They have all discovered something, learned something,
that has caused them to transcend my understanding. It
may be that they even transcend **being**. At any rate,
I cannot sense their presence anywhere. They seem to
do nothing, they seem to interfere in nothing; for all
intents and purposes, they seem to be dead. Vanished.
They may have become gods, or ghosts. In either case,
I have no wish to join them...

Intelligence is very much a two-edged sword... It is
useful only up to a point. It interferes with the business
of living. Life, and intelligence, do not mix very well.
They are not at all closely related, as you seem to assume.

'But you... are a rational being--'

'I am a tool, as I said... When you began your pheromonal
experiments, the chemical imbalance became apparent to the
Queen. It triggered certain genetic patterns within her
body, and I was reborn. Chemical sabotage is a problem
that can best be dealt with by intelligence... Within
three days I was fully conscious. Within five days I had
deciphered these markings on my body. They are the
genetically encoded history of my race ... within five days
and two hours I recognized the problem at hand and knew what
to do. I am now doing it. I am six days old...

We have not killed any of the fifteen other races we have
taken for defensive study. It has not been necessary.
Consider that small scavenger floating by your head...
Five hundred million years ago its ancestors made the galaxy
tremble...

We are doing you a favor, in all truth. In a thousand
years your descendants here will be the only remnants of
the human race. We are generous with our immortality;
we will take it upon ourselves to preserve you...'"

Marc_Geddes said...

>It's more prosaic: Eliezer realized that a universal, objective morality independent of moral agents is incoherent. See e.g. http://www.sl4.org/archive/0409/9856.html or http://www.spaceandgames.com/?p=5 .


Golly gosh Nick, how amazingly grand for Eliezer to personally resolve the mysteries of ethical philosophy that have eluded the best philosophers for millennia!

The reality is, this is yet another issue dear old father Eli is seriously mistaken on.

Plato offered strong arguments for timeless 'Platonic' forms that are beyond space and time, and these forms can include teleological (aesthetic and ethical) archetypes. The Stanford Encyclopedia of Philosophy entry on 'Platonism' makes it pretty clear the issue is far from resolved:

http://plato.stanford.edu/entries/platonism/

The issue is so confusing because there are actually three quite different senses of the word 'morality':

(1) There's the ethical rules themselves (which every philosopher worth his salt can agree are not objective, but are made by humans)

(2) There's the cognitive process (the optimization target) that generates the ethical rules (i.e., the human mind). Again, not objective

(3) There's the inert platonic archetypes that *are* objective and constitute the explanatory basis for (1) and (2).

The platonic archetypes (in increasing order of abstraction) are:

(a) Virtue
(b) Liberty
(c) Beauty

With beauty being the final explanatory principle.

Eli has only penetrated to (1) and (2). Your dear leader has not yet understood (3). But fear not. No doubt in a few more years' time he'll have a sudden 'marvellous insight' and hey, after sharing his revelation with you guys, you'll all 'see the light' too and be saying the same thing ;)

Cheers

Marc_Geddes said...

Marc Geddes wrote:

> Poor old Wilson. . . Poor man ended up writing optimization routines
> for "Dunkin doughnut" businesses.

Well, if he's the one making it possible for my local Dunkin' Donuts
to stay open late at night, then my hat's off to him! ;->

(I have no contempt, BTW, for the mundane bit-fiddling of data
processing. It's how the world justifies **my** existence,
after all.)

# posted by jfehlinger : 9:38 AM

Yeah, hey, hats off to old Wilson, out working hard for the ole corporations. At least in getting his programs to do wondrous things like running mazes and optimizing doughnut manufacture, he's finally doing something useful.

Still, a bit of a come-down from the hoped-for glories of robot god-hood, wouldn't you say?