Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, August 18, 2009

The Achievement of Superlative Futurology

Singularitarian Robot Cultist Michael Anissimov responds yet again to my critique of superlative futurology:
Dale Carrico, one of the more prominent critics of transhumanism, frequently refers to “superlongevity, superintelligence, and superabundance” as transhumanist goals, of course in a disparaging way. Yet, I openly embrace these goals. Superlongevity, superintelligence, and superabundance are a perfect summary of what we want and need. How can we achieve them?

Strictly speaking, I don't think superlative aspirations are "achievable" at all. I am not saying that because, as the Robot Cultists would have it, I lack their own "can do" attitude, or their boundless imaginations, or their sooper-science skill-sets, but because I do not think superlative aspirations are really the sorts of things that are meant to be "achieved" in the first place. I don't think Anissimov is right, really, even to call them "goals."

"Let's live for thousands of years through 'medical advancement' or through 'transferring our selves' into invulnerable Robot Bodies" isn't exactly the sort of "goal" that has any specifiable impact on conduct in the real world, beyond signaling membership in certain sub(cult)ures of futurological faith. I would maintain in fact that such sub(cult)ural signaling is indeed the actual substance of these assertions of superlative belief, such as it is, and that the work of these assertions is not to mobilize instrumental rationality at all but to mobilize moral rationality. That is to say, I think these faithful utterances aren't really about achieving goals so much as enabling the pleasures of subcultural identification, belonging, and support for folks who happen to have found their way to a curious marginal futurological sub(cult)ure.

"Let's have everything we want at no cost," "let's arrive at always being right about everything," "let's create something that solves all our problems for us"… These utterances may have the superficial form of goals, of projects, of efforts, but they don't so much orient pragmatic conduct in the world as protest against the pragmatic conditions under which we orient and conduct ourselves in the world in fact. As such, they are far more like the utterances of the more conventionally faithful: "let's redeem our sinful natures or pasts," "let's pray for guidance," "let's be worthy of Paradise."

Anissimov writes sometimes as though the super-predicated aspirations of superlative futurology (Robot Cultism) are just slightly more "ambitious" or "optimistic" versions of already ongoing technoscientific practice. The "goal" of superlongevity is just kinda sorta a more ambitious optimistic kind of everyday healthcare practice, the "goal" of superintelligence is just kinda sorta a more ambitious optimistic kind of everyday software coding practice, the "goal" of superabundance is just kinda sorta a more ambitious optimistic kind of everyday manufacturing practice.

This rhetoric might seem initially to lend a cozy coloration of plausibility to what upon closer scrutiny reveals itself to be batshit crazy articles of faith in a version of "The Future" in which immortal post-humans have somehow "uploaded" their "minds" into cyberspace or robot bodies to "live" in virtual or nano-slavebotic Treasure Caves under the watchful gaze of a history-shattering Robot God. But quite apart from the odd articles of faith it would countenance (which aren't after all really any odder than the articles of faith that fuel most essentially religious outlooks), Anissimov's claims are bedeviled by profound conceptual double-binds.

Either his viewpoint amounts to an affirmation of the idea of healthcare provision, software improvement, and advances in production at such a level of generality that one would be hard pressed to find anybody anywhere who disapproves in the first place (thus eliminating the need for affirming them at all, let alone affirming them in the form of joining a conspicuously self-marginalizing Robot Cult) or his viewpoint amounts to a commandeering of the idea of healthcare provision, software improvement, and advances in production in the service of some project at odds with these already-affirmed practices as they are already playing out in the world (thus eliminating the pretense that these assertions have anything to do with actual science at all, but nicely explaining why they would be affirmed especially by folks who have joined a conspicuously self-marginalizing Robot Cult).

No technoscientifically-literate person has any doubt that properly funded, regulated, accountable technoscience research and development directed to the solution of shared human problems can be enormously useful, nor that ongoing and proximately upcoming genetic, cognitive, and prosthetic medical research is yielding unprecedented impacts and enormously interesting results, nor that problems of software usability and network security are enormously thorny and increasingly important in globally mediated and surveilled societies, nor that advances in automation, distributed production, and materials science enabled by discoveries at the nanoscale and otherwise are enormously exciting and provocative. There are millions of people around the world who are involved in the ramifying inter-implications of these truisms, identifying problems and forming actual goals in respect to these problems everywhere all the time. But not one of these problems, not one of these goals is the least bit clarified by reading it through the lens of superlative aspiration.

No one working to solve a particular healthcare problem is helped in their valiant efforts by the insistence of some Robot Cultist that one day medicine will deliver "superlongevity" (although you can be sure that loose talk about "playing god" has done more than its fair share to ensure that medical research that might solve actual problems and save actual lives didn't get proper funding). No one working to make software more user friendly or address a particular network security problem is helped in their painstaking efforts by the insistence of some Robot Cultist that one day we will code a superintelligent Robot God who will solve all our problems for us (although you can be sure that loose talk about "artificial intelligence" has, as Jaron Lanier has endlessly documented, inspired no end of bad software that frustrates its users by simulating "thinking" for them and "making decisions" for them in ways they strongly disapprove of). No one working to make particular materials or products safer, cheaper, less toxic, more useful, or more sustainable is helped in their diligent efforts by the insistence of some Robot Cultist that immersive virtualities, or ubiquitous robots, or cheap-as-chips programmable multi-purpose room-temperature desktop nanofactories will one day deliver a superabundance that will circumvent the impasse of stakeholder politics in a finite world that is home to infinite and incompatible aspirations (although you can be sure that loose talk about Drexlerian "nanotechnology" has made it next to impossible to talk sense about regulating or funding or forming reasonable expectations about the problems and possibilities of nanoscale technoscience in actual reality).

Anissimov writes:
Achieving superlongevity, superintelligence, and superabundance will be incredibly challenging, but seemingly inevitable as long as civilization continues to progress and we don’t blow ourselves up or have a global fundamentalist dictatorship on our hands. There is no guarantee that we will achieve these goals in our lifetime — but why not try? Achieving any of these milestones would radically improve quality of life for everyone on Earth. The first step to making technological advancements available to everyone is to make them available for someone.

As I said, Anissimov sometimes talks as though superlative futurological aspirations are "challenges" and "goals" that can be "achieved" if we simply "try" hard enough. Of course, the first step to making technological advancements available is actually to engage in actual technoscientific practices of research, funding, regulation, publication, education, and application in the real world. While Anissimov rallies the faithful with a Mouseketeer Cheer of "Let's Try!", it is notable that the immediate consequence of taking up superlative discourse is to disengage from the actual technoscientific practices in which one actually tries, works, participates in the efforts through which actual technoscience connects to the actual world, achieves actual results, solves actual problems.

It is especially intriguing that Anissimov raises the specter of "fundamentalism" as one that would be warded off by the can-do declarations of the futurologically faithful, because it is of course fundamentalist formations, with their authoritarian circuits of True Believers and would-be guru-priests, that the Robot Cults themselves most conspicuously resemble, and never so much as when they declare their most marginal beliefs to be the ones most freighted with "certainty" and "inevitability" -- as Anissimov has freighted his futurological faith with "inevitability" at the beginning of the very sentence that concludes by disavowing fundamentalism.

It is no surprise that Anissimov turns the spotlight onto the archipelago of marginal Robot Cult organizations like the Singularity Institute for Artificial Intelligence and the SENS Foundation when he wants to make plain who he considers to be the "leaders" in the "fields" devoted to the "work" of superlative futurology. To nobody who isn't already a Robot Cultist would it ever occur to describe Aubrey de Grey, or Eric Drexler, or Eliezer Yudkowsky as "leaders" in any kind of actually-existing technoscientific field. These are not serious organizations. These are not people cited in serious peer reviewed publications. These are not projects with serious grant money at their disposal.

I mean no offense, really, since compared especially to the brainless nutcases who accumulate in orbit around them, Aubrey de Grey, Eric Drexler, and, say, Nick Bostrom (neither Kurzweil nor Yudkowsky even passes muster as a peer of oddball outliers like Drexler or de Grey; Bostrom is the closest you really get to a non-utter-nutjob Singularitarian) are all fairly genial intellectuals who have interesting things to say as often as not. In England there is a fairly robust and attractive tradition that encourages oddball intellectuals and even expects intellectuals to be oddballs. Nick Bostrom certainly isn't as off-the-wall as Wittgenstein was (nor as much a genius either, probably, but who's to say, really, when all is said and done?), and Aubrey de Grey actually even looks quite a bit like Lytton Strachey. All of this is quite par for the course in England.

And just as ancient historians will regularly profess a fond bemused attachment to sword and sandal epics like "Quo Vadis" and "Ben Hur" as part of the initial inspiration that draws no small number of students into their fields, but would never mistake these gorgeous cinematic gargoyles for anything passing muster as the actual practice of the field of history itself, I have no doubt at all that plenty of biochemists and gerontologists will admit a fond debt to the popular handwaving issuing from Drexler and de Grey.

But anybody who thinks these figures are leaders in their fields, or even, frankly, that they manage to inhabit anything close to the consensus in which all the real work in these fields is getting done, is demonstrating through such assertions their own complete ignorance of the actual science at hand.

As I often have occasion to say, superlative futurology is not itself science, but a constellation of faith-based initiatives that opportunistically frame themselves as scientific precisely to yield for their wish-fulfillment fantasies the reality-effect that attaches especially to scientific pronouncements in our own historical moment. The assertions of futurological faith do not function to mobilize instrumental rationality to implement goals in reality, but to substantiate the "reality" of articles of faith in idealized imaginations of "The Future" that do not exist in the present except through the mobilization of moral rationality that solicits shared identification, shared aspiration, the "real substance" of subcultural solidarity (especially the defensive solidarities of marginal sub(cult)ure).

It is at this point that I think many are apt to misunderstand the force of my critique. Although I am an atheist myself, I do not disparage people of faith for the same reasons I do not disparage people whose aesthetic practices of judgment and self-creation differ from my own: It is in my view the substance of freedom to assert moral, aesthetic, ethical, and political judgments to the hearing of the diversity of one's peers without any expectation that one's judgments will be shared or will prevail but only that they should be affirmed as legible as judgments. In offering up our judgments to our peers, and owning up to them (whether they are admired or ridiculed) in the hearing of our peers, we own ourselves, we arise as our own selves, we constitute ourselves as selves in the world. That is the work of freedom in my view, the work of meaning-making among our living peers in an otherwise mineral meaningless existence. To the extent that Robot Cultists are just indulging in a kooky poetical enterprise I have no complaints about their enterprise in the least.

Superlative futurology, as I have often taken pains to point out, shares no small amount of common ground with sf fandoms, and as a queergeek myself I have had plenty of occasion to wallow in speculative space operatic sensawunda. Who needs Yudkowsky when you can be reading A Fire Upon the Deep? Who needs Kurzweil when you can be reading the Dune cycle? I am the last person on earth to chuckle derisively at geeks who gawk at anime, or artist renderings of space elevators, or city-scaled space-freighters in cinematic flight. Let a bazillion flowers bloom, let your freak flag fly, make meaning where and as you would. I'm a silly nerd myself, for heaven's sake.

I personally disapprove of religiosity only when it pretends to scientificity, and I personally disapprove of morality only when it seeks to prevail over politics. It is not the religiosity of fundamentalist formations that makes them pernicious in my eyes so much as their authoritarian policing of facts and moralizing policing of political diversity.

It is precisely in its insistence that it is a kind of scientific practice (indeed, often that it amounts to an urgent championing of True Science against the "anti-science" of "relativists" and "pessimists" who do not share its idiosyncratic taste in "The Future"), and precisely in the curious tendency of its investment in scientificity to yield a politics couched as a "neutral" pre-politics or even an outright anti-politics (always in the service of incumbent interests figured as "natural" interests), that superlative futurology exhibits a chilling kinship with such fundamentalist formations. That the organizational archipelago of Robot Cultism is suffused with would-be gurus and True Believers is a symptom of the underlying rationality of futurology itself; it is not -- as the Robot Cultists themselves rationalize in the face of this sort of observation -- simply a matter of an unfair generalization from "extreme" but "unrepresentative" (of course!) sub(cult)ural figures and texts that keep unaccountably cropping up so conspicuously among their number.

But more than this, I think there is an endemic double danger in futurological discourse, not only (first) that it subverts scientificity by stealthing its faith-based initiatives as scientific practice and subverts sensible policy-making by declaring its sub(cult)ural solidarities as developmental deliberation, but also (second) that it subverts freedom itself, the understanding and practice of freedom that is the heart of the political.

If I disparage the notion of "The Future" it is emphatically because I champion what I describe instead as "futurity," by which I mean to evoke the open futurity inhering in the plurality of peers in the present, collaborating and contesting in their diversity the shared world in the present always in the form of presents-opening-onto-presents-to-come.

I believe that notions of "The Future," in whatever forms they take in the mouths of those who imagine themselves to see "It" more clearly than everybody else, and to speak like would-be Priests in "Its" name, are always ideological constructions, always bespeaking a parochial perspective in the present projecting onto the openness of futurity in an effort to domesticate and control that openness, to police and curtail that diversity.

The substance of the gesture of attributing "future-likeness" to ideas in the present or even to the style of artifacts in the present (this is something Daniel Harris has written about in his famous essay on "The Futuristic") is always just the repudiation of the present, often perversely so. The work of this gesture is ultimately political -- even though it typically cloaks itself in the language of pre-political or a-political instrumentality. It is a refusal or disavowal of the demanding substance of politics, plurality and freedom, and an infantile fantasy to substitute for these The One True Way That Ends History and an instrumental amplification of capacities through which the Elect Become Godlike by Eluding Human Finitude.

Robot Cultists cling to the insistence that the superlative outcomes they presumably are "fighting for" (a "work" that ultimately amounts to re-iterating in the presence of fellow Robot Cultists that they do indeed "believe" in "the future" and all its works) are possible in principle, even if they are not practically realizable in the present. That no actually-serious scientists or policy makers share their own preoccupations with non-existing non-proximate medicinal techniques to deliver thousand-year lifespans, or upload minds into computers, or create superintelligent Robot Gods, or create cheap-as-dirt desktop Anything Machines never enters into their reckoning of their superior scientificity. They imagine themselves to be indulging in a scientific enterprise despite the fact that no scientific consensus ever forms around their actual assumptions or models or goals, and so they mistake the equivalent of medieval monks debating the number of angels who can dance on pin-heads for some kind of hard-nosed sooper-scientific practice.

But quite apart from all that, the deeper pathology in play in Robot Cultism is its very specific repudiation of the political, a repudiation signaled by its preoccupation with "The Future" over futurity in the first place. Whatever they want to say about their "fearlessness" for "daring" to dream Big Dreams, it seems to me that their discourse is one saturated by fear -- with "big dreams" that usually look to me more like infantile wish-fulfillment fantasies, testaments to sociopathic alienation from their peers in all their confusing and threatening diversity, damaged denials of their vulnerable error-prone bodily selves, authoritarian pretensions to certainty or the complementary desire to evade the responsibilities of uncertain existence through True Belief in charismatic gurus claiming to hold the Keys to History in the midst of the distress of disruptive change. Politics is a matter of reconciling the indefinitely many logically possible but also logically incompatible aspirations of the diversity of stakeholders in a shared world, peer to peer, and that makes politics prior to technoscience, especially to a "technoscience" evacuated of all practical substance and left with anemic assurances of "logically possible" outcomes.

And all of this is still just circling around the drain of the futurological imaginary, for the substance of the present politics of the superlative futurology of the Robot Cultists is not a matter of working (without ever really working) to bring about "The Future" in which they have invested their fervent faith, but the politics of indulging the delusion that "The Future" is already here, already now, in the eyes of the fellow-faithful, in the ritual re-iterations of Its possibility, Its palpability, Its inevitability. This is the faithful repudiation of fact by means of pseudo-scientific derangements of facticity as such; this is the moralizing identification with "The Future" by means of the anti-political dis-identification with the plurality of their peers in the open futurity of the present opening onto presents-to-come peer-to-peer.

This is the real achievement of superlative futurology.

15 comments:

jimf said...

> [C]ompared. . . to the brainless nutcases who accumulate
> in orbit around them, Aubrey de Grey, Eric Drexler, and,
> say, Nick Bostrom. . . are all fairly genial intellectuals
> who have interesting things to say as often as not.
> In England there is a fairly robust and attractive tradition
> that encourages oddball intellectuals and even expects
> intellectuals to be oddballs. Nick Bostrom certainly isn't
> as off-the-wall as Wittgenstein was (nor as much a genius either,
> probably, but who's to say, really, when all is said and done?),
> and Aubrey de Grey actually even looks quite a bit like
> Lytton Strachey. All of this is quite par for the course
> in England.

There is, of course, a danger in conferring the mantle of
"genius" on every oddball who happens to be unpleasantly
self-aggrandizing.

"I used to have a neighbor who told his wife that he
was the youngest person since Sir Isaac Newton
to take a doctorate at Oxford. The neighbor gave
no evidence of a world-class education, so I looked
up Newton and found out that he had completed
his baccalaureate at the age of twenty-two (like most
people) and spent his entire academic career at
Cambridge. The grandiose claims of narcissists are
superficially plausible fabrications, readily punctured
by a little critical consideration. The test is performance:
do they deliver the goods? (There's also the special
situation of a genius who's also strongly narcissistic,
as perhaps Frank Lloyd Wright. Just remind yourself
that the odds are that you'll meet **at least** 1000 narcissists
for every genius you come across.)"

http://www.halcyon.com/jmashmun/npd/dsm-iv.html


On the other hand, some of the recognized geniuses in history
are not necessarily people you'd have wanted to live
with. Wittgenstein was one, according to his biographers.

Here's another:

The Monster
by Deems Taylor

He was an undersized little man, with a head too big for his body -- a sickly little
man. His nerves were bad. He had skin trouble. It was agony for him to wear
anything next to his skin coarser than silk. And he had delusions of grandeur.

jimf said...

He was a monster of conceit. Never for one minute did he look at the world or at
people, except in relation to himself. He was not only the most important person
in the world, to himself; in his own eyes he was the only person who existed. He
believed himself to be one of the greatest dramatists in the world, one of the
greatest thinkers, and one of the greatest composers. To hear him talk, he was
Shakespeare, and Beethoven, and Plato, rolled into one. And you would have
had no difficulty in hearing him talk. He was one of the most exhausting
conversationalists that ever lived. An evening with him was an evening spent in
listening to a monologue. Sometimes he was brilliant; sometimes he was
maddeningly tiresome. But whether he was being brilliant or dull, he had one sole
topic of conversation: himself. What he thought and what he did.

He had a mania for being in the right. The slightest hint of disagreement, from
anyone, on the most trivial point, was enough to set him off on a harangue that
might last for hours, in which he proved himself right in so many ways, and with
such exhausting volubility, that in the end his hearer, stunned and deafened,
would agree with him, for the sake of peace.

It never occurred to him that he and his doing were not of the most intense and
fascinating interest to anyone with whom he came in contact. He had theories
about almost any subject under the sun, including vegetarianism, the drama,
politics, and music; and in support of these theories he wrote pamphlets, letters,
books . . . thousands upon thousands of words, hundreds and hundreds of pages.
He not only wrote these things, and published them -- usually at somebody else's
expense -- but he would sit and read them aloud, for hours, to his friends and his
family.

He wrote operas, and no sooner did he have the synopsis of a story, but he
would invite -- or rather summon -- a crowd of his friends to his house, and read
it aloud to them. Not for criticism. For applause. When the complete poem was
written, the friends had to come again, and hear that read aloud. Then he would
publish the poem, sometimes years before the music that went with it was
written. He played the piano like a composer, in the worst sense of what that
implies, and he would sit down at the piano before parties that included some of
the finest pianists of his time, and play for them, by the hour, his own music,
needless to say. He had a composer's voice. And he would invite eminent
vocalists to his house and sing them his operas, taking all the parts.

He had the emotional stability of a six-year-old child. When he felt out of sorts, he
would rave and stamp, or sink into suicidal gloom and talk darkly of going to the
East to end his days as a Buddhist monk. Ten minutes later, when something
pleased him, he would rush out of doors and run around the garden, or jump up
and down on the sofa, or stand on his head. He could be grief-stricken over the
death of a pet dog, and he could be callous and heartless to a degree that would
have made a Roman emperor shudder.

jimf said...

He was almost innocent of any sense of responsibility. Not only did he seem
incapable of supporting himself, but it never occurred to him that he was under
any obligation to do so. He was convinced that the world owed him a living. In
support of this belief, he borrowed money from everybody who was good for a
loan -- men, women, friends, or strangers. He wrote begging letters by the score,
sometimes groveling without shame, at others loftily offering his intended
benefactor the privilege of contributing to his support, and being mortally
offended if the recipient declined the honor. I have found no record of his ever
paying or repaying money to anyone who did not have a legal claim upon it.

What money he could lay his hands on he spent like an Indian rajah. The mere
prospect of a performance of one of his operas was enough to set him to running
up bills amounting to ten times the amount of his prospective royalties. No one
will ever know -- certainly he never knew -- how much money he owed. We do
know that his greatest benefactor gave him $6,000 to pay the most pressing of
his debts in one city, and a year later had to give him $16,000 to enable him to
live in another city without being thrown into jail for debt.

He was equally unscrupulous in other ways. An endless procession of women
marched through his life. His first wife spent twenty years enduring and forgiving
his infidelities. His second wife had been the wife of his most devoted friend and
admirer, from whom he stole her. And even while he was trying to persuade her
to leave her first husband he was writing to a friend to inquire whether he could
suggest some wealthy woman -- any wealthy woman -- whom he could marry for
her money.

He was completely selfish in his other personal relationships. His liking for his
friends was measured solely by the completeness of their devotion to him, or by
their usefulness to him, whether financial or artistic. The minute they failed him --
even by so much as refusing a dinner invitation -- or began to lessen in usefulness,
he cast them off without a second thought. At the end of his life he had exactly
one friend left whom he had known even in middle age.

The name of this monster was Richard Wagner. Everything that I have said
about him you can find on record -- in newspapers, in police reports, in the
testimony of people who knew him, in his own letters, between the lines of his
autobiography. And the curious thing about this record is that it doesn't matter in
the least.

jimf said...

Because this undersized, sickly, disagreeable, fascinating little man **was** right all
the time. The joke was on us. He **was** one of the world's greatest dramatists; he
**was** a great thinker; he was one of the most stupendous musical geniuses that,
up to now, the world has ever seen. The world did owe him a living.

When you consider what he wrote -- thirteen operas and music dramas, eleven
of them still holding the stage, eight of them unquestionably worth ranking among
the world's great musico-dramatic masterpieces -- when you listen to what he
wrote, the debts and heartaches that people had to endure from him don't seem
much of a price. Think of the luxury with which for a time, at least, fate rewarded
Napoleon, the man who ruined France and looted Europe; and then perhaps you
will agree that a few thousand dollars' worth of debts were not too heavy a price
to pay for the Ring trilogy.

What if he was faithless to his friends and to his wives? He had one mistress to
whom he was faithful to the day of his death: Music. Not for a single moment did
he ever compromise with what he believed, with what he dreamed. There is not a
line of his music that could have been conceived by a little mind. Even when he
is dull, or downright bad, he is dull in the grand manner. There is greatness about
his worst mistakes. Listening to his music, one does not forgive him for what he
may or may not have been. It is not a matter of forgiveness. It is a matter of being
dumb with wonder that his poor brain and body didn't burst under the torment of
the demon of creative energy that lived inside him, struggling, clawing, scratching
to be released; tearing, shrieking at him to write the music that was in him. The
miracle is that what he did in the little space of seventy years could have been
done at all, even by a great genius. Is it any wonder that he had no time to be a
man?
-------------------

YMMV.

Dale Carrico said...

Dag, Jim, that's a mighty tall wall folks are gonna have to scale if they should feel inclined to address the points in this post! I mean, there's a lot in what you say here, I sympathize with the force of your point, but, golly, ellipses are your friend. Well, it's not like my other respondents won't mostly be put-upon know-nothing Robot Cultists whining about how mean I am or how I'll regret my faithlessness come Tech-Heaven, anyway, so prolly no biggie.

Chad Lott said...

There is something about Aubrey de Grey that is endearing.

When asked what sort of life-extending vitamin pills and whatnot he took to augment his chances of living forever, he said he took none because his wife is a pretty good cook.

He is in a great position to sell some snake oil (like Kurzweil advocates) and doesn't. That alone makes me more apt to listen to him.



Anyway, what's up with that Kim Stanley Robinson class?

ZARZUELAZEN said...

(The crowd rises to its feet, clapping at Dale's speech).

Wowee, that was some speech Dale!

So old Yudkowsky didn't even make your list of 'notable eccentrics' eh? Truth is I can't stand most of the folks in Yudkowsky's orbit, even though I'm sympathetic to the Singularity idea. They seem unable to view people in anything other than purely functional terms (i.e. in terms of people's external decisions or economic output). This is probably connected to the strong prevalence of Libertarian views among them and suggests serious cognitive deficits (a form of brain damage, in other words).

They are operating off a limited, narrow conception of intelligence (rationalistic, instrumental, bayesian intelligence) and seem completely blind to any other kind of intelligence. (Again, all of this hints at serious cognitive deficits).

Nick Bostrom is, as you point out, probably the closest thing to a sane Singularitarian, but even he fell for the Bayesian Reductionist bullshit. Still it must be admitted that Nick does not appear to suffer from the cognitive deficits afflicting most of the Yudkowsky-fanboy crowd.

Yes to the signaling theory: if these Yudkowsky-fanboys are so smart, why aren't they out getting scientific papers published, winning Nobel prizes, raking in funding, designing winning software, etc., etc.? In a great turn of phrase you used earlier: 'I think we all know the answer...' ;)

It's got nothing to do with Singularity/AGI, and everything to do with 'belonging' to a club of like-minded people strong in one particular narrow area of intelligence and fooling themselves into thinking they're really truly smart. The messageboarding/media attention is nothing but a social game of signaling. All smoke and mirrors my dear Dale, all smoke and mirrors. Sad, very sad.

Narcissism is unfortunately the default state on the Internet, and these people are suffering from the worst cases of it we have ever seen.

Mitchell said...

May I propose a crude anti-superlative slogan:

YOU CANNOT ESCAPE FINITUDE, ERROR, OR POLITICS.

If I have to pick sides in this debate, I'm pro-superlative, ultimately, because I think the de Grey/Yudkowsky/Drexler axis is generally right about what technology can foreseeably accomplish. However, the sensibility (for want of a better word) that has developed around it is prone to all sorts of derangement, and as a critic of that sensibility, Dale is the most helpful opponent transhumanism has ever had. And I feel it might help move things along if we had Dale For Dummies, distilled into a single sentence for those with short attention spans or no interest in Hannah Arendt. So here it is again:

YOU CANNOT ESCAPE FINITUDE, ERROR, OR POLITICS.

Dale Carrico said...

Marc, I am glad you enjoyed this essaylet, but I am curious that you declare yourself "sympathetic to the singularity idea" (which one?) without exhibiting any sympathy for anything about it in particular. My other note would be to say that I regard it as a dangerous business to be too quick to decry those with whom we regularly disagree as having "cognitive deficits." That smacks of diagnosis rather than simple good old fashioned abuse (I much prefer the latter) and I regard the reductionist tendency to stealth moral views and aesthetic tastes in medical terms as a dangerous and dishonest thing. There are many neuro-atypical folks who lead perfectly flourishing, functional, and civic-minded lives and we need to take care not to pathologize peers carelessly. I am especially sensitive to this issue precisely because I am a real latecomer to proper sensitivity to this issue and am still myself learning how to navigate it. I think it is perfectly adequate to say of someone like Yudkowsky that he seems to me wrongheaded and extremely foolish and absurdly self-aggrandizing and rather a fraud... I think there is no reason at all to assume that this is a consequence of some "illness" in need of "treatment."

Dale Carrico said...

Mitchell -- so, you are a part of a kind of would-be conservative wing of transhumanism? You want a more modest or go-slow approach to the techno-godhood project? Fascinating.

I think probably most Robot Cultists will not share your enthusiasm for the "contribution" I am making through my relentless critiques of every aspect of their pseudo-scientific and authoritarian sub(cult)ural "movement" to sweep the world and install tech-heaven, but I suppose stranger things have happened.

I do occasionally write bumper stickers rather than extended critiques, and you can find many of them anthologized under the heading in the sidebar "Futurological Brickbats." I hope you find them enjoyable and clarifying.

I must admit I find it a little hard to square the repeated complaints I get from some Robot Cultists that my writing is too dense, difficult, extended and so on with their insistence otherwise that they constitute an elite klatch of soopergeniuses holding the Keys to History, out of step with scientific consensus and political commonsense not so much because they are the crackpots they appear to be to humble atavisms like myself but because they are a scientific avant-garde and Robot Priesthood in whose ears the cosmos speaks plain and in whose minds the coming of the Robot God is palpable. But, hey, I read literature and philosophy, I don't assemble molecular nanomachines on CAD rigs, what do I know, eh?

jimf said...

> . . .just as ancient historians will regularly
> profess a fond bemused attachment to sword and
> sandal epics like "Quo Vadis" and "Ben Hur"
> as part of the initial inspiration. . .

Ah yes, beefcake movies.

A movie aficionado friend of mine refers to a
related genre -- e.g., _Land of the Pharaohs_
(Joan Collins et al.) -- as "tits 'n sand
movies".

;->

Dale Carrico said...

"Billy, do you like movies about gladiators?"

Michael Anissimov said...

I use the above quote myself a lot of the time in real-life conversation, btw.

Dale, you have to realize that Marc is essentially an "outcast" from the SL4 and transhumanist community ("depersoning", in your paranoid language, but really he is just annoying as fuck, seriously), and believes in all the superlative stuff you critique, except he likes your critiques because you specifically focus on Eli and the SIAI crowd a lot, which he dislikes intensely because he appears to be a lonely nerd with no job and no life who has a years-long grudge. (What a lame way to live, really.) He thinks his *own* AI theory will lead to seed AI and thereby a Singularity.

Eliezer sometimes takes himself too seriously, it is true. I used to see him as a guru figure when I discovered his writings when I was 17. I also viewed Aubrey and Eric as such, but then I eventually met them all and now see them as normal human beings. We have to remember that Aubrey's supporters, Eric's supporters, and Eliezer's supporters are also all part of different yet overlapping cliques. The reason why I hesitate to call them subcultures is that there is minimal social pressure to conform to a certain set of standard ideas. In fact, we welcome diversity. It is sort of annoying and boring how you (and James Hughes btw) sometimes say, "well, maybe Anissimov has more of a clue than the others or some shit, he buys into progressive politics somewhat and is probably just a good-hearted youngster deceived by the feverish spinning of one Eliezer S. Yudkowsky", but this critique is lame because everyone who knows me knows that I criticize Eliezer whenever I feel like it and openly disagree with him on a number of issues.

Blah blah blah, what else. Instrumental vs. moral BS. I prefer only to invoke the moral nonsense as a tool to inspire instrumental action. Not everyone who cooperates or works with AI is even necessarily a Singularitarian in our sense of the word. Our org is big and legitimate enough that we are now interfacing with the mainstream.

You attribute all sorts of slyness, cleverness, and spinning to me and my writing when in reality I am too lazy to front. Everything I say is my honest opinion. No one in SIAI is spinning things round and round. We are either totally insane and really believe everything, or totally sane -- not sane and simultaneously deceptive in a Machiavellian way. Another aspect of our subculture that you hesitate to mention is that we are obsessed with categorizing and observing flaws in ourselves, sometimes even to excess. Our obsession with self-questioning is what has led me to read your blog relatively regularly despite your sometimes mean-spirited attributions to "Robot Cultists" that slightly (sometimes, but not as much as it used to) hurt my feelings, because you seem to be attributing qualities to us, or even more specifically to me, that I swear ain't there. Deliberate deception or serving "incumbent interests", for instance.

How do superlative discourses help incumbent interests, btw? "Incumbent interests" I would guess are things like gas, oil, and military contracting companies? How the hell does talk about Friendly AI support these incumbent interests more than right-wingers actively lobbying for fewer restrictions on business?

All the superlative organizations you fear have limited influence and limited funds. If our philosophy is as hollow as you propose, there is little danger of our influence growing, or these ideas "retaining their rigor", as Richard Jones put it.

But particularly, I must disagree with you that pointing out that aging is an engineering challenge is unhelpful to medicine. Most gerontologists look at aging as a mystical, unstoppable force, which has negative implications for quality of life for everyone. Attacking the biochemical foundations of aging is a medical research path with high potential returns on investment, even if it never leads to an indefinitely extended lifespan. Mainstream gerontology really is blind to this approach, and it shouldn't be.

Dale Carrico said...

Instrumental vs. moral BS. I prefer only to invoke the moral nonsense as a tool to inspire instrumental action.

You are quite free to think the distinction "BS," Michael; indeed, I expect it of you. You can rest assured that I take enormously seriously the folly of those who mistake instrumental amplification for actual freedom.

How do superlative discourses help incumbent interests, btw?

I have made this case countless times, often directly in response to you. What is the point of rehearsing it again if you are so palpably insensible to the case? I believe you recently recommended in your own blog a link to a piece of mine in which I sketched no small number of the reasons for that claim. Is it that you didn't read the text you recommended to your readers' attention or that you no longer retain the substance of the case?

If our philosophy is as hollow as you propose, there is little danger of our influence growing

Well, as someone with a lifetime's devotion to and even a degree in philosophy, I object to your application of that word to describe the interminable infomercial that is futurology. But quite apart from that, this is another question I have answered already many times before. If my answer hasn't satisfied you, what is the point of reiterating it? You either are incapable of grasping it or being moved by it, or (a new possibility occurs to me) remembering it. If you aren't reachable by this time, given the attentions I've devoted to you over the years, I am cheerfully reconciled to judging you not worth reaching -- as is certainly the case with the overabundant majority of people enamored of superlative futurological discourses in my view.

Your discourses really are mostly interesting to me as clarifyingly extreme symptoms of more prevailing reductionist, elitist, eugenicist, neoliberal developmental discourses, as I've also already said countless times if you care to grasp or recall the point.

Most gerontologists look at aging as a mystical, unstoppable force

Oh, what utter nonsense. Medical researchers and healthcare providers solve actual problems as they can; they aren't bolstered by Robot Cult hyperbole or hampered by a malign "deathism." As if humanity hasn't been immortalized to wallow forever in robo tech-heaven only because so many of the rest of us lack the boastful can-do assertiveness of futurological fanboys in a circle-jerk. It is hard to imagine anything more ridiculous than the lot of you, honestly.

You attribute all sorts of slyness, cleverness, and spinning to me and my writing when in reality I am too lazy to front.

Yes, I begin to see I was hasty in attributing cleverness to you after all.

As for Yudkowsky or this Marc person, or whoever else is positioned however within the various sectarian squabbles of the Robot Cultists, you can be sure I am quite happy to regard you all as more or less equally clownish figures.

jimf said...

Michael Anissimov wrote:

> All the superlative organizations you fear have limited
> influence and limited funds. If our philosophy is as hollow
> as you propose, there is little danger of our influence
> growing, or these ideas "retaining their rigor", as
> Richard Jones put it.

"If SIAI does not scare the crap out of you, you don't take us
seriously. SIAI has scared the crap out of me since late 2000 when I first
realized it was theoretically possible for someone to screw up the
Singularity. For that matter, we should scare the crap out of you even if
you think we have a 100% probability of success on FAI."

Re:SIAI has become slightly scary
http://www.sl4.org/archive/0406/9076.html