"[n]ot "eternal life", [but] the indefinite prolongation of life. Aubrey de Grey does a good job of arguing this point in all his talks. And don't tell us that we don't know what's it like to be alive. Honestly, I'm not easily angered, but eventually the constant attacks (like this) piss me off.
I'm afraid that this little effort at word magic doesn't get our techno-immortalist friends off their particular painful fish-hook.
Every successful remediative therapeutic intervention manages to prolong healthy lifespan for the one whose condition it heals. If that's what you mean by "the indefinite prolongation of life" then you should just call what you mean healthcare like everybody else in the world already does and be done with it.
Nobody has to join a Robot Cult to affirm the value of healthcare.
But if by the "indefinite prolongation of life" you really mean the accomplishment of a discretionary mortality more or less under human control, or a mortality so statistically negligible as to no longer trouble the sleep of the mortally-afraid, then you should just call what you mean immortality like every other charlatan making the same promise has always done and be done with it.
Robot Cults are enormously useful to charlatans peddling eternal life to the fearful.
It isn't that hard to clear the waters that flim-flam artists like to muddy for the marks.
Barring climate catastrophe, neoliberal economic breakdown, or neoconservative military conflagration, I too expect emerging and proximately upcoming medical therapies to continue to intervene in hitherto customary capacities and limits, at least for some lucky people, including interventions into at least some of the conditions we presently associate with what is somewhat folkishly designated as "the aging process."
I don't expect a "longevity singularity" -- the demographic moment when average life-expectancy increases by one year per year -- to arrive as soon as my transhumanist readers do (but I could certainly be surprised without being too surprised by this expectation). More to the point, neither do I think the achievement of this longevity singularity should be a higher priority than treating neglected diseases in the overexploited world or treating the conditions of the actually elderly in the world, as I daresay most of my transhumanist readers do.
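To make the arithmetic of that threshold concrete, here is a minimal sketch in Python (the language is an arbitrary choice, and the life-expectancy figures are entirely hypothetical); all it checks is whether the year-over-year gain in average life expectancy reaches one year per calendar year.

    # Hypothetical average remaining life expectancy (in years), by calendar year.
    # The figures below are illustrative only, not real demographic data.
    life_expectancy = {
        2020: 24.0,
        2021: 24.3,
        2022: 24.7,
        2023: 25.8,  # a gain of 1.1 years in a single calendar year
    }

    years = sorted(life_expectancy)
    for prev, curr in zip(years, years[1:]):
        gain = life_expectancy[curr] - life_expectancy[prev]
        status = "at or past the threshold" if gain >= 1.0 else "below the threshold"
        print(f"{prev}->{curr}: gain of {gain:.1f} years per year ({status})")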
But what is key to grasp here in my view is how different this discourse of mine is from their own, even if I share with them a certain acceptance of the possible (even likely) significance of emerging genetic, prosthetic, and cognitive modification techniques. There is nothing in what I say that would lend comfort to those who would fearfully deride the finitude of the human condition.
I fully expect the play of actual and available and legible genetic, prosthetic, and cognitive modifications to express the historical complexities of human plurality, wants, passions, and violations. It will no doubt exacerbate many given injustices and provide creative recourse to many who would overcome historical deadlocks. It will not transcend nor will it circumvent the impasse of divers stakeholder politics, but constitute the field in which that politics plays out in the world. It will not transcend nor will it circumvent the basic dilemma of human finitude in the face of the openness of human freedom and futurity.
I offer no comfort to those who would disguise their disgust at human vulnerability in a denialist championing of techno-immortality. I offer no comfort to those who would disguise their disgust at human variation in a genocidal championing of hygiene or optimality. I offer no comfort to those who would disguise their disgust of human freedom in an anti-political championing of technocracy or any parochial future owned here and now by a tribe of the few.
And, by way of conclusion, let me return to Michael's little fit of pique there at the end: And don't tell us that we don't know what's it like to be alive. Honestly, I'm not easily angered, but eventually the constant attacks (like this) piss me off.
Life's a vulnerable metabolic process in a demanding finite environment, and not a perpetual motion machine. You can pretend I'm gratuitously insulting you when I say you haven't taken into account what all that actually means at a basic level when you claim to anticipate some imminent techno-immortalization, but it isn't that hard to grasp the force of my actual point on its actual terms. I don't doubt in the least that you are getting angry at my "constant attacks" at this point, but I venture to suggest that this is as much the anger of exposed fraud or questioned True Belief as anything else, and I cannot honestly promise you that my arguments to come will be any more comforting to you. Your options will remain to leave my critiques unanswered and pay the price of that, or to respond to my critiques on their actual terms and pay the price of that.
29 comments:
What was that blog that offered critiques of transhumanism and Ray Kurzweil? It featured a post about life expectancy increases and how, if you adjust for early-age causes of death, geriatric care has added very little lifespan.
Basically, if you didn't contract a communicable disease or die in violent conflict, you stood a reasonable chance of reaching 60 or 70, even hundreds of years ago. So today it's 80. Not that impressive.
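The arithmetic behind that point is worth spelling out: life expectancy *at birth* is an average dragged down by deaths in infancy and youth, so eliminating those early deaths raises the average dramatically even if old age itself is barely extended. A minimal sketch in Python, using entirely made-up cohort numbers:

    # Hypothetical cohorts of 100 people each; the ages at death are invented
    # purely to illustrate how the average responds to infant mortality.
    premodern = [1] * 30 + [70] * 70   # 30 die in infancy, the rest reach ~70
    modern = [70] * 98 + [80] * 2      # early deaths mostly eliminated

    def mean(ages):
        return sum(ages) / len(ages)

    def mean_of_adults(ages, cutoff=20):
        adults = [a for a in ages if a >= cutoff]
        return sum(adults) / len(adults)

    print("premodern: %.0f at birth, %.0f if you survive childhood"
          % (mean(premodern), mean_of_adults(premodern)))
    print("modern:    %.0f at birth, %.0f if you survive childhood"
          % (mean(modern), mean_of_adults(modern)))
    # The at-birth average jumps from roughly 49 to roughly 70, while the
    # adult-survivor average barely moves -- which is the commenter's point.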
as I daresay most of my transhumanist readers do
I don't, and I don't think he does.
I offer no comfort to those who would disguise their disgust at human vulnerability in a denialist championing of techno-immortality. I offer no comfort to those who would disguise their disgust at human variation in a genocidal championing of hygiene or optimality. I offer no comfort to those who would disguise their disgust of human freedom in an anti-political championing of technocracy or any parochial future owned here and now by a tribe of the few.
Deaf people suck, blind people suck, politics is inefficient, the Singularity will arrive in five years, technology will make us immortal, etc., etc. You are a luddite.
("!!!21121111eleven...")
Well Dale, I'm not really sure how this, which you are calling "immortality" and dismissing as charlatanry
the accomplishment of a discretionary mortality more or less under human control, or a mortality so statistically negligible as to no longer trouble the sleep of the mortally-afraid, then you should just call what you mean immortality like every other charlatan making the same promise has always done and be done with it.
differs from this
the arrival of a "longevity singularity" -- the demographic moment when average life-expectancy increases one year per year
which you think is at least possible, though not likely to arrive as soon as the transhumanists expect. I also tend to think that at some level SENS research and traditional assaults on developing-world diseases will merge, since the same nanotech that can extend life in the rich world can hunt down and destroy parasites in the poor world. Of course, if what we're talking about is diseases that have actually already been cured but are still rampant in the poor world because money is scarce, there's no clean water, rich-world industries are dumping pollution, etc., then that's not a technical/research problem at all but a political one. And of course if the nanobots or synthetic lifeforms (an exciting new area) are controlled the way AIDS drugs are, then the same political problems of getting the technology to the poor people who need it will still prevent the poor from benefiting.
m said:
What was that blog that offered critiques of transhumanism and Ray Kurzweil?
http://www.infeasible.org/
Even if a longevity singularity were to arrive -- and I'm skeptical that it will arrive, and even more skeptical that it will arrive any time soon -- I wouldn't expect it to be endlessly prolongable once it did arrive. "Larger social implications" rarely equal techno-utopian transcendence, as I seem to have been clumsily hammering away over and over all the livelong day today.
Re: "Your options will remain to leave my critiques unanswered and pay the price of that, or to respond to my critiques on their actual terms and pay the price of that."
Oh my, I am scared!
And what are your "actual" terms?
You say everything and the contrary of everything to prove that transhumanists are jerks.
You repeat cheap Sunday school assumptions like a broken record, and refuse to discuss their validity.
You claim that others said things that they have never said and, when presented with evidence to the contrary, you grasp at straws and use cheap verbal tricks to confuse the reader.
You reject statements that others make about their own motivations, and always claim that you know better.
You attack others based on (your perception of) their identity, and always disregard what they actually say. On the other hand you claim that you are not "identity-focused". Just give me a break, _you_ are the one who tries to turn scientific and societal debates into identity-based religion wars.
Compared with rational and moderate positions, like those expressed by Jamais, you sound like a hysterical fundamentalist preacher who screams that Thou Shalt Not Fly Because The Book Says So.
These being your terms, the only possible way to "respond to your critiques on their actual terms" is to refer, as I may have done on occasion, to your non-arguments as the mental masturbation of a useless armchair philosopher or third-rate TV preacher. But of course you don't like that (the taste of one's own medicine can be bitter).
Seriously, I think some of the valid points that you make, and I do concede that you make many valid points, would be much stronger if you could leave the histrionics aside.
Somebody needs his diaper changed.
Re: "Somebody needs his diaper changed"
I did not intend to put it in these terms but, if you say so...
Whose turn is it to change Dale's diaper?
Just kidding. We love you Dale, though you have cemented your role as l'enfant terrible of transhumanism.
You said something, Dale, that seemed to me rather odd. Let me comment on that:
"I offer no comfort to those who would disguise their disgust at human vulnerability in denialist championing of techno-immortality."
Perhaps I'm misunderstanding. If I were inclined to jump to far-fetched conclusions about this, I might think you were suggesting we all "lie down and die," as it were. Let's forget about the "denialist," "techno-immortality" bits, for now. I thought it was a "disgust at human vulnerability" that drove most real "progress?" (scare quotes for tentativeness of definition) I know "lie down and die" is not what you're meaning to say, but I'm not certain what you are saying. Surely if we didn't care about the shitty aspects of life we'd never do any social works to begin with. Perhaps I am being foolish to ask this of you, but what do you think is the primary distinction between genuine techno-progressive entrepreneurship, philanthropy, and superlative "denialism?"
Perhaps I'm misunderstanding. If I were inclined to jump to far-fetched conclusions about this, I might think you were suggesting we all "lie down and die," as it were.
Perhaps you are indeed misunderstanding. For if I were inclined to jump to "far-fetched conclusions" (in the sense of the expression "far-fetched" as it is actually used by non-insane users of the English language), I would think you were suggesting that people hitherto have been mortal just because they have been resigned in some way to "lying down and dying?" I would think you were suggesting that people hitherto have been mortal because they lack the "can-do" attitude of techno-immortalists?
Let's forget about the "denialist," "techno-immortality" bits, for now. I thought it was a "disgust at human vulnerability" that drove most real "progress?"
Really? I'm trying to understand where you are coming from here. Do you think, for example, that engineering is an expression of disgust at the fact of gravity, rather than an effort to accomplish instrumental ends in the face of given constraints? Solving shared problems is not an existential revolt against the basic facts of human finitude.
Perhaps I am being foolish to ask this of you, but what do you think is the primary distinction between genuine techno-progressive entrepreneurship, philanthropy, and superlative "denialism?"
Superlativity is an essentially religious attitude (which no doubt could be a fine source for aesthetic meaning and inspiration in its proper place for those who connect to it) in my view, but insanely misapplied to technical or political ends to which it is profoundly unsuited.
The pursuit of universal and consensual healthcare is not about techno-immortalism, facilitating the growth of knowledge is not about coding artificial intelligence, democratizing stakeholder politics is not about a quest for techno-abundance.
Is that clear enough for you yet?
Hmmm, yes, thank you very much.
It looks like a largely semantic misunderstanding. Perhaps I would not have used the phrase "disgust at human vulnerability" (as I probably shouldn't have used the potentially misleading phrase "lie down and die"), but I would have defined the very act of solving shared problems as arising from an attitude of, "why can't this be better than it is now," which seems to me to arise, albeit implicitly, from an attitude of minor indifference to "human vulnerabilities."
I don't suppose that engineers, for instance, sit around before a construction project thinking to themselves, "F*** gravity, we'll show them!" but, that very aspect of seeking the instrumental utility of, for instance, building an extra level on top of a structure, seems to me to evince such an implicit contempt for the difficulty involved in doing so (which I would consider to be a facet of human finitude).
I understand the injuriousness of superlativity in this context (I was merely confused about your usage of the term denialism, I suppose), and I agree with your critique of transhumanism in this regard, although I have probably injected my own biases into the way I interpreted your statement.
For instance, in my own various discussions on the subjects of bioethics, I have had no problem in instances where somebody says, for instance, "No, I don't think so. Sounds unlikely," but I have a particular annoyance when people say or imply something along the lines of, "No, you shouldn't even try," as a matter of principle, unless there is genuinely practical reasoning in support of it. As I said (obtusely), I was uncertain if you had meant to imply the latter or not, regarding my interpretation of the phrase in question (though as I also said, I found that very unlikely, however, the wording of that phrase was significant to me due to the aforementioned biases). Thanks again for taking the time to clear it up, though.
Great post, JM Inc.
"Fuck gravity!" is the right attitude for an engineer, and every creative engineer knows the intellectual orgasm after finding an effective way to do it.
Of course every good engineer also knows that in order to fuck gravity one must know and understand it up to the degree and level of detail appropriate to the specific objective sought. Otherwise, of course, it is gravity who will fuck him.
Good engineers do not try to fuck gravity on the first date, but go through all the moves with great care. But "Fuck gravity!" is a good basic attitude.
So fuck gravity, fuck aging, fuck disease, fuck death, fuck all limits, and fuck all luddite fundamentalists who stand in the way.
Dale, both "healthcare" and "immortality" seem to be highly misleading and inexact terms when applied to say, SENS (Strategies for Engineered Negligible Senescence) - so why are you suggesting that transhumanists should pick one or the other?
Certainly there have been and are charlatans promising cures for aging. There have been just as many promising cures for cancer. The difference is that these days, cancer is sometimes actually cured, with skilled practitioners and the aid of modern medical technology.
My question is, in the days *before* cancer was curable, would you have damned the people who supported the scientists engaged in researching these newfangled concepts of "chemotherapy" and "radiotherapy" as robo-cultists, charlatans, and techno-utopians trying to beguile people into spending their philanthropic and/or taxpayer dollars on futuristic research rather than on providing proven therapies for those currently unable to afford them (i.e., healthcare)?
If not, then you'd have to admit that there's a difference between the scientists and technologists (and their supporters) who suggest that there are probably cure(s) within reach if one spends enough time and money looking in the directions they've identified, and the charlatans claiming that they already *have* a cure -- if you'll only pay them enough to provide it.
Hmmm. I probably should have proofread that post a bit better.
Also, I think I should point out that in general I agree with you about the problems with technophilia, but I've had a close look at SENS (at least, as close as you can get from New Zealand), and to me it seems plausible - they appear to be doing real science, even if some/many of their supporters *are* the sort of technophiles and retro-futurists you are opposed to.
> So fuck gravity, fuck aging, fuck disease, fuck death,
> fuck all limits, and fuck all luddite fundamentalists
> who stand in the way.
You know, there's a fine line between optimism, enthusiasm,
and zest on the one hand, and outright psychopathology on
the other.
The latter, when it occurs, is usually not just an emotional
imbalance -- it's almost always a cognitive deficit as well.
There's a rich literature on the subject these days.
You might start with
"Narcissistic Leaders: The Incredible Pros, the Inevitable Cons"
by Michael Maccoby
The Harvard Business Review January-February, 2000
"Look around you. Self absorption. Greed. Frivolity.
Social anxiety. Lack of empathy. Exploitation. Abuse.
These are not marginal phenomena. These are the defining
traits of the West and its denizens. The West's is a
narcissistic civilization. It upholds narcissistic values and
penalizes the alternative value-systems. From an early age,
children are taught to avoid self-criticism, to deceive
themselves regarding their capacities and achievements,
to feel entitled, to exploit others. Litigiousness is
the flip side of this inane sense of entitlement. The
disintegration of the very fabric of society is its outcome.
It is a culture of self-delusion. People adopt grandiose
fantasies, often incommensurate with their real, dreary,
lives. Consumerism is built on this common and
communal lie of 'I can do anything I want and possess
everything I desire if I only apply myself to it'."
"It Is My World: Narcissism and Omnipotence"
http://samvak.tripod.com/journal37.html
fuck gravity, fuck aging, fuck disease, fuck death, fuck all limits,
Like that? Is it a case for or against your views?
Well, at least Fyodorov and Tsiolkovsky spoke of thousands, even millions of years. They probably were right. So much for accelerationism.
BTW, fact not mentioned in the Wikipedia and often left out of biographies: Tsiolkovsky was well-known for his denialism of theory of relativity. 8*** the speed of light!
My question is, in the days *before* cancer was curable, would you have damned the people who supported the scientists engaged in researching these newfangled concepts of "chemotherapy" and "radiotherapy" as robo-cultists, charlatans, and techno-utopians trying to beguile people into spending their philanthropic and/or taxpayer dollars on futuristic research rather than on providing proven therapies for those currently unable to afford them?
And yes, having nothing but the "Argument from the Wright Brothers" is still the next best thing to wearing an "I'm a crackpot" T-shirt and tin-foil hat if you want not to be taken seriously by anyone who is actually working on the relevant science or engineering.
Show me that flying machine of yours, please :)
Re: "Tsiolkovsky was well-known for his denialism of theory of relativity. 8*** the speed of light!"
Right, fuck that too. I propose to name the first FTL drive after Tsiolkovsky.
> BTW, fact not mentioned in the Wikipedia and often left out
> of biographies: Tsiolkovsky was well-known for his denialism
> of theory of relativity. 8*** the speed of light!
The late Arthur C. Clarke, whom I practically worshipped
as a pre-adolescent (I created a mini-controversy, and
sparked some real heat from the teacher, when I brought up Clarke's
prediction of personal "immortality" by 2100 [from _Profiles of
the Future_] in a sixth-grade classroom discussion -- yes,
Giulio, I was a rip-snortin' Transhumanist when I was 11,
though the word hadn't yet been invented. I grew up, eventually,
though it took decades.) had some -- quirks -- that I certainly
never knew about before the advent of the Web.
He was apparently a major a**hole, for one thing
http://books.guardian.co.uk/print/0,,333210386-110738,00.html
and at least by the end of his life seems to
have been an out-and-out crackpot:
http://www.findarticles.com/p/articles/mi_m1511/is_n5_v18/ai_19332985
-------------------------------------------------------------
An odyssey of sorts - author Arthur C. Clarke -
Special Issue: The Coming Age of Exploration - Interview
Discover, May, 1997 by Fred Guterl
Recently the 80-year-old Clarke took time out
of his hectic schedule to talk with Discover's
Fred Guterl by telephone from his home in Sri Lanka.
Here are some of the highlights of their conversation. . .
[Guterl:] I'd like to talk to you about your ideas concerning
space travel.
[Clarke:] I've written dozens of books on the subject
and I'm sick and tired of talking about it. I've got
nothing new to add, except I think more and more that
the new space age, and the new everything age, is
linked more and more to the new energy revolution.
[Guterl:] What energy revolution is that?
[Clarke:] For one thing, there is this so-called
cold fusion. Which is neither cold nor fusion. Very few
Americans seem to know what is happening, which is
incredible. It's all over the world, except the
United States. There are hundreds of laboratories
doing it, they've got patents all over the place.
The prototypes are on sale now. There are 7,000 units
operating in Russia right now and no one in the
United States seems to know about it.
-------------------------------------------------------------
Was it senile dementia, I now wonder, or had he **always**
entertained such blatantly fringe beliefs in the teeth
of contrary evidence? (I wouldn't count his speculations on
space travel and colonization as being quite in the same league
as the one about the, er, alternative energy source
that the Russians are using but that America has apparently
been ignoring -- golly, d'ya think we'd have bothered with
Iraq if we'd only listened to ACC about "cold fusion"? :-/ )
He was still at it in 2000:
http://news.bbc.co.uk/1/hi/in_depth/sci_tech/2000/festival_of_science/919953.stm
More forgivably human, perhaps, but still annoying, the socio-cultural
aspects of his "futurism" lagged sorely behind his
techno-boosterism, as exemplified by the fact that he was a
great big fag who **never** came out of the closet.
(Hey, even Richard Chamberlain came out after it was clear
he wasn't going to be making any more money as a hetero
lust-object in the talkies.)
And there's an almost unutterably smug
episode recounted in Julie Phillips' biography of
SF author "James Tiptree, Jr." (Alice B. Sheldon),
in which Clarke comes off particularly badly.
This happened in 1975, more than a decade after
Valentina Tereshkova had gone into orbit for the
Russians. And Christ, it was years after _Star
Trek_ had come and gone. And _2001: A Space Odyssey_,
for that matter.
(pp. 330 - 331):
"The science fiction community as a whole was in an
odd position regarding feminism. On one hand, most
of the writers and fans were men. In 1974, women still
made up less than 20 percent of SFWA's membership.
And most of those men, even those who were using SF
to address other social issues, were still not ready
to question gender relationships. The "rocket jocks"
(who had also hated the New Wave) insisted women
couldn't write real, "hard" science fiction and
probably shouldn't even be reading it. Other men
were more open in theory, but had trouble understanding
the problem.
Arthur C. Clarke, for example, had recently sent a
letter to the editor of _Time_ magazine agreeing with
astronaut Mike Collins. Collins had told _Time_ that
women could never be in the space program, since in
zero G a woman's breasts would bounce and keep the men
from concentrating. Clarke proudly claimed he had
already predicted this "problem." In his novel
_Rendezvous with Rama_ he had written, "Some women,
Commander Norton had decided long ago, should not
be allowed aboard ship: weightlessness did things
to their breasts that were too damn distracting."
When Joanna Russ tried privately to explain why this
was insulting, Clarke, responding publicly in the
SFWA newsletter, asked why Commander Norton shouldn't
be attracted to women -- didn't Russ want him to be?
He added that though some of his best friends
were women, the level of discourse of the "women's
libbers" clearly wasn't helping their cause.
The whole exchange appeared in the _SFWA Forum_ in
February and March 1975. It drew a storm of comment
from all directions, most of it expressive of how
new feminism was to most men and how automatically
many reacted by kicking slush. The newsletter's
editor, Ted Cogswell, illustrated an issue with
pictures of naked women -- intended, he said, as a
joke. [SF author] Suzy Charnas informed him that
this kind of "joke" was aggression disguised as
humor. Some of the letters, from men and women,
were open and intelligent, but even the more reasonable
men often reduced the argument to the sexual or
the physical, as if all sexism was about was, as
one man put it, the shape of a person's plumbing.
On the other hand, the SF community had a great deal
invested in the idea of tolerance. It was, and is,
in principle sympathetic to all who feel themselves
different. Science fiction itself is, at its best
moments, a literature of difference, alienation,
change. Russ said in _Khatru_ that she wrote it
because she could make it hers. 'I felt that I
knew nothing about "real life" as defined in college
writing courses (whaling voyages, fist fights, war,
bar-room battles, bull-fighting, &c.) and if I
wrote about Mars nobody could tell me it was (1) trivial,
or (2) inaccurate.'"
:-/
"-- yes,
Giulio, I was a rip-snortin' Transhumanist when I was 11,
though the word hadn't yet been invented. I grew up, eventually,
though it took decades"
I knew you were almost one of us, Jim! You are a lost sheep, but perhaps you can still Save Your Immortal (oops, indefinitely-lived) Mindfile! You need to trust the Robot God, and let Him heal your soul.
Seriously. I also was a transhumanist child, with very similar stories to tell. I was also a warm, kind and friendly child. I grew up, eventually, though it took decades. Now I am as mean and selfish as most people our age. In this respect, I don't think growing up was such a good thing.
"in zero G a woman's breasts would bounce and keep the men from concentrating"
Well, last time I checked, men also had things that would bounce in zero G and keep the women from concentrating.
I have a great respect for feminists, especially for those with some sense of humour.
> Tsiolkovsky was well-known for his denialism of theory of relativity. . .
They say Ayn Rand didn't think much of it, either.
I have no idea how much she actually knew about what it means,
but you **know** she wouldn't have liked the name. ;->
Apparently, she disapproved of all 20th-century physics --
thought it had become "corrupted" by the virus of modernism
that was infecting all of Western civilization.
> [L]ast time I checked also men had things that would bounce
> in zero G and keep the women from concentrating. . .
Speaking of which. ;->
There's a story -- I think in Barbara Branden's _The Passion
of Ayn Rand_ -- recounted by one of Rand's friends: Joan
Blumenthal, I think.
Anyway, the two ladies had gone to the ballet in New York,
where Rudolf Nureyev was performing. Blumenthal became
annoyed at Nureyev's on-stage antics (apparently he
was mincing it up and flirting with the guys in the
corps de ballet), and she mentioned this to Rand, saying
something like "I wish he'd stop camping it up and
play the program straight." Rand couldn't imagine what
her friend was talking about, so Blumenthal told
her bluntly that Nureyev was known to be a homosexual.
At this, Rand scoffed, and indicating Nureyev's, er,
codpiece, commented "With a package like that, there's
**no way** he's queer."
Rand's friend bit her lip, and contained her mirth.
Dear anonymous, the "argument from the Wright Brothers" is all that is needed to attack the equally sophisticated "It simply can't be done, end of story" argument. Of course Dale's areguments are (usually) much more sophisticated than that.
In the specific case of SENS, cancer is a much more sophisticated analogy. It was/is a really tough problem to solve with the knowledge and resources available at the time. They certainly haven't *completely* solved it but they do keep on improving.
Likewise, curing aging isn't going to be a matter of "wow, the thing actually got off the ground, they built a flying machine". The aging problem, which of course includes cancer as a sub-problem, is exponentially harder, but SENS researchers have exponentially greater resources available to them, and if they can show some real progress then the public will demand that they get given public funding, which would make an enormous difference. So why shouldn't they have a crack at it? The risk/reward ratio has got to be better than most things people waste their time and money on.
As I said, I've dug into the nitty gritty of what they're saying on sens.org, and it appears to be legit. I'm pretty skeptical about most things, I can spot a crackpot a mile off, and Aubrey, despite his appearance, does not appear to be a crackpot.
Oh and when I said "Exponentially harder" I meant exponentially harder than curing (some types of) cancer (for a while), compared to which, building a flying machine is exponentially easier, even if it seemed impossible at the time.
Seth Wagoner wrote:
> I'm pretty skeptical about most things, I can spot a crackpot
> a mile off, and Aubrey, despite his appearance, does not appear
> to be a crackpot.
Well. I know very little about de Grey -- I've never read anything
by him, I've never heard him speak, and I've certainly never met him.
However, it would seem to me to be, at the very least, a bit disquieting
that he (or his promoters, with his tacit consent) seems to have
traded on the **ambiguity** surrounding his association with
Cambridge University.
See, e.g.,
http://www.cryonet.org/cgi-bin/dsp.cgi?msg=23641
http://blog.infeasible.org/2007/11/19/aubrey-de-grey-just-the-computer-guy-after-all.aspx
http://www.youtube.com/watch?v=UC_DMxxa4sM&feature=related
In a similar way, Eliezer Yudkowsky, who left school after
8th grade, is often described as an "artificial intelligence
researcher" and has even, on occasion, been given an
"honorary" doctorate by articles mentioning his name.
I do not accuse either of these two folks of fabricating
credentials or biography, but the fact that notorious crackpots
(L. Ron Hubbard comes to mind here) have done just that, makes
even the suggestion of "trading up" on one's celebrity
(if one gets mentioned enough, one ends up as "Dr. So-and-so")
more than a little unsavory, don't you think?
Likewise, curing aging isn't going to be a matter of "wow, the thing actually got off the ground, they built a flying machine".
Well, neither was the building of flying machines themselves. Although the Wright brothers were the first to demonstrate a powered, controllable, manned, heavier-than-air flying machine, they were neither pursuing this goal alone, nor was that the first flying machine they had built. (IIRC there were three piloted gliders and one kite-like prototype.) They had extensive correspondence with other gliding enthusiasts, and even showed one of their gliders to Octave Chanute.
That's a more or less common pattern: what to laypeople seems to be the sudden appearance of disruptive innovation is gradual evolution for the "disruptive innovators" themselves (early rocketry being another good example).
Now, how many people actually _doing_ computer science, or nanochemistry, or gerontology are expecting strong AI, assemblers, or
bio-engineered "immortality"? And what sort of tangible results can proponents show for their claims? Is there a trend for these results to improve, and improve due to better understanding of what we are doing, not because of availability of more brute force? (As famous Internet joke goes "With sufficient thrust, pigs fly just fine. However, this is
not necessarily a good idea. It is hard to be sure where they
are going to land, and it could be dangerous sitting under them
as they fly overhead.")
Dale,
Interesting that M. Anissimov looks visibly older in his latest photos -- the kid's getting old. So is E. Yudkowsky. The human body peaks at 20. Have no doubt, Yudkowsky's brain cells are now dying. A few more years and his intellectual capacity will be noticeably reduced. The Singularitarians appear to be only mortal after all...
None of the SL4ers' fantasies have come true. In 2002, they pranced and strutted around like mad peacocks, telling me loudly and proudly that they 'had all the answers to super-AI' and that 'only a few patterns need to be filled in' by those (themselves, of course) with super-high IQs.
Of course, 6 years later, as I noted earlier, crappy interfaces (keyboards), verbose, plodding programming languages ('Java'), and buggy operating systems are *still* the order of the day.
Giulio Prisco wrote:
> I knew you are almost one of us Jim! You are a lost sheep,
> but perhaps you can still Save Your Immortal (oops,
> indefinitely-lived) Mindfile! You need to trust the Robot God,
> and let Him heal your soul.
You know, Giulio, in a sense I am still a transhumanist.
But only in an Iain M. Banks sense (or perhaps in an
Olaf Stapledon sense) -- and they are most definitely
**not** what is meant when one sees the word "transhumanism"
on the Web.
"The market is a good example of evolution in
action; the try-everything-and-see-what-works
approach. This might provide a perfectly morally
satisfactory resource-management system so long
as there was absolutely no question of any sentient
creature ever being treated purely as one of
those resources. The market, for all its (profoundly
inelegant) complexities, remains a crude and essentially
blind system, and is - without the sort of drastic
amendments liable to cripple the economic efficacy
which is its greatest claimed asset - intrinsically
incapable of distinguishing between simple non-use
of matter resulting from processal superfluity
and the acute, prolonged and wide-spread suffering
of conscious beings.
It is, arguably, in the elevation of this profoundly
mechanistic (and in that sense perversely innocent)
system to a position above all other moral, philosophical
and political values and considerations that humankind
displays most convincingly both its present intellectual
immaturity and - through grossly pursued selfishness
rather than the applied hatred of others - a kind of
synthetic evil.
Intelligence, which is capable of looking farther
ahead than the next aggressive mutation, can set up
long-term aims and work towards them; the same amount
of raw invention that bursts in all directions from
the market can be - to some degree - channeled and
directed, so that while the market merely shines (and
the feudal gutters), the planned lases, reaching out
coherently and efficiently towards agreed-on goals.
What is vital for such a scheme, however, and what
was always missing in the planned economies of our
world's experience, is the continual, intimate and
decisive participation of the mass of the citizenry
in determining these goals, and designing as well
as implementing the plans which should lead towards
them."
-- Iain M. Banks, "Some Notes on the Culture"
http://nuwen.net/culture.html
Ain't it funny that while the "official" transhumanists
(I'm thinking of the highly-influential Eliezer Yudkowsky/
Michael Anissimov axis here) utterly pooh-pooh the
notion that evolutionary processes (caricatured above
as the "try-everything-and-see-what-works approach")
**must** continue to play some role in both the creation
and ongoing operation of an intelligent system, natural
or artificial (so say Gerald M. Edelman, Jean-Pierre
Changeux, et al.) and continue to beat the dead GOFAI
horse in their emphasis on the top-down imposition
of "goals" (by "intelligence, which is capable of looking
farther ahead than the next aggressive mutation"),
they're totally amenable to the dog-eat-dog Ayn Randian
market fundamentalism of the Extropians and their
fellow-travelers. Go figure.
More "Notes on the Culture":
"The Culture is quite self-consciously rational, skeptical,
and materialist. Everything matters, and nothing does. Vast
though the Culture may be - thirty trillion people, scattered
fairly evenly through the galaxy - it is thinly spread,
exists for now solely in this one galaxy, and has only
been around for an eyeblink, compared to the life of the
universe. There is life, and enjoyment, but what of it?
Most matter is not animate, most that is animate is not
sentient, and the ferocity of evolution pre-sentience
(and, too often, post-sentience) has filled uncountable
lives with pain and suffering. And even universes die,
eventually. . .
Philosophically, the Culture accepts, generally, that questions
such as 'What is the meaning of life?' are themselves meaningless.
The question implies - indeed an answer to it would demand -
a moral framework beyond the only moral framework we can
comprehend without resorting to superstition (and thus abandoning
the moral framework informing - and symbiotic with - language itself).
In summary, we make our own meanings, whether we like it or not. . .
This is where I think one has to ask why any AI civilization -
and probably any sophisticated culture at all - would want to
spread itself everywhere in the galaxy (or the universe, for that
matter). It would be perfectly possible to build a Von Neumann
machine that would build copies of itself and eventually, unless
stopped, turn the universe into nothing but those self-copies,
but the question does arise; why? What is the point? To put it
in what we might still regard as frivolous terms but which the
Culture would have the wisdom to take perfectly seriously,
where is the fun in that?
Interest - the delight in experience, in understanding - comes from
the unknown; understanding is a process as well as a state, denoting
the shift from the unknown to the known, from the random to the
ordered... a universe where everything is already understood perfectly
and where uniformity has replaced diversity, would, I'd contend,
be anathema to any self-respecting AI.
Probably only humans find the idea of Von Neumann machines frightening,
because we half-understand - and even partially relate to - the obsessiveness
of the ethos such constructs embody. An AI would think the idea mad,
ludicrous and - perhaps most damning of all - boring.
This is not to say that the odd Von-Neumann-machine event doesn't crop
up in the galaxy every now and again (probably by accident rather
than design), but something so rampantly monomaniac is unlikely to
last long pitched against beings possessed of a more rounded wit,
and which really only want to alter the Von Neumann machine's software
a bit and make friends....
Philosophy, again; death is regarded as part of life, and nothing,
including the universe, lasts forever. It is seen as bad manners to
try and pretend that death is somehow not natural; instead death is
seen as giving shape to life. . .
None of this, of course, is compulsory (nothing in the Culture is
compulsory). Some people choose biological immortality; others have
their personality transcribed into AIs and die happy feeling they continue
to exist elsewhere; others again go into Storage, to be woken in
more (or less) interesting times, or only every decade, or century,
or aeon, or over exponentially increasing intervals, or only when it
looks like something really different is happening....
Megalomaniacs are not unknown in the Culture, but they tend to be
diverted successfully into highly complicated games. . .
The way the Culture creates AIs means that a small number of them suffer
from similar personality problems; such machines are given the choice of
cooperative re-design, a more limited role in the Culture than they
might have had otherwise, or a similarly constrained exile. . ."
It pleases me that Banks, and Greg Egan, and Bruce Sterling,
and even William Gibson, are still the primary purveyors of these SFnal
memes to the general educated reading public.
Not that backwater libertopian/techno-apocalyptic clique, highly visible
on the Web but largely invisible at Barnes & Noble (except in the chapter
epigraphs of a certain not-terribly-popular SF author) that
calls itself "transhumanist". Bleh!