Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Tuesday, April 28, 2009
Superlativity Exposed
Upgraded and adapted from the Moot:
I hate to break it to you, but these figures you like to cite as your authorities -- Kurzweil, Drexler, Moravec, even enormously likable fellows like de Grey (and don't even get me started on that atrocity exhibition Yudkowsky) -- are quite simply not taken seriously outside the small circle of superlative futurology itself, at least not for the claims you are investing with superlative-endorsing significance.
Scientists rightly and reasonably cherish outliers; they benefit from provocation, and at their best they will give a serious hearing to the extraordinary so long as it aspires to scientificity. But there is a difference between this appreciation and the actual achievement of the standard of scientific consensus, just as there is a difference between the achievement of a popular bestseller and that of passing muster as science.
Ever heard of a citation index? You claim to care about facts above all. Well, citation indexes tell a story about the relation of superlativity to scientific consensus that there is no denying if you are truly the reality-based person you want to sell yourself as.
You can't claim at once to be a paragon of science while eschewing its standards.
You simply can't.
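If one actually wanted to run the check the citation-index paragraph gestures at, here is a minimal sketch -- assuming the public Crossref REST API and Python's requests library, neither of which the post itself names, with an example title standing in for whatever work you would want to test:

# Sketch only: ask a public citation index (Crossref) how often a work
# is actually cited in the indexed literature. Assumes network access
# and the third-party "requests" library; the title queried below is
# merely an illustrative placeholder.
import requests

def citation_count(title: str) -> int:
    """Return Crossref's citation count for the best match on a title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # "is-referenced-by-count" is Crossref's tally of citing works.
    return items[0].get("is-referenced-by-count", 0) if items else 0

if __name__ == "__main__":
    print(citation_count("Engines of Creation"))

Comparing such counts for superlative manifestos against the workhorse papers of any established field is exactly the comparison the paragraph above dares its interlocutor to make.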
You keep trying to divert these discussions of the conceptual difficulties and figurative entailments of your futurological discourse into superficially "technical" discussions about superficially predictive "differences of opinion" over trumped-up technodevelopmental timelines -- but you have not earned the right to be treated as somebody having a technical or predictive discussion in these matters.
No developmental "timeline" will spit out the ponies you are looking for at the end of the rainbow. This isn't a question of "predictions."
Pining for an escape from error-proneness, weakness, or mortality isn't the same thing as debating how best to land a rocket on the Moon or cure polio.
I am a teacher of aesthetic and political theory in the academy, precisely the sort of person many superlative futurologists like to deride as a muzzy, effete, fashionably-nonsensical relativist. But I am for all that a champion of consensus science, a champion of more public funding for research and more public science education, and it is as a proper champion of consensus science that I tell you consensus science is no ally of Robot Cultism, no ally of yours.
The proper question provoked by the phenomena of superlative futurology is this: just what renders the aspirations to superintelligence, superlongevity, and superabundance so desirable and so plausible to those who are personally invested in the superlative futurological sub(cult)ures organized by shared desire for and faith in these transcendentalizing aspirations?
Turning to this question, one no longer participates in any of the preferred topics that preoccupy the Robot Cultists themselves, who like to treat pseudo-science and superficially scientific forms as shared public rituals, the indulgence in which substantiates in the present the reality effect of their wish-fulfillment fantasies about "The Future," so-called. No, when we treat superlativity as what it is -- a narrative genre and a faithful sub(cult)ure -- then quickly and quite properly the discussion turns terminological, discursive, literary, psychological, ethnographic.
It is no wonder that so many would-be superlative futurologists, pseudo-scientists that they are, so disdain the thinking of humanities scholarship: while it is indeed non-scientific, it is not ultimately anti-scientific as their own tends to be, and it is precisely the thinking most relevant and most capable of exposing them for what they are.
7 comments:
> . . .and don't even get me started on that [J. G. Ballard novel]
> Yudkowsky. . .
Re: Volitional Morality and Action Judgement
From: Eliezer Yudkowsky
Date: Wed Jun 02 2004 - 11:22:53 MDT
http://www.sl4.org/archive/0406/8943.html
------------
In 2003 I tried to be Belldandy, sweetness and light. It didn't work. It
was not until that point, when I grew mature enough to for the first time
aspire to something that didn't easily fit my personality, that I
understood just how hard it is to cut against the grain of one's character.
Striving toward total rationality and total altruism comes easily to me.
Sweetness and light doesn't; I tried and failed. Now I have much more
sympathy for people whose personalities don't happen to easily fit
rationality or altruism; it's *hard* to cut against your own grain.
But y'know, this shiny new model of Friendly AI *does not require* that I
be Belldandy, or even that I *approximate* Belldandy. I can't be the
person I once aspired to be, not without hardware support. So while I am
human, I will try to enjoy it, instead of torturing myself. And ya know
what? I'm arrogant. I'll try not to be an arrogant bastard, but I'm
definitely arrogant. I'm incredibly brilliant and yes, I'm proud of it,
and what's more, I enjoy showing off and bragging about it. I don't know
if that's who I aspire to be, but it's surely who I am. I don't demand
that everyone acknowledge my incredible brilliance, but I'm not going to
cut against the grain of my nature, either. The next time someone
incredulously asks, "You think you're so smart, huh?" I'm going to answer,
"*Hell* yes, and I am pursuing a task appropriate to my talents." If
anyone thinks that a Friendly AI can be created by a moderately bright
researcher, they have rocks in their head. This is a job for what I can
only call Eliezer-class intelligence. I will try not to be such an ass as
Newton, try hard not to actually *hurt* anyone, but let's face it, I am not
one of the modest geniuses. The best I can do is recognize this and move on.
------------
http://www.sl4.org/archive/0406/8952.html
Re: the practical implications of arrogance
From: Eliezer Yudkowsky
Date: Wed Jun 02 2004 - 12:46:25 MDT
------------
[I]t's time for people to get used to the fact that I AM NOT
PERFECT. One hell of a badass rationalist, yes, but not perfect in other
ways. I'm just here to handle the mad science part of the job. It may
even be that I'm not very nice. Altruistic towards humans in general, yes,
but with a strong tendency to think that any given human would be of
greater worth to the human species if they were hung off a balloon as
ballast. So frickin' what? I'm not SIAI's PR guy, and under the new
edition of FAI theory, I don't have to be perfect. Everyone get used to
the fact that I'm not perfect, including people who have long thought I am
not perfect, and feel a strong need to inform me of this fact. I'm not
perfect, and it doesn't matter, because there are other people in SIAI than
me, and I'm *not* a guru. Just a mad scientist working happily away in the
basement, who can say what he likes. Believe it, and it will be true.
------------
http://www.sl4.org/archive/0406/8954.html
RE: the practical implications of arrogance
From: Ben Goertzel
Date: Wed Jun 02 2004 - 13:06:55 MDT
------------
So ... this tendency of yours makes me feel like it would be a fairly
bad idea to trust you with my own future, or the future of the human
race in general.
I do not trust this sort of altruism, which is coupled with so much
unpleasantness toward individual humans.
I would place more trust in someone who acted more compassionately and
reasonably to other humans, even if they made fewer (or no) cosmic
proclamations regarding their beautiful altruism.
Being an arrogant jerk and a excellent scientist or philosopher is not
contradictory. Being an arrogant jerk when you're trying to raise funds
to help you save the world, is not intelligent, because you're asking
people to trust you for more than just science and philosophy, you're
asking them to trust you with their lives.
------------
> It may even be that I'm not very nice. Altruistic towards humans
> in general, yes, but with a strong tendency to think that any
> given human would be of greater worth to the human species if
> they were hung off a balloon as ballast. So frickin' what?
> I'm not SIAI's PR guy. . .
No, that's Michael Anissimov's job (whether or not he claims it
as his current job **title** is irrelevant).
**He** gets to be "Belldandy".
Ain't that just dandy?
I feel pretty good about the fact that I work professionally in AI, have for a decade, and had to look up this Yudkowsky bloke after seeing his name here several times. He can be the mortal god to the fanboys if he wants, because his name is non-existent in my field.
Also: I was teaching Aquinas on the Cosmological Argument on Monday. One of the famous "5 Ways" arguments for the existence of god is, straightforwardly, "Superlativity." We know 'good' and 'not as good,' therefore, there must be something that is 'the most good' causing those things, and this is god. My Intro Philosophy students who know it's the last week of classes and have their minds focused firmly on sitting outside and never doing philosophy again were ALL able to point out how absurd this argument is, and *why*. Yet somehow, the self-described scientific geniuses can't see the conceptual misunderstandings here and how their robotgod is the same as Aquinas' heavenlygod?
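The *why* can be put in one line. As a minimal formalization (mine, not Aquinas's own wording), let $g(x)$ stand for the degree of goodness of $x$; the argument from degrees needs the inference

$$ \exists x\,\exists y\,\big(g(x) > g(y)\big) \;\Rightarrow\; \exists m\,\forall x\,\big(g(m) \ge g(x)\big) $$

and that inference is invalid: let $g$ take values in the natural numbers, so that for every thing there is something better and nothing is best. And even granting a maximum, nothing follows about its *causing* the goodness of everything else -- which is the same further leap the robotgod arguments take.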
'I hate to break it to you but these figures...are quite simply not taken seriously outside the small circle of superlative futurology itself'.
And I hate to break it to you, but today you are surrounded by technology that was once taken seriously only by a small circle of superlative futurologists. The consensus opinion of mainstream science was 'it cannot be done'. Does that guarantee the consensus is wrong and the mavericks are right? Of course not. But it should make one think... 'maybe'.
I would also point out that people like yourself are mistaken about who qualifies as a good critic of those futurologists.
Think of the present as represented by a vast block of granite. Think of our possible futures as the shapes this block could take on.
Some people are right up against its surface, actively chipping away at it. These are the people who really matter, not the thinkers who spend their days musing about how the future might be. They matter because it is they who are doing the hands-on, practical work. Unfortunately, most will toil away in obscurity without hope of the rewards, book deals, or media attention Kurzweil has received. But they carry on regardless, which is something we should all be very grateful for.
As these people are right up against the surface of the block, their position provides each one with great detail regarding their local area. They are, in other words, experts specialising in a particular branch of science. They are also keenly aware of all the cracks, fissures and other weaknesses that threaten to undermine their work. In other words, they know about all the problems currently inherent in their field of expertise. Mainly for that reason, these people tend to be cautious when assessing prospects for current work.
However, due to their position they have too narrow a view to appreciate work going on elsewhere that might fill in those cracks and fissures. This is NOT because they are narrow-minded. It is simply because they are too busy and too focused to be more than intermittently conscious of developments happening in other fields outside of their area of expertise.
Then there are people like Kurzweil, who stand further away. This position offers them a slightly wider view than that afforded to the people right up against its surface, and so they can see a bit more clearly the shape of things to come (they do not, however, stand far enough back to see the whole thing; only God would have that kind of viewpoint). They are able to appreciate how work currently done in separate fields will converge, and how that might help resolve problems the specialists bewail as intractable.
These people are nowhere near as important as those others chipping away at the block, because they do not do much practical work. What they mostly do is think about how the future is going to shape up. Because they stand further back, they do not have a detailed view of a local area -- their knowledge is not as fine-tuned as that of a specialist.
Now, one might think a specialist is the person whose opinion you should seek when wondering 'is this person's vision of things-to-come at all plausible?'. But here's why that assumption is likely to be wrong. Kurzweil's scenarios result from the way multiple scientific fields and technologies converge on each other. On one level, it requires an understanding of how genetics and robotics and information technology and nanotechnology influence each other. Going deeper, each one of those fields emerges from multiple areas of knowledge too. For instance, material sciences, mechanical engineering, physics, life sciences, chemistry, biology, electrical engineering, and computer science are all relevant in one way or another to 'nanotechnology'.
Ironically, the people critics of Kurzweil assume are best placed to debunk his arguments are actually the WORST. Inevitably, critics turn to specialists (in biology, or chemistry, or any other single field of science/technology you care to name) and ask for their 'expert' opinion on his ideas. Such a person then argues purely from their own narrow field of view and correctly concludes that what they know, what CAN be known only from their particular branch of expertise, is profoundly unlikely to bring about the changes Ray foresees.
Specialists, in short, are exactly the wrong sort of people to critique an argument that takes such an enormously generalist, convergent, interconnected view of technological evolution. They are simply too busy cracking some narrowly-defined problem in one particular field to have anything other than the vaguest idea of how their research might be part of an immense web of cause and effect.
Robin Zebrowski wrote:
> I feel pretty good about the fact that I work professionally in AI,
> have for a decade, and had to look up this Yudkowsky bloke after
> seeing his name here several times. He can be the mortal god to
> the fanboys if he wants, because his name is non-existent in my field.
He works on an altogether higher plane than any mere professional
could imagine. Yes, the Way of Rationality is difficult to follow. . .
http://lists.extropy.org/pipermail/extropy-chat/2004-April/005888.html
"The overall rationality of academia is simply not good enough to handle
some necessary problems, as the case of Drexler illustrates. Individual
humans routinely do better than the academic consensus. . . .
Yes, the Way of rationality is difficult to follow. As illustrated by the
difficulty that academia encounters in following [it]. The social process of
science has too many known flaws for me to accept it as my upper bound.
Academia is simply not that impressive, and is routinely beaten by
individual scientists who learn to examine the evidence supporting the
consensus, apply simple filters to distinguish conclusive experimental
support from herd behavior. Robyn Dawes is among the scientists who have
helped document the pervasiveness of plausible-sounding consensuses that
directly contradict the available experimental evidence. Richard Feynman
correctly dismissed psychoanalysis, despite the consensus, because he
looked and lo, there was no supporting evidence whatsoever. Feynman tells
of how embarassing lessons taught him to do this on individual issues of
physics as well, look up the original experiments and make sure the
consensus was well-supported.
Given the lessons of history, you should sit up and pay attention if Chris
Phoenix says that distinguished but elderly scientists are making blanket
pronunciations of impossibility *without doing any math*, and without
paying any attention to the math, in a case where math has been done. If
you advocate a blanket acceptance of consensus so blind that I cannot even
apply this simple filter - I'm sorry, I just can't see it. It seems I
must accept the sky is green, if Richard Smalley says so.
I can do better than that, and so can you."
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
Extropia, to be a convincing generalist it isn't enough to have some superficial impression of large areas of science that you've picked up from popular science books and science reports; you do have to demonstrate to the specialists you interact with that you understand, perhaps not the technical details of their current work, but the basics of their field at the level, say, of a graduate in that area. So, as a physicist, I'm not going to be impressed by people who, say, don't understand the Carnot limit on heat engine efficiency, or how you do a normal mode analysis of vibrations in a solid, or who don't seem able actually to read and understand the papers they cite.
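To take the first of those as a concrete benchmark -- this is just the textbook statement, spelled out for readers keeping score -- the Carnot limit says that no heat engine operating between a hot reservoir at absolute temperature $T_h$ and a cold reservoir at $T_c$ can exceed the efficiency

$$ \eta_{\max} \;=\; 1 - \frac{T_c}{T_h} $$

so an engine running between 600 K and 300 K converts at most half the heat it draws into work. A would-be generalist who cannot reproduce even that level of reasoning will not persuade the specialists that he sees how their fields converge.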
Of course, the place your line of argument is leading is to say that technical knowledge isn't important in deciding the plausibility of the claims of the transhumanists. In which case, it's not a scientist that we need to examine the claims, but someone trained in critically dissecting the hidden assumptions underlying these arguments. Over to you again, Dale.
I learned a new word the other day. It is 'Agnotology', meaning 'the study of deliberately created ignorance -- such as falsehoods about evolution that are spread by creationists'.
I had not read your criticism of Kurzweil's book until today, but now that I have, it makes me wonder whether he is not a prime candidate for the study of agnotology.