Wednesday, October 03, 2007

The Superlative Summary

Updated, go: here.

20 comments:

  1. Anonymous 11:37 PM

    Dale,

    This is a helpful map, and combined with the recent discussion it has helped me get a grip on your views, which I now take to include the following:

    1. Some common rhetoric of Superlative Discourse literally refers to things that are radically unlikely to ever be achieved in our apparent universe, e.g. 'immortality' or a 'post-scarcity economy.' [I am in full agreement here.]
    2. While medicine may ultimately be capable of preventing the progressive increase in mortality rates after adolescence that we know as aging, framing this as a goal is counterproductive because it stirs up passions opposed to long lifespans and invites associations with quack anti-aging products. [This is an empirical question in my view, and the history of activists for specific diseases, such as cancer and HIV, reshaping research and regulatory priorities suggests to me that it may be most effective to discuss long-term life extension. This is especially so for triangulation purposes, enabling the Longevity Dividend to occupy the 'moderate' space. However, I don't hold this view with high confidence.]
    3. Drexlerian nanotech enthusiasts underweight the importance of political considerations in determining the economic and security implications of advanced manufacturing technology. [I agree. However, it seems that some proportion of this critique is aimed not at a failure to consider politics, but at a failure to share particular political preferences, e.g. the valorization of democracy (defined as letting people have a say on issues that affect them, although this definition seems quite problematic) as an intrinsic rather than instrumental good. With respect to that portion, I agree only in the (many) places where our values overlap.]
    4. Many self-styled transhumanists underweight the evidentiary value of mainstream skepticism about near-term development of life extension, AI, etc. Some, like Ben Goertzel and Peter Voss, frequently make extreme claims about the near-term practicality of world-shaking technologies and have been visibly miscalibrated for years. On a related point, transhumanists tend to underestimate the degree to which the development of major technologies such as healthspan extension and molecular manufacturing will occur through ordinary research institutions (large corporations, universities, governments). [This seems correct, although I think that we need to have very wide error bars, given the powerful general tendency to have unduly narrow ones, and should weight our efforts appropriately.]
    5. You are irritated with the libertarian and Objectivist dogmatism and tribalism that pervade much of the transhumanist community because of the historical role of the Extropians. [I agree that libertarian/Randian ideologues are frustrating and prevalent in transhumanist circles. However, insofar as your condemnation of Superlative discourse on AI, for example, is an attack on libertarianism or extropianism as ideologies, I am relatively uninterested (since I do not share those ideologies).]
    6. You defer more considerations to future generations (or our future selves) and place much less weight on the argument that reducing existential risk should be our overwhelming ethical priority, while placing more value on solving immediate problems.
    www.nickbostrom.com/astronomical/waste.html
    [Nick's argument does seem convincing to me, and it does lead me to place less weight on near-term well-being than on the further future. Insofar as this is the cause of disagreement here, I am not troubled by it.]
    7. You place less credence in the feasibility of superintelligent AI within the next 25, 50, and 1000 years than I do, although I'm not sure how much less, or how your model incorporates potential improvements in computing hardware, brain scanning, and biological intelligence enhancement for programmers. [The estimates I use in allocating my effort are more conservative than those of many or most transhumanists, and have been adjusted for the history of AI and for the mixed opinion among computer scientists on the distance to human-level AI, so I am not further adjusting them in any significant way in response to your belief without new particular arguments.]
    8. Singularitarians are religious cultists, with all the trappings of eschatology, super-human beings, shedding the flesh, etc. They are psychologically unusual or distorted. [I agree that there are similarities in some basic motivations and evocative comparisons to religion can be made, but the motives behind the use of homeopathic medicine and antibiotics are similar. Similar arguments can be applied to animal welfare movements, environmentalism, socialism, etc. Analysis of substance seems unavoidable, and that substance is vastly, vastly more plausible and better-supported than religious claims.]
    9. Discussion of possible advanced AI is a projection/transcendentalization/warped outgrowth of concerns about 'networked malware.' [This one just totally baffles me. James Hughes has written and spoken about evolving computer viruses on the Internet, and expecting advanced AI to come about through such a process, which seems to be tremendously less plausible than building an AI intentionally (including through the use of evolutionary algorithms or brain emulation). Alternatively, it seems absurd to think that fears about computer viruses and about arbitrary utility-maximizing intelligences are related, even psychologically (fears about computer viruses are not fears about agents).]
    10. Talk of a 'Singularity' or intelligence explosion is transcendentalizing. [A possible region of rapid positive feedback in the self-improvement trajectory just doesn't seem transcendental to me.]

    From my current understanding of the arguments, the ones addressing areas where we do not currently agree are insufficient to suggest a major change in activities for me, although if I were investing energies on healthspan extension I would take your opinion on de Grey's rhetoric as meaningful evidence.

  2. "Utilitarian" wrote (in comments directed to Dale):

    > [I]nsofar as your condemnation of Superlative discourse
    > on AI, for example, is an attack on libertarianism or
    > extropianism as ideologies, I am relatively uninterested
    > (since I do not share those ideologies).

    Do not underestimate the (embarrassing) degree to which Objectivism
    (and even Dianetics, via such SF authors as A. E. Van Vogt) --
    and the science fiction "megatext" (in which so many extropians and
    libertarians are steeped) -- permeates and distorts the (naive)
    "Superlative" discourse on AI.

    http://www.theatlasphere.com/columns/050601-zader-peter-voss-interview.php
    -------------------
    TA: How has Ayn Rand's philosophy influenced your work?

    Peter Voss: I came across Rand relatively late in life,
    about 12 years ago.

    What a wonderful journey of discovery — while at the
    same time experiencing a feeling of "coming home." Overall,
    her philosophy helped me clarify my personal values
    and goals, and to crystallize my business ethics,
    while Objectivist epistemology in particular inspired
    crucial aspects of my theory of intelligence.

    Rand's explanation of concepts and context provided
    valuable insights, even though her views on consciousness
    really contradict the possibility of human-level AI.

    TA: Which views are you referring to?

    Voss: Primarily, the view that volitional choices do
    not have antecedent causes. This position implies that
    human-level rationality and intelligence are incompatible
    with the deterministic nature of machines. A few years
    ago I devoted several months to developing and writing
    up an approach that resolves this apparent dichotomy.
    -------------------


    "The secret of Sophotech thinking-speed was that they
    could apprehend an entire body of complex thought,
    backward and forward, at once. The cost of that speed
    was that if there were an error or ambiguity anywhere
    in that body of thought, anywhere from the most definite
    particular to the most abstract general concept, the
    whole body of thought was stopped, and no conclusions
    reached. . .

    Sophotechs cannot form self-contradictory concepts, nor
    can they tolerate the smallest conceptual flaw anywhere
    in their system. Since they are entirely self-aware
    they are also entirely self-correcting. . .

    They regard their self-concept with the same objective
    rigor as all other concepts. The moment we conclude
    that our self-concept is irrational, it cannot proceed. . .

    Machine intelligences had no survival instinct to override
    their judgment, no ability to formulate rationalizations,
    or to concoct other mental tricks to obscure the true
    causes and conclusions of their cognition from themselves. . .

    Sophotech existence (it could be called life only by
    analogy) was a continuous, deliberate, willful, and
    rational effort. . .

    For an unintelligent mind, a childish mind. . . their beliefs
    in one field, or on one topic, could change without
    affecting other beliefs. But for a mind of high intelligence,
    a mind able to integrate vast knowledge into a single
    unified system of thought, Phaethon did not see how
    one part could be affected without affecting the whole."

    -- John C. Wright,
    _The Golden Transcendence_, pp. 140 - 146


    "Utilitarian" wrote:

    > You defer more considerations to future generations (or our future selves)
    > and place much less weight on the argument that reducing existential
    > risk should be our overwhelming ethical priority, while placing more
    > value on solving immediate problems.
    > www.nickbostrom.com/astronomical/waste.html
    > [Nick's argument does seem convincing to me, and it does lead me to
    > place less weight on near-term well-being than on the further future.
    > Insofar as this is the cause of disagreement here, I am not troubled by it.]

    Others are troubled by the invitation to fanaticism implicit in
    talk of "existential risk" based on extremely, extremely dubious
    technological projections. I can only quote Bertrand Russell at
    you here:

    WOODROW WYATT: If you're not enthusiastic, you don't get
    things done, but if you're over-enthusiastic, you
    run the danger of becoming fanatical. Well, now, how
    do you make certain that what you're doing is all
    right, and that you haven't become, uh, in a fanatical
    state?

    BERTRAND RUSSELL: Certainty is not ascertainable. But what
    you can do, I think, is this: you can make it a
    principle that you will only act upon what you think
    is **probably** true... if it would be utterly disastrous
    if you were mistaken, then it is better to withhold
    action. I should apply that, for instance, to burning
    people at the stake. I think, uh, if the received
    theology of the Ages of Persecution had been **completely**
    true, it would've been a good act to burn heretics
    at the stake. But if there's the slightest little
    chance that it's not true, then you're doing a bad
    thing. And so, I think that's the sort of principle
    on which you've got to go.

    --------------------------------

    WYATT How would you summarize the value of philosophy in the present world and
    in the years to come?

    RUSSELL Well, I think it's very important in the present world. First, because, as
    I say, it keeps you realizing that there are very big and very important questions
    that science, at any rate at present, can't deal with and that a scientific attitude
    by itself is not adequate. And the second thing it does is to make people a little
    more modest intellectually and aware that a great many things which have been thought
    certain turned out to be untrue, and that there's no short cut to knowledge.
    And that the understanding of the world, which to my mind is the underlying purpose
    that every philosopher should have, is a very long and difficult business about
    which we ought not to be dogmatic.

    (1959 interview, reprinted in _Bertrand Russell Speaks
    His Mind_, 1960)

    "Utilitarian" wrote:

    > Singularitarians are religious cultists, with all the trappings of eschatology,
    > super-human beings, shedding the flesh, etc. They are psychologically unusual or
    > distorted. [I agree that there are similarities in some basic motivations and evocative
    > comparisons to religion can be made, but the motives behind [many other movements]
    > are similar. . . Analysis of substance seems unavoidable, and that substance is
    > vastly, vastly more plausible and better-supported than religious claims.]

    The surface plausibility of the **content** of the Singularitarians' arguments
    is not sufficient. OK, so it's a cult more in tune with the times (at least
    for the digerati) than older cults.

    But truly impartial analysis of the "substance", from within the movement, is
    not happening, because of the cultism.

    "We define 'cult' as a group where the leader is unchallengeable and considered
    infallible. the term 'guru' is used generically for any such leader."

    _The Guru Papers: Masks of Authoritarian Power_, by Joel Kramer and Diana Alstad,
    p. 83.

  3. Anonymous 7:03 PM

    "Do not underestimate the (embarrassing) degree to which Objectivism [etc] permeates and distorts the (naive)
    "Superlative" discourse on AI."
    I don't think that I am. I was well aware of Voss' Objectivist silliness, of the more extraordinary fuzzy thinking from the extropians list, of Sasha Chislenko (http://www.goertzel.org/benzine/extropians.htm), of Eliezer Yudkowsky's youthful mild libertarianism, of Robin Hanson's libertarian tendencies, etc.


    "Others are troubled by the invitation to fanaticism"
    It seems to me that even if existential risks were somehow known to be zero, moral behavior would still appear fanatical to almost everyone.
    http://www.utilitarian.net/singer/by/1972----.htm

    "based on extremely, extremely dubious
    technological projections"
    It may be 'interminably calculating the Robot God Odds' but I nevertheless would like to hear what "extremely, extremely" corresponds to numerically, and which projections precisely? I want to distinguish between disagreements about technology and values.

    "you can make it a
    principle that you will only act upon what you think
    is **probably** true... if it would be utterly disastrous
    if you were mistaken, then it is better to withhold
    action. I should apply that, for instance, to burning
    people at the stake."
    Well, I don't buy the omission/commission distinction, so failing to put all my charitable resources into proven treatments like oral rehydration therapy, and thereby failing to save many lives, can stand in for burning people at the stake. But if I accepted this principle it would almost always preclude me from contributing to political action, since the probability of my actions changing policy would almost always be closer to 0.00001 than to 0.51. Likewise for contributing to malaria or cancer research.
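
    To make the contrast concrete, here is a minimal sketch, in Python, of the two decision rules as I understand them; every probability and payoff below is invented purely for illustration, not an estimate I actually endorse:

    -------------------
    # Toy comparison of Russell's "act only on what is probably true" rule
    # with plain expected-value reasoning. All numbers are made up.

    def russell_rule(p_success, threshold=0.5):
        # Withhold action unless the outcome you are counting on is at least probable.
        return p_success >= threshold

    def expected_value(p_success, benefit, cost):
        # Act whenever the probability-weighted benefit exceeds the cost.
        return p_success * benefit - cost

    # (probability my contribution matters, benefit if it does, cost to me),
    # all in arbitrary "units of good".
    candidates = {
        "oral rehydration therapy": (0.9, 1_000, 500),
        "malaria research donation": (0.001, 5_000_000, 500),
        "political advocacy": (0.00001, 500_000_000, 500),
    }

    for name, (p, benefit, cost) in candidates.items():
        decision = "act" if russell_rule(p) else "withhold"
        ev = expected_value(p, benefit, cost)
        print(f"{name:26s} Russell rule: {decision:9s} expected value: {ev:+,.1f}")
    -------------------

    On the Russell rule only the first option survives; on the expected-value rule the low-probability options can still come out ahead, which is exactly why adopting his principle would rule out most political and research contributions.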

    "But truly impartial analysis of the "substance", from within the movement, is
    not happening, because of the cultism. We define 'cult' as a group where the leader is unchallengeable and considered
    infallible. the term 'guru' is used generically for any such leader."

    We should define the 'movement' in question, and in some way other than by its technological projections. If we include Ray Kurzweil, Nick Bostrom, Eliezer Yudkowsky, Anders Sandberg, Martine Rothblatt, Ben Goertzel, Michael Wilson, Marvin Minsky, Hans Moravec, etc., I would find it very difficult to identify a single 'guru,' and these people substantially critique one another's ideas. Kurzweil, Goertzel, and Yudkowsky have more excessively enthusiastic and uncritical fans than the rest, but for all three the most capable, less-famous 'Singularitarians' do not seem to attribute infallibility to them.

    In my view, while Kurzweil, Bostrom, Yudkowsky, et al are very intelligent people, the key area for 'Singularitarian' activism now is getting people who are still smarter than them to examine these problems carefully. For instance the Singularity Institute's stated mission will only have a substantial chance of success if it winds up serving as a nucleus for a significant chunk of the world's best relevant brainpower (leaving Yudkowsky's subsequent personal contributions relatively unimportant). That's not utterly implausible if individuals like Peter Thiel choose to make a major effort, and SIAI has demonstrated the ability to draw such attention while promoting discussion and coming up with some interesting insights.

  4. Anonymous 7:40 AM

    I should also say that I appreciate, as a useful contribution, the rhetorical critique of the terms 'transhumanist' and 'Singularitarian' as distracting, laden with baggage (in the first case), excessively polysyllabic, and reminiscent of identity politics. (Although as descriptive terms they do seem to denote at least as well as 'environmentalist,' 'technoprogressive,' 'feminist,' etc.) For these reasons I don't use the terms in my non-pseudonymous life, save for sociological discussion like this ongoing conversation.

  5. "Utilitarian" wrote:

    > [E]ven if existential risks were somehow known to be zero,
    > moral behavior would still appear fanatical to almost everyone.

    I would hope there's a significant distinction to be made.

    Again, I appeal to Bertrand Russell:

    --------------------
    WOODROW WYATT: Lord Russell, what is your definition of
    fanaticism?

    BERTRAND RUSSELL: I should say that, uh, fanaticism
    consists in thinking some one matter so overwhelmingly
    important that it outweighs everything else at all.
    To take an example: I suppose all decent people dislike
    cruelty to dogs. But if you thought that cruelty to
    dogs was so atrocious that no other cruelty should be
    objected to in comparison, then you would be a fanatic.

    . . .

    WYATT: But, why do you think people **do** get seized in
    large numbers with fanaticism?

    RUSSELL: Well, it's partly that it gives you a cosy
    feeling of cooperation. A fanatical group all together
    have a comfortable feeling that they're all friends
    of each other. Uh, they're all very much excited about
    the same thing. You can see it in any, uh, political
    party -- there are always a fringe of fanatics in any
    political party -- and they feel awfully cosy with
    each other. And when that is spread about, and is
    combined with a propensity to hate some other group,
    you get fanaticism well-developed.

    . . .

    WYATT: But might not fanaticism, at times, provide
    a kind of mainspring for good actions?

    RUSSELL: It provides a mainspring for actions, all
    right, but I can't think of any instance in history
    where it's provided a mainspring for good actions.
    Always, I think, for bad ones. Because it is partial,
    because it almost inevitably, um, involves some
    kind of hatred. You hate the people who don't share
    your fanaticism. It's, uh, almost inevitable.
    --------------------

    > It may be 'interminably calculating the Robot God Odds'
    > but I nevertheless would like to hear what "extremely,
    > extremely" corresponds to numerically. . .

    What would be the point of a number here, except to provide
    a false sense of precision and certainty?

    > . . .and which projections precisely?

    Precisely?! All right, here's something fairly precise. About
    three years ago, Michael Wilson (whom you mention below)
    had materialized as an insider in SIAI circles (a position which
    he no longer seems to occupy). At any rate, he was posting in
    2004 rather frequently on the SL4 mailing list. At one point,
    he made a post (I'm not going to look it up and quote it
    word-for-word here, you can do the work yourself if you need
    that much precision) in which he castigated himself (and
    this didn't seem tongue-in-cheek to me in the context, though
    in most contexts such claims would clearly be so) for
    having "almost destroyed the world last Christmas" as a
    result of his own attempts to "code an AI", but now that he
    had seen the light (as a result of SIAI's propaganda) he
    would certainly be more cautious in the future. Now, no
    one on that list seemed to find his remarks particularly
    outrageous, which suggests that he was more-or-less in tune
    with the Zeitgeist there.

    The implicit "projection" I infer from this episode is that
    the SIAI groupies believe that the world is close enough to
    being able to "Code an AI" that a single enthusiastic amateur
    might "accidentally" cook one up during his Christmas vacation.

    I find this "extremely, extremely" unlikely. If you want a
    number, call the probability a fat zero.

    > In my view, while Kurzweil, Bostrom, Yudkowsky, et al are very
    > intelligent people, the key area for 'Singularitarian' activism
    > now is getting people who are still smarter than them to examine
    > these problems carefully.

    You might as well be calling for the "people who are still smarter"
    than Tom Cruise to be "carefully examining" the Scientologists'
    case against psychiatry. You'll recall that Ayn Rand was piqued
    that the mainstream philosophical community never deigned to
    take her ideas seriously enough to discuss them. I suspect
    that the really smart people simply have better things to do.

    > [T]he Singularity Institute's stated mission will only have a
    > substantial chance of success if it winds up serving as a nucleus
    > for a significant chunk of the world's best relevant brainpower. . .

    Do you realize how much like cultspeak this sounds?

  6. Anonymous 10:41 AM

    "The implicit "projection" I infer from this episode is that
    the SIAI groupies believe that the world is close enough to
    being able to "Code an AI" that a single enthusiastic amateur
    might "accidentally" cook one up during his Christmas vacation.
    I find this "extremely, extremely" unlikely. If you want a
    number, call the probability a fat zero."
    Approaching zero, yes of course I agree with respect to that claim. I attach a higher credence to the idea that a serious non-profit effort could be created, and to the idea that published research on the implications of AI can help improve the odds that future corporate or military projects will take all the safety issues into account.

    "> [T]he Singularity Institute's stated mission will only have a
    > substantial chance of success if it winds up serving as a nucleus
    > for a significant chunk of the world's best relevant brainpower. . .

    Do you realize how much like cultspeak this sounds?"

    I was expressing my disagreement with the idea that 'coding an AI' would be feasible for any project without extraordinary human resources, given the repeated failures of very talented computer scientists to do so. I was also pointing out that I don't think of any of these people as indispensable (indeed I see serious flaws), but I do see a potentially useful starting point for building a project outside of military or corporate environments. If I were to substantially downgrade my estimate of the competence of these individuals, I would be ready to look elsewhere, e.g. to building up utilitarian organizations or trying to endow chairs in machine ethics at university comp sci departments. Doubtless I could have phrased things to better convey this, but so it goes.


    "You might as well be calling for the "people who are still smarter"
    than Tom Cruise to be "carefully examining" the Scientologists'
    case against psychiatry."
    Unlike Cruise, Kurzweil has demonstrated both a high level of intelligence and a strong grasp of technology. While his predictions have included systematic errors on the speed of consumer adoption of technologies, he has done quite well in predicting a variety of technological developments (including according to Bill Gates), not to mention inventing many innovative technologies. Bostrom has published numerous articles in excellent mainstream journals and venues, from Nature to Ethics to Oxford University Press. Yudkowsky is not conventionally credentialed, but was a prodigy and clearly has very high fluid intelligence.
    The charge against these people has to be bias rather than lack of ability.

    "You'll recall that Ayn Rand was piqued
    that the mainstream philosophical community never deigned to
    take her ideas seriously enough to discuss them."
    This is an important point (although there have been some philosophical thrashings of Rand written up over the years). One thing I would like to see (and would pay for) is a professional elicitation of opinion from the AI community, like the expert elicitations on global warming conducted for the IPCC.


    "I suspect
    that the really smart people simply have better things to do."
    Yes, e.g. string theory, winning a Fields medal, becoming a billionaire. These are better things to do for them personally, but not necessarily for society.

    There are many absolutely brilliant philosophers who work on ethics, but almost none of them even try to change society in accordance with their conclusions. Peter Singer is not as talented a philosopher as (theoretically consequentialist) Derek Parfit, but he had the motivation to take his ideas to the public and spark the animal welfare movement, as well as to encourage many people to care about global poverty.

    Fewer of the best medical researchers would be working on HIV if not for the plentiful funding available for it, and that funding is the result of a movement of activists and donors. Many of the activists join because the cause has become popular and a source of social status, but the initial core of activists had special motivations to get them going (loss of loved ones to the disease, being HIV-positive themselves, membership in a defined community disproportionately affected, etc.).

    The slow development of asteroid watch efforts also looms large in my thinking here, as substantial investments in investigation were delayed until a sub-optimally high level of certainty on the danger had been reached.

  7. Anonymous 10:44 AM

    Dale, BTW, do you have any objections to the length of the voluminous comments with which I have been cluttering the MundiMoot?

  8. Dale, BTW, do you have any objections to the length of the voluminous comments with which I have been cluttering the MundiMoot?

    Conversation isn't clutter! I'm following these exchanges with interest -- and whenever they expand to great length before I have a chance to weigh in myself (mid-week I tend to get knee-deep in teaching and have to play catch-up during the weekends) I'm just likely to post my own comments onto the blog proper rather than in the moot itself, and start up a new thread. By all means keep hashing out the issues here.

  9. Unlike Cruise, Kurzweil has demonstrated both a high level of intelligence and a strong grasp of technology. While his predictions have included systematic errors on the speed of consumer adoption of technologies, he has done quite well in predicting a variety of technological developments (including according to Bill Gates),

    ?

    not to mention inventing many innovative technologies.

    I'll grant Kurzweil has a few nifty inventions to his credit, but his pop futurological writing is, in my view, consistently unserious and utterly hyperbolic. He is much more interesting therefore as a technocultural symptom than as any kind of original technocultural thinker in his own right in my estimation.

    Bostrom has published numerous articles in excellent mainstream journals and venues, from Nature to Ethics to Oxford University Press.

    I think Bostrom is an original and often quite interesting thinker. I take him seriously even though I disagree with a large number of his formulations quite a bit. Before I got my Ph.D. in Rhetoric I was trained in analytic philosophy and I still find a real measure of pleasure in the sort of argumentation Bostrom engages in, the more so because he directs his thinking into provocative places. I also happen to like him as a person. The worst thing about Bostrom in my view is the cover of completely undeserved respectability he provides for robot cultists like Yudkowsky and that crew. Identity politics can often be counted on to make people's standards fall out of their heads -- and I attribute Bostrom's coziness with Singularitarians to his highly unfortunate affirmation of the sub(cult)ural / ideological-movement politics of "transhumanism," so called. I'm afraid his reputation is very likely to suffer enormously and unnecessarily from those associations over the longer term in my view. It's a pity.

    Yudkowsky is not conventionally credentialed, but was a prodigy and clearly has very high fluid intelligence.

    Oh dear. I think Yudkowsky is nothing short of a dot-eyed freakshow. I really do. Just because he doesn't go about in a foil suit (so far as I know), it shouldn't really be that difficult to grasp that Yudkowsky is the would-be High Priest and guru of a would-be Robot Cult, as he sanctimoniously intones about the Way and peddles his vacuities to wayward boys eager, as Shaw put it, for a thick set of lips to kiss them with and a thick set of boots to kick them with.

    The charge against these people has to be bias rather than lack of ability.

    I honestly don't think any of the so-called soopergeniuses in the Superlative Futurological Congress exhibit much more than quotidian intelligence. This includes the few among them I happen to like enormously. Surely there is nothing wrong in this. All these weird attributions of prodigious talent and ability one hears inside the circled-wagon circle-jerk of transhumanish subcultures just sound to me like pampered superannuated boy-infants howling squalidly after the echo of Mommy's warm approval, rather in the manner of the chiseled closet-cases in La Rand's Amway Romance Potboilers. It's all too embarrassing and awkward for words.

  10. 6. You defer more considerations to future generations (or our future selves)

    I recognize that future generations and our future selves will articulate the shape of technodevelopmental social struggles and their concrete outcomes, and so I talk about technoscientific change in a way that reflects this recognition, and I highlight the limitations of models of technoscientific change that fail to reflect it properly.

    and place much less weight on the argument that reducing existential risk should be our overwhelming ethical priority,

    I stress the need to democratize deliberation about technoscientific change so that the distribution of technodevelopmental costs, risks, and benefits better reflects the expressed sense of the stakes of the actual diversity of the stakeholders to that technoscientific change. I do not object to democratic deliberation about risks (including existential risks) in the least.

    I will say that I do object to the ways in which existential risk discourse has taken on what looks to me like the reactionary coloration of Terror and Security discourse in this era of neoliberal/neoconservative corporate-militarist distress, and I especially disapprove of the move of would-be professional futurologists who seem to believe now that the mark of their seriousness is precisely the skewing of futurism into a preoccupation with hyperbolic risk and superlative tech of a kind that mobilizes authoritarian concentrations of police power and facilitates endless welfare for the already rich stealthed as "defense."

    while placing more value on solving immediate problems.

    Our ongoing collective collaborative solution of contemporary problems becomes the archive to which we will make indispensable recourse as we seek to address future problems.

    Foresight will mean different things to those who advocate, as I do, peer-to-peer democracy rather than the elite technocracy that I think many would-be professional futurists advocate.

    7. You place less credence in the feasibility of superintelligent AI within the next 25, 50, and 1000 years than I do,

    I won't talk about feasibility in principle or timescale estimation for the technodevelopmental arrival of post-biological superintelligent entities until I am persuaded that their advocates know what the word "intelligent" means in the first place.

    9. Discussion of possible advanced AI is a projection/transcendentalization/warped outgrowth of concerns about 'networked malware.' [This one just totally baffles me....]

    My point is that the closest Singularitarian discourse ever comes to touching ground in my view is when it touches on such issues of networked malware. Needless to say, one needn't join a Robot Cult to contribute to policy in this area -- and indeed, I think it is fair to say one is more likely to so contribute if one doesn't join a Robot Cult. As you say, this sort of thing doesn't get us to entitative superintelligent AI. You'll forgive me if I suggest that this is a merit and not a flaw.

    James Hughes has written and spoken about evolving computer viruses on the Internet, and expecting advanced AI to come about through such a process,

    James is a close friend and respected colleague. But I don't think that this is a particularly compelling line of his.

    which seems to be tremendously less plausible than building an AI intentionally (including through the use of evolutionary algorithms or brain emulation).

    I think these scenarios are both sufficiently close to zero probability that squabbles about their relative plausibility are better left to angels-on-pinheads pinheads.

    Alternatively, it seems absurd to think that fears about computer viruses and about arbitrary utility-maximizing intelligences are related, even psychologically (fears about computer viruses are not fears about agents).

    I don't agree that it is absurd to connect these fears in the least -- as even a cursory summary of the tropes of science fiction will attest. All fears and fantasies of technodevelopment are, by the way, connected to fears and fantasies about agency (the discursive poles are impotence and omnipotence). Many who would calculate the Robot God Odds are indulging in a surrogate meditation on the relative technodevelopmental empowerment and abjection they are subjected to, immersed, as are we all, in the deranging churn of ongoing planetary technoscientific change.

  11. "Utilitarian" wrote:

    > The charge against these people has to be bias rather than lack of ability.

    Here's one kind of bias that shows up in the on-line >Hist
    community. It's a more-or-less respectable kind, as it has
    a distinguished history in the annals of philosophical discourse,
    as noted a century ago by William James.

    "The Truth: what a perfect idol of the rationalistic mind!
    I read in an old letter -- from a gifted friend who died too
    young -- these words: 'In everything, in science, art, morals
    and religion, there must be one system that is right and
    every other wrong.' How characteristic of the enthusiasm
    of a certain stage of youth! At twenty-one we rise to such a
    challenge and expect to find the system. It never occurs to
    most of us even later that the question 'what is the truth?' is
    no real question (being irrelative to all conditions) and that
    the whole notion of the truth is an abstraction from the
    fact of truths in the plural, a mere useful summarizing phrase
    like the Latin Language or the Law."

    -- William James, _Pragmatism_ (Lecture 7, "Pragmatism
    and Humanism"), 1907

    From the archives (2004 -- it was a very good year ;-> ):

    Dale wrote (in "Trouble in Libertopia",
    Monday, May 24, 2004):

    "Lately, I have begun to suspect that at the temperamental
    core of the strange enthusiasm of many technophiles for so-called
    'anarcho-capitalist' dreams of re-inventing the social order,
    is not finally so much a craving for liberty but for a fantasy,
    quite to the contrary, of TOTAL EXHAUSTIVE CONTROL.
    This helps account for the fact that negative libertarian technophiles
    seem less interested in discussing the proximate problems of nanoscale
    manufacturing and the modest benefits they will likely confer,
    but prefer to barrel ahead to paeans to the "total control over matter."
    They salivate over the title of the book From Chance to Choice. . .
    as if biotechnology is about to eliminate chance from our lives and
    substitute the full determination of morphology -- when it is much
    more likely that genetic interventions will expand the chances we
    take along with the choices we make. Behind all their talk of efficiency
    and non-violence there lurks this weird micromanagerial fantasy
    of sitting down and actually contracting explicitly the terms of
    every public interaction in the hopes of controlling it, getting it
    right, dictating the details. As if the public life of freedom can be
    compassed in a prenuptial agreement, as if communication
    would proceed more ideally were we first to re-invent language
    ab initio (ask these liber-techians how they feel about Esperanto
    or Loglan and you will see that this analogy, often enough,
    is not idle).

    But with true freedom one has to accept an ineradicable
    vulnerability and a real measure of uncertainty. We live in
    societies with peers, boys. Give up the dreams of total
    invulnerability, total control, total specification. Take a
    chance, live a little. . . Liberty is so much less than freedom."

    Temperamental core, indeed! As William James also
    observed a century ago:

    "Please observe that the whole dilemma revolves pragmatically
    about the notion of the world's possibilities. Intellectually,
    rationalism invokes its absolute principle of unity as a
    ground of possibility for the many facts. Emotionally, it
    sees it as a container and limiter of possibilities, a
    guarantee that the upshot shall be good. Taken in this way,
    the absolute makes all good things certain, and all bad
    things impossible (in the eternal, namely), and may be
    said to transmute the entire category of possibility into
    categories more secure. One sees at this point that
    the great religious difference lies between the men who
    insist that the world **must and shall be**, and those who
    are contented with believing that the world **may be**, saved.
    The whole clash of rationalistic and empiricist religion
    is thus over the validity of possibility. . ."

    -- William James, _Pragmatism_,
    Lecture 8, "Pragmatism and Religion"
    http://www.authorama.com/pragmatism-9-p-25.html

    Quote without comment:

    ----------------------------------------
    Re: Volitional Morality and Action Judgement
    From: Eliezer Yudkowsky
    Date: Tue May 25 2004 - 16:07:40 MDT

    Ben Goertzel wrote:
    >
    > Michael Wilson wrote:
    >
    > > The correct mode of thinking is to constrain the behaviour of
    > > the system so that it is theoretically impossible for it to
    > > leave the class of states that you define as desireable. This
    > > is still hideously difficult,
    >
    > I suspect (but don't know) that this is not merely hideously
    > difficult but IMPOSSIBLE for highly intelligent self-modifying
    > AI systems. I suspect that for any adequately intelligent
    > system there is some nonzero possibility of the system reaching
    > ANY POSSIBLE POINT of the state space of the machinery it's
    > running on. So, I suspect, one is inevitably dealing with
    > probabilities.

    Odd. Intelligence is the power to know more accurately and
    choose between futures. When you look at it from an
    information-theoretical standpoint, intelligence reduces
    entropy and produces information, both in internal
    models relative to reality, and in reality relative to a
    utility function.

    Why should high intelligence add entropy?

    It seems that if I become smart enough, I must fear making
    the decision to turn myself into a pumpkin; and moreover I
    will not be able to do anything to relieve my fear because
    I am too smart.
    ----------------------------------------

    I have more respect, BTW, for Ben Goertzel than for some
    of the other AI enthusiasts that have shown up on SL4.
    It pains me to hear him say stuff like

    > $5M . . . is a fair estimate of what I think it would
    > take to create Singularity based on further developing
    > the current Novamente technology and design.

    ( http://www.mail-archive.com/singularity@v2.listbox.com/msg00269.html )

    especially in light of the fact that he has been a voice of
    reason and moderation so often on SL4.

  12. "Utilitarian" wrote:

    > . . .bias rather than lack of ability. . .

    A less respectable form of bias that shows up all-too-often
    in on-line >Hist circles is not altogether separable from
    "lack of ability".

    From Joanna Ashmun's Web site on Narcissistic Personality
    Disorder:

    "Lacking empathy is a profound disturbance to the narcissist's thinking
    (cognition) and feeling (affectivity). Even when very intelligent, narcissists
    can't reason well. One I've worked with closely does something I characterize
    as "analysis by eggbeater." They don't understand the meaning of what people
    say and they don't grasp the meaning of the written word either -- because
    so much of the meaning of anything we say depends on context and affect,
    narcissists (lacking empathy and thus lacking both context and affect) hear
    only the words. (Discussions with narcissists can be really weird and
    disconcerting; they seem to think that using some of the same words means
    that they are following a line of conversation or reasoning. Thus, they will
    go off on tangents and irrelevancies, apparently in the blithe delusion
    that they understand what others are talking about.) And, frankly, they
    don't hear all the words, either. They can pay attention only to stuff
    that has them in it. This is not merely a bad habit -- it's a cognitive
    deficiency."

    http://www.halcyon.com/jmashmun/npd/traits.html

  13. "Utilitarian" wrote:

    > Yudkowsky is not conventionally credentialed, but was a prodigy
    > and clearly has very high fluid intelligence.

    Dale wrote:

    > All these weird attributions of prodigious talent and ability
    > one hears inside the circled-wagon circle-jerk of transhumanish
    > subcultures just sound to me like pampered superannuated
    > boy-infants howling squalidly after the echo of Mommy's warm approval. . .

    http://web.archive.org/web/20061128141921/http://singularity.typepad.com/anissimov/2006/09/happy_birthday_.html

    (the pictures are gone but the caption remains ;-> ).

    Seriously though, this notion of "child prodigy" has a
    dark underbelly.

    Some of the advocates of "gifted" kids have strange, strange
    agendas of their own. Linda Kreger Silverman, for example,
    who took a serious blow to her reputation over the case of
    Justin Chapman
    ( http://denver.rockymountainnews.com/justin/index.shtml )
    was later unflatteringly associated with Brandenn Bremmer, a teenage
    "prodigy" who committed suicide, in a piece in a January 2006
    issue of the _New Yorker_ ("Prairie Fire" by Eric Konigsberg
    http://www.newyorker.com/archive/2006/01/16/060116fa_fact_konigsberg )

    As reported at
    http://emdashes.com/2006/01/eustace-google-the-strange-sad.php :

    > Konigsberg expresses open skepticism only once, in a brief aside
    > when listening to the afterlife theories of Hilton Silverman,
    > who’s married to Linda Silverman of the Gifted Development Center. . .
    >
    > “Well, I can tell you what the spirits are saying,” [Hilton Silverman]
    > said. “He was an angel.”
    >
    > [Linda] Silverman turned to face me. “I’m not sure how much you
    > know about my husband. Hilton is a psychic and a healer. He has cured
    > people of cancer.”
    >
    > “It kind of runs in my family: my grandfather was a kabbalistic rabbi
    > in Brooklyn, and my father used to heal sick babies with kosher salt,”
    > Hilton said. “Brandenn was an angel who came down to experience the
    > physical realm for a short period of time.”
    >
    > I asked Hilton how he knew this. He paused, and for a moment I wondered
    > if he was pulling my leg and trying to think up something even more
    > outlandish to say next. “I’m talking to him right now,” he said.
    > “He’s become a teacher. He says right now he’s actually being taught
    > how to help these people who experience suicides for much messier reasons.
    > Before Brandenn was born, this was planned. And he did it the way he
    > did so that others would have use for his body. Everything worked out
    > in the end.”

    And of course, if you can believe "an entity called Kryon", then
    some of these prodigies are so-called "Indigo Children" sent to
    save the world:
    http://www.cinemind.com/atwater/QuantumLeap.html

    Y'know, they have extraterrestrial souls, sort of like the "Canopeans"
    in Doris Lessing's strange science-fictionish series
    _Canopus in Argos: Archives_ (_Shikasta_ et seq.).

    http://www.jerrypippin.com/UFO_Files_ann_andrews.htm
    http://www.paolaharris.it/jason.htm

  14. Anonymous 1:42 PM

    "> [E]ven if existential risks were somehow known to be zero,
    > moral behavior would still appear fanatical to almost everyone.

    I would hope there's a significant distinction to be made.

    Again, I appeal to Bertrand Russell:

    --------------------
    WOODROW WYATT: Lord Russell, what is your definition of
    fanaticism?

    BERTRAND RUSSELL: I should say that, uh, fanaticism
    consists in thinking some one matter so overwhelmingly
    important that it outweighs everything else at all.
    To take an example: I suppose all decent people dislike
    cruelty to dogs. But if you thought that cruelty to
    dogs was so atrocious that no other cruelty should be
    objected to in comparison, then you would be a fanatic."

    If you can relieve more than ten times as much poverty or save a hundred times as many lives by donating to benefit the poor in Botswana rather than the poor in San Francisco, and you have to allocate $1 million between the two tasks, then I think you should put 100% into Botswana. They need it more, and more good will be done. That attitude appears fanatical to most people, but I am fairly strongly convinced that it's right. That doesn't mean I value the lives of San Francisco people less than Botswanans, or that I don't object to their suffering.
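
    To spell out the arithmetic behind that allocation, here is a minimal sketch in Python; the per-dollar effectiveness figures are invented purely for illustration, and only the rough ratio between them matters:

    -------------------
    # Splitting $1,000,000 between two poverty programs when one saves far
    # more lives per dollar. The per-dollar figures are made up for illustration.

    BUDGET = 1_000_000
    LIVES_PER_DOLLAR = {
        "Botswana": 1 / 1_000,         # one life saved per $1,000
        "San Francisco": 1 / 100_000,  # one life saved per $100,000
    }

    def lives_saved(share_to_botswana):
        to_botswana = BUDGET * share_to_botswana
        to_sf = BUDGET - to_botswana
        return (to_botswana * LIVES_PER_DOLLAR["Botswana"]
                + to_sf * LIVES_PER_DOLLAR["San Francisco"])

    for share in (0.0, 0.5, 1.0):
        print(f"{share:>4.0%} to Botswana -> {lives_saved(share):7.1f} lives saved")
    -------------------

    As long as the returns stay roughly linear at this scale, any split allocation saves fewer lives than putting everything where it does the most good, which is why the 'fanatical'-looking answer falls out of quite ordinary arithmetic.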

    Also, my fanaticism doesn't lead me to hate an out-group (negative utilitarians who want to eliminate all sentient life to reduce suffering, perhaps?).

    "I honestly don't think any of the so-called soopergeniuses in the Superlative Futurological Congress exhibit much more than quotidian intelligence."
    I don't think that any of these people are Nobel/Fields caliber, but maybe we have a different definition of 'quotidian intelligence.' I would say that the one in ten thousand level is more than quotidian, even if several hundred such minds are cranked out every year in the United States, and more are imported from around the world. Further, most of them are concerned with their own particular fields (although at the level of 'will we develop superintelligence?' you do have individuals like Stephen Hawking, Bill Gates, Warren Buffett, etc giving an affirmative answer).

    At least, I think that the more informative ad hominems would be along the lines of the comment about Nick's transhumanist affiliations, or Kurzweil's fear of death (and the personal and financial benefits he has reaped from his popularization), and Yudkowsky's troubled childhood and seeming messiah complex (while he is careful to avoid making predictions or claims about probability of success in light of the psychological research on overconfidence, this does seem to characterize his emotional responses). These are all important criticisms that I would characterize as about bias rather than general ability.

    "I have more respect, BTW, for Ben Goertzel than for some
    of the other AI enthusiasts that have shown up on SL4.
    It pains me to hear him say stuff like

    > $5M . . . is a fair estimate of what I think it would
    > take to create Singularity based on further developing
    > the current Novamente technology and design."
    I mentioned earlier in the conversation that such statements are ridiculous. This is self-deception, salesmanship, or both, particularly after WebMind and other previous failed work by Goertzel.

    "Lacking empathy is a profound disturbance to the narcissist's thinking
    (cognition) and feeling (affectivity). [...] This is not merely a bad habit -- it's a cognitive
    deficiency."
    I agree that narcissistic traits are over-represented among transhumanists, and more so among people purporting to be capable of rapidly building AI (or proving that P!=NP, for that matter). However, the extreme presentation in the link you sent seems to describe only a small minority, and others are in fact quite kind and sympathetic people. Did you want to name any names in particular?

  15. "Utilitarian" wrote:

    > I agree that narcissistic traits are over-represented among
    > transhumanists. . . However, the extreme presentation in the
    > link you sent seems to describe only a small minority. . .

    Well, more than 10 years ago, Mike Darwin (Mike Federowicz),
    who should know, said:

    http://www.cryonet.org/cgi-bin/dsp.cgi?msg=7510
    -------------------------------
    In my personal experience, somewhere around 50% of the cryonicists I've met meet
    the DSM classification for Narcissistic Personality Disorder. It has little to
    do with "selfishness" in the sense it is being discussed here. It has a great
    deal to do with:

    *being unable to understand how other people feel
    *being unable to reasonably anticipate how other people will react to social
    situations
    *being relatively insensitive to the subtle, nonverbal cues of others' behavior
    which are critical to quality communication

    *having a sense of comic book grandiosity such as renaming yourself Tom Terrific
    or Super Mann.
    *being unable to laugh at yourself and to see yourself in the context of your
    own humanity, and, to quote Robert Burns "see ourselves as others see us."
    *seeing others as either enemies or friends, as all good or all bad but not
    being able to deal with shades of gray and to accept people as the complex,
    flawed and often contradictory creatures they are
    *believing John Galt, Dagny Taggart or any other Ubermensch are even possible
    as real people, let alone desirable as ideals

    *being unable to focus on the mundane tasks required to achieve dreams and goals
    because they are always fixated on the ideal, the perfect, THE FUTURE.

    *not being able to live fully and well now because they are held back by a crude
    world, full of crude people who are keeping them from success and who will be
    gone "comes the revolution" or "comes nanotechnology."
    *seeing solutions to complex problems in terms of narrow, simplistic answers.

    Such people are tiresome, unforgiving and often vicious. And yes, the world has
    plenty of them who are most decidedly not cryonicists. Not all the
    characteristics listed above are in every person with NPD. But enough are.
    -------------------------------

    He's talking about cryonicists, of course, not transhumanists
    or singularitarians per se, yet I think you'll acknowledge the
    correlation.

    Michael Anissimov, BTW, made an interesting remark recently:

    > Although many don’t trumpet it much, there does actually exist a
    > network of mutually communicating and deeply linked individuals who
    > have transhumanist beliefs in common. I’ve called this the
    > Transhumanist Collective before, but “networked transhumanists” is
    > a more normal way of referring to the same thing. . .
    >
    > Noteworthy stand-out qualities of the people in these transhumanist
    > networks are: . . . 4) the desire to live forever (sometimes hidden
    > in public). . .

    "Transhumanists as a Coherent Group"
    Tuesday, Oct 2 2007
    Michael Anissimov 6:15 am
    http://www.acceleratingfuture.com/michael/blog/

    > Did you want to name any names in particular?

    I am not entirely uncognizant of the risk, in posting remarks like
    these, of being harassed legally (now or later) for libel or
    defamation of character.

    So no, I'm not going to name names in public. I think you can
    take a good guess, though.

    > I think that the more informative ad hominems would be along the
    > lines of. . . Yudkowsky's troubled childhood and seeming messiah complex. . .
    > [which] does seem to characterize his emotional responses

    The first time I read his (since superseded) "Coding a Transhuman AI",
    back in 2001, I came across a passage in which he (without winking)
    called his "seed AI" Elisson. As in "Eli's Son".

    I admit to doing a double-take, and then my jaw dropped. He's
    serious! Every once in a while he has let slip some similar doozies
    on the lists.

    Well, if you're not going to be the Son of God I guess you can
    one-up the Christians by being His Father.

  16. http://web.archive.org/web/20061128141921/http://singularity.typepad.com/anissimov/2006/09/happy_birthday_.html

    Great Scott! It appears that someone said happy birthday to their friend on a blog!

  17. Anonymous 3:01 PM

    James,

    I had read the cryonet post before and thought it valuable. I think that people concerned with existential risk from AI (other than those seeking to build it themselves) are on the whole less narcissistic than cryonics enthusiasts. A substantial contingent are people who just want to be very effective altruists, coming to these issues from backgrounds in global poverty, philosophical ethics, and climate change.

    Many of these people are donors (anonymously or otherwise) to organizations such as SIAI, giving of themselves without receiving any glory in return, although they are not the public face of 'Singularitarianism.'

    "I came across a passage in which he (without winking)
    called his "seed AI" Elisson. As in "Eli's Son""
    That was in jest, and Yudkowsky is capable of self-deprecating satire. I was thinking about more substantive statements.

  18. Anonymous 3:06 PM

    Michael,

    That page also includes the following comment:

    "From an unknown fellow, I too would like to wish Eliezer Yudkowsky a happy birthday, for he has greatly influenced me.

    And no, there has probably never been a genuis like him.

    Posted by: Peter Grutsky | September 14, 2006 at 05:20 PM"

  19. "Utilitarian" wrote:

    > [I quoted]:
    >
    > > . . .Elisson. . .
    >
    > That was in jest, and [he] is capable of self-deprecating
    > satire.

    "Narcissists have little sense of humor. They don't get jokes,
    not even the funny papers or simple riddles, and they don't make
    jokes, except for sarcastic cracks and the lamest puns. This is
    because, lacking empathy, they don't get the context and affect
    of words or actions, and jokes, humor, comedy depend entirely on
    context and affect. They specialize in sarcasm about others and mistake
    it for wit, but, in my experience, narcissists are entirely incapable
    of irony -- thus, I've been chagrined more than once to discover that
    something I'd taken as an intentional pose or humorous put-on was,
    in fact, something the narcissist was totally serious about. Which
    is to say that they come mighty close to parody in their pretensions
    and pretending, so that they can be very funny without knowing it,
    but you'd better not let on that you think so."
    http://www.halcyon.com/jmashmun/npd/traits.html

    > I was thinking about more substantive statements.

    From SL4 back in January, 2002
    (the links are no longer valid):

    ------------------
    http://www.sl4.org/archive/0201/2638.html
    Re: Ethical basics
    From: ben goertzel (ben@goertzel.org)
    Date: Wed Jan 23 2002 - 15:56:16 MST

    Realistically, however, there's always going to be a mix
    of altruistic and individualistic motivations, in any
    one case -- yes, even yours...
    ------------------
    http://www.sl4.org/archive/0201/2639.html
    Re: Ethical basics
    From: Eliezer S. Yudkowsky (sentience@pobox.com)
    Date: Wed Jan 23 2002 - 16:16:57 MST

    Sorry, not mine. I make this statement fully understanding the size of
    the claim. But if you believe you can provide a counterexample - any case
    in, say, the last year, where I acted from a non-altruistic motivation -
    then please demonstrate it.
    ------------------
    http://www.sl4.org/archive/0201/2640.html
    RE: Ethical basics
    From: Ben Goertzel (ben@goertzel.org)
    Date: Wed Jan 23 2002 - 19:14:47 MST

    Eliezer, given the immense capacity of the human mind
    for self-delusion, it is entirely possible for someone
    to genuinely believe they're being 100% altruistic even
    when it's not the case. Since you know this, how then can
    you be so sure that you're being entirely altruistic?

    It seems to me that you take a certain pleasure in being
    more altruistic than most others. Doesn't this mean that
    your apparent altruism is actually partially ego gratification ;>
    And if you think you don't take this pleasure, how do you
    know you don't do it unconsciously? Unlike a superhuman AI,
    "you" (i.e. the conscious, reasoning component of Eli) don't
    have anywhere complete knowledge of your own mind-state...

    Yes, this is a silly topic of conversation...
    ------------------
    http://www.sl4.org/archive/0201/2646.html
    Re: Ethical basics
    From: Eliezer S. Yudkowsky (sentience@pobox.com)
    Date: Wed Jan 23 2002 - 21:29:18 MST

    > Yes, this is a silly topic of conversation...

    Rational altruism? Why would it be? I've often considered
    starting a third mailing list devoted solely to that. . .

    No offense, Ben, but this is very simple stuff - in fact,
    it's right there in the Zen definition of altruism I quoted.
    This is a very straightforward trap by comparison with any
    of the political-emotion mindtwisters, much less the subtle
    emergent phenomena that show up in a pleasure-pain architecture.

    I don't take pleasure in being more altruistic than others.
    I do take a certain amount of pleasure in the possession and
    exercise of my skills; it took an extended effort to acquire them,
    I acquired them successfully, and now that I have them,
    they're really cool.

    As for my incomplete knowledge of my mind-state, I have a lot
    of practice dealing with incomplete knowledge of my mind-state -
    enough that I have a feel for how incomplete it is, where,
    and why. There is a difference between having incomplete knowledge
    of something and being completely clueless. . .

    I didn't wake up one morning and decide "Gee, I'm entirely
    altruistic", or follow any of the other patterns that are the
    straightforward and knowable paths into delusive self-overestimation, nor
    do I currently exhibit any of the straightforward external signs which are
    the distinguishing marks of such a pattern. I know a lot about the way
    that the human mind tends to overestimate its own altruism.

    I took a couple of years of effort to clean up the major
    emotions (ego gratification and so on), after which I was pretty
    much entirely altruistic in terms of raw motivations, although
    if you'd asked me I would have said something along the lines of:
    "Well, of course I'm still learning... there's still probably
    all this undiscovered stuff to clean up..." - which there was,
    of course; just a different kind of stuff. Anyway, after I in
    *retrospect* reached the point of effectively complete
    strategic altruism, it took me another couple of years after
    that to accumulate enough skill that I could begin to admit
    to myself that maybe, just maybe, I'd actually managed to clean
    up most of the debris in this particular area.

    This started to happen when I learned to describe the reasons why
    altruists tend to be honestly self-deprecating about their own altruism,
    such as the Bayesian puzzle you describe above. After that, when I
    understood not just motivations but also the intuitions used to reason
    about motivations, was when I started saying openly that yes, dammit, I'm
    a complete strategic altruist; you can insert all the little qualifiers
    you want, but at the end of the day I'm still a complete strategic
    altruist. . .
    ------------------
    http://www.sl4.org/archive/0201/2649.html
    RE: Ethical basics
    From: Ben Goertzel (ben@goertzel.org)
    Date: Thu Jan 24 2002 - 07:02:42 MST

    > > Yes, this is a silly topic of conversation...
    >
    > Rational altruism? Why would it be? I've often considered
    > starting a third mailing list devoted solely to that.

    Not rational altruism, but the extended discussion of *your
    own personal psyche*, struck me as mildly (yet, I must admit, mildly pleasantly) absurd...

    > No offense, Ben, but this is very simple stuff

    Of course it is... the simple traps are the hardest to avoid,
    even if you think you're avoiding them.

    Anyway, there isn't much point to argue on & on about how
    altruistic Eli really is, in the depth of his mind. . .

    The tricks the mind plays on itself are numerous, deep and
    fascinating. And yet all sorts of wonderful people do emerge,
    including some fairly (though in my view never completely) altruistic ones...
    ------------------

    Uh **huh**.

    from _The Guru Papers: Masks of Authoritarian Power_
    by Joel Kramer and Diana Alstad
    Chapter 10, "The Traps of Being a Guru", pp. 107-110

    "The person most at risk of being strangled by the images demanded
    by the role of the guru is the guru. . .

    At the heart of the ultimate trap is building and becoming
    attached to an image of oneself as having arrived at a state where
    self-delusion is no longer possible. This is the most treacherous
    form of self-delusion and a veritable breeding ground of hypocrisy
    and deception. It creates a feedback-proof system where the
    guru always needs to be right and cannot be open to being shown
    wrong -- which is where learning comes from.

    When people portray themselves as beyond illusion -- and therefore
    no longer subject to ego, mistakes, subjectivities, the unconscious,
    or creating delusional systems that are self-aggrandizing --
    what is actually being claimed? Is it that they have never been
    deluded? Or that they aren't deluding themselves now? Or that
    they can never be deluded again? For the claim of freedom from
    self-delusion to have any force, it must also be a claim about
    the future. Who would go to a guru who said, 'I'm free of
    self-delusion now, but might not be tomorrow'? No matter how much evidence
    casts doubt on this stance of unchallengeable certainty, it is always
    possible to maintain that the place of such exalted knowledge
    is not subject to the proofs and judgments of ordinary people.
    But whether being beyond self-delusion is possible or not, presenting
    oneself to others in this fashion sets up an inevitable pattern
    of interaction. If a person believes another is so realized,
    it automatically creates not only awe and worship, but the belief
    that this person 'knows better.' Why would even the most realized
    of beings want people to become reliant on his wisdom instead of
    their own? Whether anyone actually achieves this state can be debated;
    what ought to be obvious to us is that this mode is authoritarian.

    To project that one will be a certain way in the future is to build
    an image of oneself that has within it the want and need to believe
    (or for others to believe) one will in fact be that way. This
    image of the guru as beyond self-delusion cuts off real awareness
    in both gurus and disciples. A crucial element in being self-aware
    involves being alert to when one is 'putting oneself on' -- meaning,
    telling oneself what one wants to hear. . .

    There is a tendency within the human mind to construct a universe
    with itself at the center. This is one place subjectivity comes
    from. Sanity is the realization that one is not alone in doing this.
    Sanity is also the capacity to change through being open to feedback,
    to new information. The idea that any one mind has a corner on
    the truth creates isolation that is extraordinary. . . So, another
    great danger for gurus is emotional isolation. . .

    Being a 'knower,' as opposed to a seeker, is part of being a guru.
    This implies an essential division between the guru and others.
    The guru in effect says, 'I'm here, and you're there; and not
    only can I help you move from here to there, but that's what I'm here
    for.' Being different (or rather, being perceived as different)
    is the foundation of the guru's dominance. Relations of dominance
    and submission often contain extreme emotions. But if dominance
    and submission are the essential ingredients in the glue holding
    the bond together, the connection is not really personal. Gurus
    and disciples need each other, but as roles, not as individuals,
    which makes real human connection almost impossible. . .

    Nor can gurus have any real connection with other supposed
    'superhumans' (other gurus) because of the inherent competition
    among them. Years ago, when we first became interested in gurus
    and Eastern concepts such as enlightenment, it initially seemed
    an oddity that all these supposedly enlightened beings did
    not seek out each other's company. With each other they presumably
    could find deep and real understanding, and respite from always
    having to deal with minds at a lower level. But since disciples
    view their guru as a vehicle for their own salvation, they
    must believe that he can do the best for them. Consequently, the
    meeting of gurus, when it occurs (it rarely does), is always
    laden with heavy meaning, as the disciples watch very carefully to
    see who comes out best. Even the simplest acts (who goes to
    see whom) have implications of dominance. The fact is that gurus
    do not 'hang out' together because the structure of the role makes
    it nigh impossible. Thus even intimacy with peers is denied them."

  20. Anonymous7:33 PM

    James,

    Yes, the self-attribution of an improbable conjunction of ultra-extreme ability, altruism, and success at debiasing, when these traits are only imperfectly correlated, is among Yudkowsky's most suspect claims.
