Friday, January 11, 2008

From Future Shock to Future Fatigue

Friend of Blog Jim Fehlinger points out in the Moot: [T]he latest gambit among some of the hard-core futurists is to pooh-pooh all science fiction ([indeed,] all fiction, presumably) as being an "irrational" way to "reason about" the future.

By "hard-core" futurists Jim means to refer to Singularitarians like Michael Anissimov (a welcome Commenter with whom I regularly spar spiritedly here) and Eliezer Yudkowsky (eh... not so much). But I think this trend is also a more mainstream one as well:

For example, some are beginning to get nervous about the curious prevalence of fictional narrative at the heart of an awful lot of bioethics discussion. After all, Frankenstein, the golem, designer super-babies, clone armies, genetically superhumanized abilities, genetically subhumanized slaves, human-animal hybrids and so on don't actually exist despite their frequent appearances in discussions influencing actual health policies impacting people.

Perhaps this phenomenon might also relate to the recent relinquishment of the far-futural imaginary for a more near-futural focus one discerns in many key science fiction authors. This move doesn't make much sense to me inasmuch as science fiction has always seemed to me to be about the present more than the future anyway: It is the literature for a present culture reverberating in the stresses of what it is ambivalently and painfully and clumsily aspiring after more than in the stresses of the ambivalent and painful and clumsy legacies of its past.

Maybe the relinquishment of the far-future by speculative fiction reflects (to the extent that it is really happening at all) the appalled recognition in the privileged North of the price in bloodshed and environmental devastation our unearned privileges have actually cost, a recognition that either makes us doubt the inevitability of progress or makes us doubt the breadth of its benefits (or, one hopes, both), a recognition facilitated by our sudden and revolutionary immersion in peer-to-peer formations that confront us viscerally with truths and consequences and voices we've been hiding from or that have been hidden from us for too long.

Anyway, about this recent repudiation of science fiction as a serious space for thinking through the complexities of technodevelopmental change in our lives and deliberating about its directions, Jim writes: This is exactly the reverse of the party line among some of those same folks 10 years ago. I suspect there might be a bit of sour grapes in this tack -- the pre-eminent SF authors today have tended not to take the ("serious") futurists as seriously as they no doubt expected to be taken.


To these suspicions I will add two of my own:

First: There is an inevitable transition from future-shock to future-fatigue that thinking persons (even the fanboys) caught up for a time in the sensawunda and hype of the superlative futurological imaginary fall prey to as the magical toypile, the tourist moonbases, the wish-granting everything-boxes, the better-than-actual-reality virtual realities, the immortality pills, the energy too cheap to meter and so on fail to materialize (as they always do). The impact of this future-fatigue has normally been muted by the fact that bourgeois societies keep whomping up new generations of naive consumerist techno-enthusiasts year after year, whose noise-making spectacle tends to drown out the skeptical war-weary voices of hard-won experience as they emerge. This muting is less likely to happen, though, in eras like our own: when many educated people who would otherwise expect to benefit (unfairly) from industrial-model elite technoculture are instead economically insecure, when the mass-mediation of corporate-militarist technocracy and inevitable progress is displaced by p2p formations peopled with victims of unregulated, undemocratic technoscientific change and with subversive technodevelopmental analyses, when widespread worries about energy and resource descent give the lie to such hype, and so on.

Second: Popular futurists are actually competing with science fiction authors for much the same readership at this point. The hostility of some would-be "professional" futurological prognosticators to speculative fiction writers might well arise from the fact that futurists are almost always simply worse writers than the sf writers with whom they are competing for attention. How often, after all, do futurist scenarios amount to science fiction but, you know, without characters to solicit our imaginative identification, without the twists and turns of plot to engage our attention through the conjuration of suspense, without the pleasures of the selective and mounting revelation of information, without the construction of dramatic rhythms, confrontations, and climaxes, and so on? In the absence of plot, character, literary conventions and so on, futurological fabulists often seem to want to deny they are writing fiction at all and try to pretend to be social scientists of some kind instead... which is such a fantastic and embarrassing gambit it can't help but make some of them resentful after all.

PS: For those who are curious, by the way, one of the reasons I like my favorite professional futurist, Jamais Cascio, so much is that I think he is aware of these limitations in many of his colleagues and circumvents them in his own work through his insistence on multiple scenarios that are never specifically predictive but always only foresightful, and only in aggregate. To the extent that it is true, as he says, that the business and occupational hazard of the futurist is TATF ("thinking about the future"), it seems to me Cascio is not finally a futurist at all, because I think there is something deeply and even definitively different between TATF and TAOF ("thinking about open futures"). But that is a matter for a future discussion.

35 comments:

  1. Anonymous 5:17 PM

    "human-animal hybrids"
    http://scienceline.org/2007/08/31/bio_anderson_chimera/
    There are valuable applications of human-animal chimeras to medical research ready to go today, applications that are being hampered by 'yuck' reactions. Incorporating human cells into animals makes it possible to perform experiments that would be unethical to conduct on humans.

  2. Anonymous 5:48 PM

    "human-animal hybrids and so on *don't actually exist*"

    So, why this phrase?

  3. You know, centaurs, sheeple, angel-wings, horse schlongs (if only it were true), etc.

  4. Anonymous 6:02 PM

    I'm not sure that centaurs are much more objectionable than the plans of prominent biologists who want to create monkeys with humanized brains to study cognitive development, Alzheimer's, etc.

  5. "Humanized" monkey brains sounds more objectionable than centaurs to me, certainly. And I've read Harry Potter.

  6. Anonymous 11:03 PM

    I've read some of the literature on chimeras, though what I saw involved mice with "humanized" brains. It was all the hype in the media for a while too, complete with pictures of Pinky and the Brain. The truth was that the mice had a few human neurons so they could study the plaques that form in Alzheimer's. The media and the bio-ludds turned it into a panic fest for a while but I guess it never took root.

    The fact is that brain cells' actual activity seems more related to the overall surrounding brain structure and the general developmental pattern of the organism than to the species origin of the specific cells. That is, put a human neuron in a mouse and you get a mouse that does the same mouse things, but using a few human brain cells to do them. My guess is that monkeys with some human brain cells will end up being, well, monkeys with human brain cells.

  7. Which of course was my point (well the point of the small bit of the piece that has been the focus of this discussion):

    The chimera is a Harryhausen creature of deeply disseminated cultural myth, not an essentially nonhuman animal with a few unexpected neurons. Describing this as a "humanization" of monkey brains or the resulting organism as "chimerical" seems to me not exactly the world's most useful or clarifying rhetorical move.

    By the way, I'm bracketing a whole lot of animal rights politics here -- just let me say I strongly prefer computer models where possible (which is more often than one might think), consensual human trials where possible (which is more often than one might think), and so on and so forth.

    Anyway, when Bush raises the "specter" of human-animal hybrids in the State of the Union it isn't a few cells in a petri dish that people have in their minds. (Well, what I have in my mind when Bush speaks in the State of the Union is What's he lying about this time and how many people will have to die for this jackhole this time, but that is neither here nor there, I suppose.)

    As usual, technodevelopmental deliberation seems to have been badly served by the figurative entailments of the language being used to get a handle on it and biocons (among other anti-dems) are more than able and happy to opportunistically make use of the resulting confusions in the service of their smug eugenic preservationism (which is surely quite as objectionable as the smug "enhancement" eugenicism the biocons decry).

  8. Anonymous 3:49 PM

    Got it. The superlatarians are playing into the hands of the biocons except that while the biocons think "scary icky whatever" the superlatarian rhetoric is the opposite - "cool, super whatever". I guess both sides play off the hype of the other and science and reality get lost in the fog.

    Computer models, hmm.. that's a bit of a reductionist position for an anti-reductionist to take. There's also the problem of sheer computational power. Current machines, as powerful as they are, are still not up to the task of doing reasonable simulations of detailed biology. We're getting to where a single cell's biochemistry is almost doable but only with a cluster. I think the computer models might be a good way to limn the possibilities but we would still need animal trials to confirm the results. I do think researchers should make more effort to find human volunteers. I know my own mom (who has MS and vision problems) would volunteer for almost anything if they would just take her in the study.

  9. Computer models, hmm.. that's a bit of a reductionist position for an anti-reductionist to take.

    Nah... the point is not epistemological but ethical.. Don't hurt things that hurt if you can avoid it (and you usually can). Cause things that suffer avoidable suffering and you end up suffering even when you think you won't. As James Baldwin translated the key insight of karma for us all: People pay for what they do, and still more for what they have allowed themselves to become. And they pay for it very simply; by the lives they lead. How's that for anti-reductionism? :)

  10. Anonymous 5:09 PM

    Nah... the point is not epistemological but ethical..

    Yes I know. At some point though we realize that we all plow through the universe as great warships killing wantonly as we go. Does treating a massive infection mean murdering billions of suffering living things? What about destroying mosquitoes to prevent malaria or West Nile virus? Some religions like Jainism seem to think that ALL life is sacred and these are crimes. I think an ethic of suffering minimization must take into account the human suffering minimized by medical research as well as the suffering of non-human animals that it entails.

  11. Anonymous 5:11 PM

    So the actual argument here is at The Logical Fallacy of Generalization from Fictional Evidence on Overcoming Bias, which does consider the standard arguments for fiction as a tool of visualization.

  12. I think an ethic of suffering minimization must take into account the human suffering minimized by medical research as well as the suffering of non-human animals that it entails.

    Nothing I said implied I didn't take such into account.

  13. Shorter Eliezer Yudkowsky: "Me me me me me me me me me me me me me me me me me me me!"

  14. Anonymous 9:25 PM

    Shorter Eliezer Yudkowsky: "Me me me me me me me me me me me me me me me me me me me!"

    He has a point.

  15. I confess I'm shocked, peco, shocked to hear you say so.

  16. Anonymous 10:12 AM

    I think he misunderstood what you were saying, but he wasn't just trying to draw attention to himself.

  17. Anonymous 10:37 AM

    I'm very cautiously delighted to see that Eliezer Yudkowsky stopped by at Amor Mundi. Now if only a dialogue could open up between Eliezer and Dale! (Though from what I read between the lines there were previous altercations on other forums.) I think they're both fascinating writers in their own ways. When I look at the respective folders on my hard drive -- I'm a compulsive file saver -- I see more than 200 items by Yudkowsky and nearly 70 by Carrico. (The difference stems from the fact that I discovered Amor Mundi later.) If Eliezer could manage to explain to Dale why he thinks that research in the field of FAI is important -- and I can't say that I'm mostly convinced about this -- he'd be able to talk to anyone about the Singularity because, as things stand now, I can't imagine a more hostile audience than Dale :-)

    FrF

  18. > If Eliezer could manage to explain to Dale why he thinks that research
    > in the field of FAI is important. . .

    Hm. . . Perhaps if Dale were lobotomized, he'd be more amenable
    to the explanation.

    Any takers?

  19. Anonymous 1:40 PM

    Well, I think in part the problem with the SIAI may be its own PR. It could help to take the stakes down a couple of notches, like some propose with regard to the life extension program of SENS. Just as you could see the latter, without accepting its superlative talk, as radical health care, so you could interpret SIAI as foundational research for AI and then FAI.

    FrF

  20. FrF wrote:

    > [I]n part the problem with the SIAI may be its own PR. . .

    What the hell else has there ever been there **but** PR?

    > [Y]ou could interpret SIAI as foundational research for AI. . .

    No, not much foundation there, I'm afraid.

    "No bottom," as Francis Urquhart would say. ;->

  21. Anonymous 4:12 PM

    There's the book-length Creating Friendly AI, for example. (I have to admit that I haven't read it yet.)

    Yudkowsky's opinions are a lot more refined than he's given credit for on Amor Mundi - which doesn't mean that even when you factor these nuances in, there still isn't a lot to disagree with philosophically.

    I'm with Dale, though, when it comes to the continuing importance of politics (that great constraining enemy of a lot of transhumanists!) and differing "stakeholder interests", as he puts it.

    FrF

  22. There's the book-length Creating Friendly AI, for example. (I have to admit that I haven't read it yet.)

    I have, actually. You wouldn't believe the sheer amount of this Superlative tech stuff I actually read.

    Yudkowsky's opinions are a lot more refined than he's given credit for on Amor Mundi -- which doesn't mean that even when you factor these nuances in, there still isn't a lot to disagree with philosophically.

    If by "refined" you mean detailed, then yes I agree with you. But on the basis of the quite "a lot to disagree with philosophically" among those details -- pertaining in my view to matters of fundamental substance -- I fear I must disagree with you that Yudkowsky warrants a deeper engagement into those details.

    I find him to be a ridiculous figure, I'm afraid. Obviously, ymmv. As no doubt would his. If I were going to delve even more deeply into AI than I have done I daresay Yudkowsky isn't the person I'd consult in any case -- even if he weren't such a complete disaster politically and rhetorically, and surrounded by dumb shrill sycophants in a Robot Cult as he is -- I'd turn to the far more interesting and respectable recommendations of my friend Robin (who is much more sensible on embodiment questions than the superlative technocentrics who come round here calculating the Robot God Odds and confusing that for substance).

  23. FrF wrote:

    > There's the book-length Creating Friendly AI, for example.
    > (I have to admit that I haven't read it yet.)

    I was surprised at how powerfully I was swept along
    by Eliezer's _Staring into the Singularity_ back in '97
    (even though there were disturbing glimpses into
    the more unsavory elements of his personality
    in that article). But by the time Eli's
    "Coding a Transhuman AI" came out in 2000 or 2001
    or whenever, I became aware that his ideas were
    a pastiche of half-baked fragments glued together
    by his characteristic hectoring tone and finger-wagging
    attitude, with little original or even coherent
    content. I haven't bothered with "Friendly AI"
    or "Collective/Coherent [Extrapolated] Volition".

    That's my own more-or-less educated layman's impression;
    I'm certainly not qualified to peer-review literature
    on AI. But as far as I know, no one of any independent (of
    SIAI) intellectual standing has corroborated any of Yudkowsky's
    ideas about AI, and in fact, he keeps digging himself bigger
    and bigger pits by going beyond AI (as difficult as that
    is in the first place) into the philosophy of ethics,
    in which he is laughably unqualified, a dabbler with as
    much sophistication as Ayn Rand, in his attempts to define
    "friendliness".

  24. Anonymous 7:08 PM

    But as far as I know, no one of any independent (of
    SIAI) intellectual standing has corroborated any of Yudkowsky's
    ideas about AI


    Would Ben Goertzel, before he was SIAI's Director of Research, qualify?

  25. Anonymous 8:00 PM

    Or Nick Bostrom? How Long Until Superintelligence?, Predictions from Philosophy (section "Superintelligence"), and possibly other papers take superintelligence seriously, although they don't mention Yudkowsky.

  26. Ya got it Dale,

    That's why the SL4 crowd always hated me. They knew I was a far better sci-fi writer than any of them are. They know it's only a matter of time before I become a best-selling multi-millionaire sci-fi author, whilst they will be forced to slog on in mind-numbing poverty for decades to come.

    Everyone loves good art. No one is especially interested in soulless techno-babble.

    I said it before, I'll say it again: Self-professed 'futurists' are really failed, wannabe sci-fi writers. But totally lacking in charm, wit, emotional IQ or even basic writing skills.

    Of course, those with the rationalist-fetish who claim to disdain sci-fi don't even understand it.

    As I've opined, rationality and art deal with different aspects of existence... rationality is about *what is* (true or false), art is about *what could be*.

    Art is the *output of reflection on the volitional (teleological) domain*. It deals with emotions, direct experiences and value systems. It DOES NOT deal with what is true or false. Literature was never (and should never be) used as a means of instruction for what is true/false.

    Only a humorless troll with the emotional intelligence of a teaspoon (ie a 'singularitarian' or a 'futurist') could fail to see this.

    --

    As to Yudkowsky, I don't worry about him any more, since I've far surpassed him now.

    Communication is the key. It's all about communication. It was always about communication. I only need produce a good parser and the rest is in the bag.

    Whether it's

    *physical communication* (ie virtual reality)

    *volitional communication* (ie art)

    or

    *logical/mathematical communication* (ie ontology and data modelling)

    reflection is communication is reflection is communication is reflection is communication is
    reflection is communication.......

    Reflection decision theory is communication theory....

    and sci-fi is king :D

  27. People in Robot Cult Rehab are easily as annoying as the people caught up in the full froth of Robot Cultism.

    >People in Robot Cult Rehab are easily as annoying as the people caught up in the full froth of Robot Cultism.

    True. Sorry. Those in rehab do sometimes suffer relapses ;) Escaping the 'addiction' to the 'Singularitarian' drug is hard.

    The point I was making is that I'm trying to maintain a strict separation between the domains of art and rationality. Art (in particular literature) is a way of exploring human aspects such as emotions and value systems. That's its function. Art says nothing about how reality actually is in true/false (rationalist) terms. So those who critique sci-fi on the basis that it fails to correspond to reality don't understand art.

    Let me assure you Dale, I want to put as much distance between myself and any 'Singularitarians' in my past, as possible. That's why I'm quick to declare that any sci-fi of mine says *nothing* about reality, and I'm *not* in the prediction business either.

    I repeat my opinion: Sci-fi is *not* in the prediction business and it does *not* make assertions about reality. It has *nothing* to do with science (at least not in terms of the *content* of science. It does perhaps have something to do with the scientific *attitude* or *culture*).

  29. >But as far as I know, no one of any independent (of
    SIAI) intellectual standing has corroborated any of Yudkowsky's
    ideas about AI, and in fact, he keeps digging himself bigger
    and bigger pits by going beyond AI (as difficult as that
    is in the first place) into the philosophy of ethics,
    in which he is laughably unqualified, a dabbler with as
    much sophistication as Ayn Rand, in his attempts to define
    "friendliness".

    # posted by jfehlinger : 5:43 PM

    It's interesting, Yudkowsky seems to have finally come around to *mathematical platonism*, something I've been advocating for years.

    http://plato.stanford.edu/entries/platonism/

    Yudkowsky's recent post in favor of platonism:

    http://www.overcomingbias.com/2008/01/is-reality-ugly.html#more

    ---

    Of course, it was only as recently as a few months ago that he was still *flip-flopping* on the math issue. This *flip-flopping* of his goes back to 1996 (he revamps his theories every couple of months).

    It would be hilarious (if it wasn't so sad) to see his sycophants have to change their own positions every couple of months to stay in synch with their guru. It must be quite depressing for them.

    Case in point: Jef Allbright - as recently as a couple of months ago he was still shrilly declaring that mathematical platonism was nonsense (parroting back the position of his guru at the time).

    Now that Yudkowsky has *flip-flopped* again and endorsed platonism, where does that leave Allbright? Has he too suddenly had a conversion to platonism now that his guru has made it 'officially true'? ;)

  30. Anonymous 4:18 AM

    Dale: There's nothing particularly bad about being ridiculous!

    (This avalanche of in part personal attacks on Yudkowsky makes me wish I could defend him more effectively.)

    To bring my thoughts to a more general level: I think it would be better to start a dialogue between the various factions. Ideally this should happen via a meeting in real life or at least in a telephone/podcast situation. Direct interactions such as these help to see that one's opponent doesn't quite gel with the caricatures that one tends to have about people with whom there's severe disagreement.

    Even Jamais Cascio talks with the SIAI:

    ---
    My observation of what the Singularity Institute does is that regardless of the nuances of how you define “Singularity,” or how aggressively you want to embrace or deny artificial intelligence, they have at their core a desire to be responsible. A desire to recognize both the risks and the benefits, and the actions that we as people do to ensure that those risks are reduced and those benefits are enhanced. I am all for that.
    ---

    FrF

  31. FrF wrote (quoting Jamais Cascio):

    > My observation of what the Singularity Institute does
    > is that regardless of [what else you might think of
    > them] they have at their core a desire to be responsible.

    "Half of the harm that is done in this world is due to people who
    want to feel important. They don't mean to do harm but the harm
    does not interest them. Or they do not see it, or they justify it
    because they are absorbed in the endless struggle to think
    well of themselves."

    -- T.S. Eliot, _The Cocktail Party_

    > I think it would be better to start a dialogue between the
    > various factions. Ideally this should happen via a meeting in
    > real life or at least in a telephone/podcast situation.
    > Direct interactions such as these help to see that one's opponent
    > doesn't quite gel with the caricatures that one tends to have
    > about people with whom there's severe disagreement.

    Or not. Sometimes, it takes "direct interaction" to realize in
    a direct way just how crazy somebody is.

  32. Anonymous 1:12 PM

    'Sometimes, it takes "direct interaction" to realize in
    a direct way just how crazy somebody is.'

    I should have seen that counterargument coming, Jim :-)

    Another interview from the SIAI site:

    http://www.singinst.org/media/interviews/jameshughes

    ---
    I run a think tank called the Institute for Ethics and Emerging Technologies, and one of our chief theoreticians is a very staunch and a very harsh critic of singularitarianism and transhumanism in general [...]
    ---


    I wonder about whom Mr. Hughes is talking here? (Just kidding!)

    FrF

  33. Dale,

    I'm moving to...er Eastern Siberia, so don't be surprised if I don't post for another few years.

    Meantime, I just pulled this bunch of erudite quotes out of my arse for readers to go on with.

    Could be quite apt for 'Singularitarians', 'futurists', 'transhumanists' etc.

    Enjoy. Cheers...



    "Pride is a powerful narcotic, but it doesn't do much for the auto-immune system."

    Stuart Stevens, Northern Exposure, Brains, Know-How, and Native Intelligence, 1990

    "When dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion, creatures bustling with prejudices and motivated by pride and vanity."

    Dale Carnegie


    "The smaller the mind the greater the conceit."

    Aesop (620 BC - 560 BC)

    "He who has a thousand friends has not a friend to spare,
    And he who has one enemy will meet him everywhere."

    Ali ibn-Abi-Talib (602 AD - 661 AD), A Hundred Sayings

    "He hasn't an enemy in the world - but all his friends hate him."

    Eddie Cantor (1892 - 1964)

    "The enemy is anybody who's going to get you killed, no matter which side he's on."

    Joseph Heller (1923 - 1999), Catch 22

    "Self-conceit may lead to self-destruction."

    Aesop (620 BC - 560 BC), The Frog and the Ox

    "The public is wonderfully tolerant. It forgives everything except genius."

    Oscar Wilde (1854 - 1900), The Critic as Artist, 1891

    "Pride sullies the noblest character."

    Claudianus

    "Genius might be described as a supreme capacity for getting its possessors into trouble of all kinds."

    Samuel Butler (1835 - 1902)

    "What is the first business of one who practices philosophy? To get rid of self-conceit. For it is impossible for anyone to begin to learn that which he thinks he already knows."

    Epictetus (55 AD - 135 AD), Discourses

    "Beware when the great God lets loose a thinker on this planet."

    Ralph Waldo Emerson (1803 - 1882)

    "The world tolerates conceit from those who are successful, but not from anybody else."

    John Blake

    "When they discover the center of the universe, a lot of people will be disappointed to discover they are not it."

    Bernard Bailey

  34. >Or not. Sometimes, it takes "direct interaction" to realize in
    a direct way just how crazy somebody is.

    # posted by jfehlinger : 8:08 AM


    I’m not sure we can say something that strong, jim. I just think, as you once pointed out, the ‘Singularity’ concept is catnip for geeks. It’s a sort of ‘intellectual cocaine’. Throw in a dollop of narcissism and a high IQ and it’ll drive the geek to raptures after they snort a little of the ‘Singularity’ drug.

    I think Yudkowsky and the SL4 Singularitarians are trying to live the fantasy of a ‘super hacker’ and a ‘programmer at arms’, taking on the universe and ‘winning’. He’s obsessed with ‘beating’ all the other researchers..

    But did they really think they could out-hack Marc Geddes - PodMaster?

    ---

    ‘It’s time that you surrendered, Podmaster. My forces control all of L1 space. We-“
    Pham’s voice held quiet certainty, with none of the bluster of Old Pham Trinli. Nau could imagine ordinary people gripped by that voice, led. But Thomas Nau was a pro himself. He had no trouble interrupting: “On the contrary, sir. I hold the only power that is worth noting.” ….

    ….

    “Perhaps your only mistake is that you have not fully understood the Podmaster ethos. You see, we Podmasters grew out of disaster. That is our inner strength, our edge….”

    …..

    Another part of his mind waited curiously for what Pham Nuwen would say. Would he cave in like an ordinary person, or did he have the true heart of a Podmaster?”


    ‘A Deepness In The Sky’, Vernor Vinge.
