Maybe I am a fool, but I would have thought anyone would agree with the following statements:

1. Industry should seek to optimise its manufacturing processes, thereby maximising the efficiency with which we handle resources and minimising waste and pollution, as far as is physically possible.

2. Medical science should strive to find treatments, cures and preventions for afflictions which are currently incurable, and should also strive to improve existing treatments and preventions in order to make them as effective as they can possibly be.

3. We should seek ways to make working with machines less frustrating, reducing the occasions on which machines fail to anticipate our intentions and therefore act in ways which impede us, and increasing and improving the ways in which machines collaborate with us on whatever project may be undertaken.
Surely ["Surely"? -- Dale], if you accept 1, 2 and 3 as the right things to do, you also have to ["have to"? -- Dale] accept ["accept" as what exactly? as in some sense real now? as logically possible? as plausible eventually? as plausible soon? as coherently imagined in some particular scenario on offer? -- Dale] A) molecular nanotechnology, B) indefinite lifespans and C) artificial general intelligence. After all, if you agree with 1, you have to agree that industry must seek to improve manufacturing processes until the point at which products are assembled to atomic precision [wait, why does accepting 1 entail the arrival at A or even the aspiration to so arrive? -- Dale]: the very definition of molecular manufacturing. If you agree with 2, you can hardly disagree with the notion that medical science should not rest until each and every way in which quality of life can be adversely affected (in medical terms, at least) is countered with an effective treatment or prevention. Somewhere along THAT road you MUST arrive at effective preventions for aging. ["MUST"? Really? -- Dale] And if you agree with 3, you have to accept that one day we will have technology blessed with minds that are the equal of human intelligence, simply because anything with LESS than human intelligence is not going to be as useful a teammate. [That you seem to think it would be useful for something to be possible hardly makes it inevitable, nor does it even make it coherent necessarily to use the word intelligence to describe complexities that have some things in common with intelligence but not other things, nor have you explained why usefulness really requires personhood, when surely there are occasions in which quite the opposite is the case. -- Dale]
Now, there may be reasons why we can never actually arrive at a point at which products are assembled with atomic precision, or every medical condition is preventable, or robots have brains capable of producing minds the equal of our own. [To say the least. -- Dale] If there are indeed physical reasons why the best we can do falls short of molecular nanotechnology, indefinite lifespans and artificial general intelligence, then obviously we just have to accept that it was a fool’s hope to dream we could ever achieve such goals. That, however, is not what I am arguing. I am not arguing that we COULD achieve A, B and C. [I wonder if you disagree with the obvious reality that many of your fellow futurologists do indeed and endlessly flog precisely these practical possibilities? Why aren't you arguing with them rather than with me, I wonder? -- Dale] Frankly, I do not know if it is possible or not at this point in time. [Well, let me just go on record to say, no, you won't be immortalized, no, you won't meet the Robot God, no, you won't be transported to a treasure cave with a swarm of nanobotic slaves to do your bidding. Sorry. It's hard to imagine what exactly could have led you to imagine these outcomes as "in doubt" in any sense, but what the hey. -- Dale] Instead, I am saying that IF we COULD, THEN we SHOULD. [Note that if all Robot Cultists were arguing in this mode -- in general, I mean, not just because they've been backed into a corner by somebody who actually knows what he's talking about but also took the time to take them seriously enough to point out the obvious absurdities of their discourse on its usual terms -- none of the interminable squabbles about how Robot Cultists are essentially scientists, indeed an avant-garde of sooper-scientists, would come into play at all, since this is a moral (or ethical) case rather than a scientific one. -- Dale] Now, as far as I can see, A, B and C form part of the transhuman agenda. [Oh, yes, that's for sure. -- Dale]

So the question this raises is: How does a person reject transhuman goals WITHOUT arguing that medical science should impose some arbitrary limit on its ability to treat ailments? How does a person reject molecular nanotechnology WITHOUT arguing that industry should produce more waste and pollution than is strictly necessary? How does one reject artificial general intelligence WITHOUT arguing that we should produce machines that are more frustrating to work with than they really need to be? [Well, the real question is what on earth would lead anybody to expect superlative outcomes in the first place? We don't know what all the limits of our technique will be, physical, ethical, political; indeed, not knowing such limits is itself one of the limits with which we grapple. But there is nothing in this imperfect knowledge that endorses the confusion of wish-fulfillment fantasies with conventional secular democratic and technoscientific progress, any more than this non-knowing endorses belief in a Creator God or an afterlife in paradise. It is always the extraordinary claim that demands the extraordinary evidence, and the affirmation of belief without evidence is never scientifically warranted. -- Dale]
I mean, surely anyone who went around saying ‘yeah we should make people accept a lower quality of life, pump X amount of pollution and waste into the environment and forever purchase machines that are dumber than is actually necessary’ would sound like an idiot, and possibly even evil. [How does the refusal to endorse unwarranted hyperbolizations constitute the refusal of actual progress in the real world? Robot Cultism is not the advocacy of but the palpable perversion of conventional secular democratic understandings of progress. -- Dale] How, though, can anyone reject transhumanism from an ethical or moral standpoint (again, the practical issues are another matter entirely) without arguing precisely that? [Superlativity is not in any remotely recognizable sense the standard or summit from which we measure actually-existing progressive commitment, it is a skewed and self-marginalizing witch's brew of hyperbolization and poeticization of selective scientific results and research in the service of marginal sub(cult)ural identification and wish-fulfillment fantasies of personal transcendence. -- Dale]
No, "Extropia," you won’t find magical ponies in conventional secular progressive values. Automation doesn’t spit out nano-santa treasure cave wish-fulfillment fantasies. Healthcare doesn’t spit out immortalization wish-fulfillment fantasies. Working on software and network security problems and user-friendliness doesn’t spit out superintelligent post-biological Robot God wish-fulfillment fantasies.
Nobody needs to join a Robot Cult to work on network security problems or healthcare problems or materials problems or renewable energy problems in the actual world, and, indeed, to join a Robot Cult is always a self-marginalization from consensus science and public policy devoted to this work in the real world.
You don’t get to piggyback your cultism on the consensus science that actually disdains you, nor on progressive causes that disdain you. You are not the voice of reasonable people, who in fact don’t give you the time of day; you are not a futurological avant-garde, so much as a marginal fandom of cranks who cannot distinguish science from science fiction.
What you think you want doesn’t make sense on its own terms, it isn’t plausible on the “technical” terms you think you prefer, it isn’t reasonable by any conventional measure of reasonableness, and it functions to indulge your personal irrationality while facilitating the ongoing irrationalization of public discourse on technodevelopmental questions at a time of disruptive scientific change in which sensible deliberation is desperately needed.