Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Sunday, July 26, 2009
The Robot Cultists Have Issues
Upgraded and adapted from the Moot, 21-year-old Jason assures me "I'm not a transhumanist, but" and then proceeds to chastise me as only a transhumanist would for failing to grant that the transhumanists, singularitarians, techno-immortalists, and such are, after all, bringing important "issues" to the attention of the world, while also wagging his finger at my derisive dismissal of the arrival of the superintelligent Robot God, given how shattering that arrival would no doubt be were it to, you know, happen. He assures me that he doesn't want to be a "rube" of some Robot Cult, but that in his considered opinion these Robot Cultists have some important things to say, and big things are happening after all, and science tends to confound our conservative assumptions, so on and so forth in the usual manner.
Now, the first thing to say is that the desire to live forever in a nanoslavebotic treasure cave under the watchful care of a post-biological superintelligent Robot God is not an "issue" that needs to be brought to the attention of anybody at all, except, possibly, to be blunt, a clinical psychotherapist.
This is not to deny that there are actually existing technical and policy issues (questions of cost, risk, access, oversight, education, consent) arising out of non-normalizing genetic, prosthetic, and cognitive medicine, as well as actually existing software and network security and functionality issues arising out of the brittleness and bloat of legacy-coding, infrastructural limits, not to mention the malice of criminals, as well as actually existing materials science issues arising out of the nanoscale and otherwise, out of biochemistry, and so on.
But the hyperbolic framings of superlative futurological discourse have never once contributed the least bit of sense to serious deliberation about these sorts of issues: "transhumanism," "singularitarianism," and "techno-immortalism" contribute nothing of substance to the discussion of a single one of the actual problems of technique, funding, regulation, access, education, and risk connected with actually-existing technodevelopmental change in the actual world.
Futurology is not a scientific discourse. It is a cultural discourse -- and at its superlative extremity, it is often an explicitly sub(cult)ural discourse -- responding broadly to the distress of actual and imagined disruptive technoscientific change.
It is not that I think there is not disruptive technoscientific change afoot in the world, it is that I think superlative futurologists don't know what the hell they are talking about in the most fundamental imaginable ways, with the consequence that when they are the ones who manage to frame the quandaries and aspirations provoked by such change they tend to make everybody a bit less capable of talking sense and contributing to technodevelopmental progress and democratization at the worst possible time.
When I say that futurologists don't know what the hell they are talking about, I mean by this to point out, for example, that I think "Jason" would do well to think much more deeply about just what he means by "intelligence" and its actual social manifestations and biological incarnations in the world before he starts deploying reductive notions of "smartness" that are often quite curiously indifferent to these realities, thereupon leaping off into busily calculating the Robot God Odds (Robot God in twenty years? Fifty? Seventy? Two hundred? click click click click busy busy busy) and then drawing out all of the usual "inevitable" consequences that presumably follow from these calculations.
He might think more deeply about the derangements introduced into deliberation about user-friendly software and network security by the pretense that there is something apart from computer security and efficacy called "artificial intelligence" that demands special serious consideration, as he might think more deeply about the derangements introduced into healthcare deliberation by the pretense that there is something apart from healthcare called "anti-aging" or "immortalization" that demands special separate serious consideration, or as he might think more deeply about the derangements introduced into deliberation about nanoscale biochemistry by the pretense that there is something apart from biochemistry called "nanofactories" or "Drexlerian nanotechnology" that demands special separate serious consideration.
In every case, superlative futurologists are deranging the terms of actually serious technodevelopmental discourse, tapping into hyperbolic fears of apocalypse and hyperbolic fantasies of transcendence, tapping into ancient narrative figures and frames of revolts against contingency and finitude, and all to no good purpose, apart, I suppose, from the desire to attract serious attention to themselves that they would not otherwise manage as well as the desire to indulge in irrational projects of self-congratulation and self-reassurance in the face of the anxieties of disruptive technoscientific change.
Of course, there is a real sense in which the vanishingly small and ridiculous minority of Robot Cultists are not so much deranging technodevelopmental deliberation themselves as symptomizing the derangement in more prevailing developmental and technocratic discourses (which are already suffused with reductionism, elitism, parochialism, greed, anxiety, hyperbole), but in their own -- rather clarifyingly -- extreme and marginalized form. But it is also true that there are a surprising number of well-heeled and well-established figures who are connected to the most flabbergasting extremities of Robot Cultism, seduced, no doubt, by the deep structural continuities between incumbent interests and superlative discourses, and also that mass-media outlets eager for attention-grabbing hyperbole and ignorant of actual science are always happy to give a wider hearing to hysterical narratives like those of the Robot Cultists than they are to sensible scientific ones.
Now, when I was 21 years old I believed and said any number of the most foolish things imaginable (we'll leave to the side the foolishness of my 44-year-old self), and so I find it easy to forgive Jason these youthful misplaced zealotries. When he declares that he does not think himself, nor does he wish to be, a "rube" of some Robot Cult, I quite believe him. My recommendation to Jason and others like him is that he read more deeply into the scientific fields that are indispensable to the technodevelopmental outcomes that preoccupy his attention. This will require that he read beyond the blogs and popular magazines and papers of the charmed circle of futurological True Belief. He will confront soon enough the complexities, perplexities, qualifications, and research/policy dynamisms that instantly displace the facile formulations of the Robot Cultists. Few who study biology or chemistry or policy (or even philosophy) with any diligence or success are likely to remain superlative for long.
I trust that wider reading, deeper study, and a more diverse acquaintance will insulate Jason from becoming, or at any rate long remaining, a rube of the Robot Cultists. There are plenty of shared problems in the world that demand the attention and effort of serious, reasonable, well-meaning, responsible people, peer to peer, and it is always unfortunate to lose, for however long, a partner in that collaboration to the pathologies of True Belief, whether religious, scientistic, moralizing, or otherwise reactionary.
3 comments:
Oh I think it pretty likely there will be at least 20 years of increase in average lifespans by the time I am 75, and I will be able to afford none of it.
Yes, anyone who has even an inch of subconscious urge to pay his way into those 20+ years in the full knowledge others will not - needs his (her) head on the block, Logan's Run style.
Whatever sweet or bitter we get, I'd like everyone to share the fruits in full, including his eminence, the pope of dithyrambliness, Dale Quixote.
Let's look at your first sentence.
Responding to the assertion before the conjunction:
I don't agree that you know enough to be quite so glib in your prediction about there being "at least 20 years of increase in average lifespans by the time [you] are 75."
Responding to the assertion after the conjunction:
I do believe that there are far too many people, who may well include you among their number, who don't have access to actually-available healthcare here and now, and that this is both profoundly irrational and terribly unjust, and that it can and should be addressed by democratically-minded citizens.
There is no need to dwell in the hyperbole of the first assertion to arrive at the substance of the second assertion. And, indeed, given that the first assertion (whatever your "confidence" in it) is considerably more problematic than the second, to attach the first to the second or, more foolishly still, to focus on the first over the second, can always only have the consequence of distracting or deranging sensible discussion of the second.
This is ironic, inasmuch as even if your futurological hunch, for whatever it's worth, were indeed to find its way to slow fruition, it would be precisely because already-possible healthcare is already inaccessible to some that your hoped-for healthcare might still be inaccessible to you.
The process of funding, research, regulation, publication, education, and implementation through which techniques powerful enough to render an average twenty-year increase in healthy lifespan possible for humanity (were they actually available via healthcare administration) by the time you are 75 years old is a process taking place here and now and in a series of heres-and-nows to come, not one of which is beholden to some glossy futurological brochure dreamed up in the 1990s under the influence of Eric Drexler, Ray Kurzweil, or Aubrey de Grey.
There is a strange bait-and-switch that futurologists like to indulge in: If -- if -- if some hyperbolic imagined outcome were to arrive -- actually-intelligent humanoid robot, desktop nanofactories, nonhuman animals endowed with speech, bioengineered genocidal-racist plagues, clone armies, handheld nukes, whatever -- then wouldn't a sensible person, or morally upright person, or person of democratic sentiments prefer this outcome to that one, and so on? This sort of discussion can be entertaining and even illuminating to a point, but superlative futurologists seem especially prone to the curious idea that such fantastic speculations are the most urgently necessary ethical dilemmas that beset us, the only deliberations worthy of sustained interest, the best topics through which to determine how progressive minded or how morally conscientious a person actually is here and now -- and all this despite the actual unreality of their subjects and the actual urgency of real problems.
I don't think it matters very much whether or not you think people in general should have equitable access to sooper-longevity pills, to shiny robot bodies, to quality time in the brothels of the Holodeck, to a nanotech genie-in-a-bottle, or what have you. I don't think these fantastic quandaries represent the place where the rubber hits the road when what we want to know is whether or not a person can be trusted here and now as an ally in the fraught ongoing struggle for equity in diversity, for democratization, for investments in our already vulnerable fellow-citizens and the emerging generation.
To be honest, I think these futurological gestures really function to invest the insubstantial wish-fulfillment fantasies of futurologists with the urgent substance of present moral dilemmas precisely as a way of making the dream feel more real. The substance of especially the sub(cult)ural varieties of futurological politics (the so-called transhumanists, extropians, singularitarians, techno-immortalists, and so on) is a disavowal of the plural present for an idealized future, a dis-identification with one's peers for an identification with an idealized post-human other. The substantiation of both idealizations always depends on the substance of the actually-real present, and hence involves much mischief in the way of extrapolations, amplifications, surrogacy, and allegory to "find the future" in that detested present.
All of this would be more harmless than not -- apart from the distressed unfortunates who indulge in it, of course -- were it not for the unfortunate affinity these rhetorical gestures have for the hyperbole of corporate advertising, of militarist scenario-building, and the sensationalism of broadcast media formations, which renders what would otherwise be a fairly palpable self-marginalizing pathological discourse enormously susceptible to abuse in more prevailing public discourse.