Advocates of Good Old Fashioned Artificial Intelligence (GOFAI) have been predicting that the arrival of intelligent computers is right around the corner more or less every year since the formation of computer science and information science as disciplines, from World War II to Deep Blue to Singularity U. These predictions have always been wrong, though their ritual reiteration remains as strong as ever.
The serial failure of intelligent computers to make their long awaited appearance on the scene has led many computer scientists and coders to focus their efforts instead on practical questions of computer security, reliability, user-friendliness, and so on. But there remain many GOFAI dead-enders who keep the faith and still imagine the real significance that attaches to the solution of problems with/in computation is that each advance is a stepping stone along the royal road to AI, a kind of burning bush offering up premonitory retroactive encouragement from The Future AI to its present-day acolytes.
In the clarifying extremity of superlative futurology we find techno-transcendentalists who are not only stubborn adherents of GOFAI in the face of its relentless failure, but who double down on their faith and amplify the customary insistence on the inevitable imminence of AI (all appearances to the contrary notwithstanding) and now declare no less inevitable the arrival of SUPER-intelligent artificial intelligence, insisting on the imminence of a history-shattering, possibly apocalyptic, probably paradisical, hopefully parental Robot God.
Rather than pay attention to (let alone learn the lessons of) the pesky failure and probable bankruptcy of the driving assumptions and aspirations of the GOFAI research program-cum-ideology, these techno-transcendentalists want us to treat with utmost seriousness the "existential threat" of the amplification of AI into a superintelligent AI in the wrong hands or with the wrong attitudes. I must say that I for one do not agree with Very Serious Robot Cultists at Oxford University like Nick Bostrom or at Google like Ray Kurzweil or celebrity tech CEOs like Elon Musk that the dumb belief in GOFAI becomes a smart belief rather than an even dumber one when it is amplified into belief in a GOD-AI, or that the useless interest in GOFAI becomes urgently useful rather than even more useless when it is amplified into worry about the existential threat of GOD-AI because it would be so terrible if it did come true. It would be terrible if Godzilla or Voldemort were real, but that is no reason to treat them as real or to treat as Very Serious those who want to talk about what existential threats they would pose if they were real when they are not (especially when there are real things to worry about).
The latest variation of the GOFAI via GOD-AI gambit draws on another theme beloved by superlative futurologists, the so-called Fermi Paradox -- the fact that there are so very many stars in the sky and yet no signs that we can see so far of intelligent life out there. Years ago, I proposed that the answer to the Fermi Paradox may simply be that we aren't invited to the party because so many humans are boring assholes. As evidence, consider that so many humans appear to be so flabbergastingly immodest and immature as to think it a "paradoxical" result to discover the Universe is not an infinitely faceted mirror reflecting back at us on its every face our own incarnations and exhibitions of intelligence. I for one don't find it particularly paradoxical to suppose life is comparatively rare in the universe, especially intelligent life, and more especially still the kind of intelligent life that would leave traces of a kind human beings here and now would discern as such, given how little we understand about the phenomena of our own lives and intelligence and given the astronomical distances involved. I actually think the use of the word "paradox" here probably indicates human idiocy and egotism more than anything else.
A recent article in Vice's Motherboard collects a handful of proponents of a "new view" on this question that proposes instead that the "dominant intelligence in the cosmos is probably artificial." The use of the word "probable" there may make you think that there is some kind of empirical inquiry afoot here, especially since all sorts of sciency paraphernalia surrounds the assertion, and its proponents are denominated "astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick." NASA and the Library of Congress are institutions that have some real heft, but let's just say that typing the word "transhumanist" into a search for any of those names may leave you wondering a bit about the robocultic company they keep.
But what I want to insist you notice is that the use of the term "probability" in these arguments is a logical and not an empirical one at all: What it depends on is the acceptance in advance of the truth of the premise of GOFAI via GOD-AI, a premise that is in fact far from obvious and that no one would sensibly take for granted. Indeed, I propose that like many arguments offered up by Robot Cultists in more mainstream pop-tech journalism, the real point of the piece is to propagandize for the Robot Cult by indulging in what appear to be harmless blue-sky speculations of science fictional conceits but which entertain as true and so functionally bolster what are actually irrational and usually pernicious articles of futurological faith.
The philosopher Susan Schneider (search "Susan Schneider transhumanist," go ahead, try it) is paraphrased in the article saying "when it comes to alien intelligence... by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology." This formulation buries the lede in my view, and quite deliberately so. That is to say, what is really interesting here -- one might actually say it is flabbergasting -- is the revelation of a string of techno-transcendental assumptions: [one] that technodevelopmental vicissitudes are not contingently sociopolitical but logically or teleologically determined; [two] that biology could be fundamentally transformed while remaining legible to the transformed (that's the work done by the reassuring phrase "their own"); [three] that jettisoning biological bodies for robot bodies and "uploading" our biological brains into "cyberspace" is not only possible but desirable (make no mistake about it, that is what she is talking about when she talks about "upgrading biology" -- by the way, the reason I scare-quote words like "upload" and "cyberspace" is because those are metaphors not engineering specs, and unpacking those metaphors exposes enough underlying confusion and fact-fudging that you may want to think twice about trusting your "biological upgrade" to folks who talk this way, even if they chirp colloquially at you that your immortal cyberangel soul-upload into Holodeck Heaven is just a "hop-skip away" from easy peasy radio technology); and [four] that terms like "upgrade," freighted as they are with a host of specific connotations derived from the deceptive hyperbolic parasitic culture of venture-capitalism and tech-talk, are the best way to characterize fraught fundamental changes in human lives to be brought about primarily by corporate-military incumbent-elites seeking parochial profits. Maybe you want to read that last bit again, eh?
Seth Shostak quotes from the same robocultic catechism a paragraph later: “As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI... At that point, soft, squishy brains become an outdated model.” Notice the same technological determinism. Notice that the invention of AI is then declared to be probable within a century -- and no actual reasons are offered up in support of this declaration, which is made in defiance of all evidence to the contrary. And then notice that suddenly we find ourselves once again in the moral universe of techno-transcendence: where Schneider assumed robot bodies and cyberspatial uploads would be "upgrades" (hop-skipping over the irksome questions whether such notions are even coherent or possible on her terms, whether a picture of you could be you, whether fetishized prosthetization would be enhancing to all possible ends or disabling to some we might come to want, or immortalizing when no prostheses are eternal, and so on), Shostak leaps to the ugly obverse face of the robocultic coin: "soft, squishy brains" are "outdated model[s]." Do you think of your incarnated self as a "model" on the showroom floor, let alone an outdated one? I do not. And refusing such characterizations is indispensable to resisting being treated as one. Maybe you want to read that last bit again, eh?
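It is worth noticing just how mechanical Shostak's back-of-the-envelope timeline really is -- making it mechanical only underscores how much work the unargued premises are doing. Here is a minimal sketch of the arithmetic, assuming Marconi-era radio circa 1900 as the starting point (the starting date is my illustrative assumption, not a figure from the article):

```python
# Shostak's claimed sequence: radio -> +50 years -> computers -> +50-100 years -> AI.
# The starting date below is an illustrative assumption, not a figure from the article.
RADIO_YEAR = 1900      # Marconi-era radio, roughly
TO_COMPUTERS = 50      # "within fifty years of computers"
TO_AI = (50, 100)      # "another fifty to a hundred years from inventing AI"

computers_year = RADIO_YEAR + TO_COMPUTERS
ai_window = (computers_year + TO_AI[0], computers_year + TO_AI[1])

print(computers_year)  # 1950
print(ai_window)       # (2000, 2050)
```

Taken literally, then, the formula implies AI should arrive between 2000 and 2050 -- which is to say its lower bound is already in the past, with no AI in sight. So much for the formula.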
“I believe the brain is inherently computational -- we already have computational theories that describe aspects of consciousness, including working memory and attention,” Schneider is quoted as saying in the article. "Given a computational brain, I don’t see any good argument that silicon, instead of carbon, can’t be an excellent medium for experience.” Now, I am quite happy to concede that phenomena enough like intelligence and consciousness for us to call them that might in principle take different forms from the ones exhibited by conscious and intelligent people (humans, and I would argue also some nonhuman animals) and be materialized differently than in the biological brains and bodies and historical struggles that presently incarnate them.
But conceding that logical possibility does not support in the least the assertion that non-biological intelligences are inevitable, that present human theories of intelligence tell us enough to guide us in assessing these possibilities, that human beings are on the road to coding such artificial intelligence, or that current work in computer theory or coding practice shows any sign at all of delivering anything remotely like artificial intelligence any time soon. Certainly there is no good reason to pretend the arrival of artificial intelligence (let alone godlike superintelligence) is so imminent that we should prioritize worrying about it over deliberation about actually real, actually urgent, actually ongoing problems like climate change, wealth concentration, exploited majorities, neglected diseases, abuse of women, arms proliferation, human trafficking, military and police violence.
What if the prior investment in false and facile "computational" metaphors of intelligence and consciousness is evidence of the poverty of the models employed by adherents of GOFAI and is among the problems yielding its serial failure? What if such "computational" frames are symptoms of a sociopathic hostility to actual animal intelligence or simply reveal ideological commitments to the predatory ideology of Silicon Valley's unsustainable skim-and-scam venture capitalism?
Although the proposal of "computational" consciousness is peddled here as a form of modesty, as a true taking-on of the alien otherness of alien intelligence in principle, what if these models of alien consciousness reflect most of all the alienation of their adherents -- the sociopathy of their view of their own superior computational intellects and their self-loathing of the frailties in that intellect's "atavistic" susceptibility to contingency, error, and failure -- rather than any embrace of the radical possibilities of difference?
It is no great surprise that the same desperate dead-enders who thought they could make the GOFAI lemon into GOD-AI lemonade would then go on to find evidence of the ubiquity of that GOD-AI in the complete lack of evidence of GOD-AI anywhere at all. What matters about the proposal of this "new view" on the Fermi Paradox is that it requires us to entertain as possible, so long as we are indulging the speculation at hand, the very notion of GOFAI that we otherwise have absolutely no reason to treat seriously at all.
Exposing the rhetorical shenanigans of faith-based futurologists is a service I am only too happy to render, of course, but I do want to point out that even if there are no good reasons to treat the superlative preoccupations of Robot Cultists seriously on their own terms (no, we don't have to worry about a mean Robot God eating the earth; no, we don't have to worry about clone armies or designer baby armies or human-animal hybrid armies taking over the earth; no, we don't have any reason to expect geo-engineers from Exxon-Mobil to profitably solve climate change for us or gengineers to profitably solve death and disease for us or nanogineers to profitably solve poverty for us) there may be very good reasons to take seriously the fact that futurological frames and figures are taken seriously indeed.
Quite apart from the fact that time spent on futurologists is time wasted in distractions from real problems, the greater danger may be that futurological formulations derange the terms of our deliberation on some of the real problems. Although the genetic and prosthetic interventions techno-triumphalists incessantly crow about have not enhanced or extended human lifespans in anything remotely like radical ways, the view that this enhancement and extension MUST be happening if it is being crowed about so incessantly has real world consequences, making consumers credulous about late-nite snake-oil salesmen in labcoats, making hospital administrators waste inordinate sums on costly gizmos and inflict ghastly violations in end-of-life care, rationalizing extensions of the retirement age for working majorities broken down by exploitation and neglect. Although the geo-engineering interventions techno-triumphalists incessantly crow about cannot be coherently characterized and seem to depend on the very funding and regulatory apparatuses the necessary failure of which is usually their justification, the view that such geo-engineering MUST be our "plan B" or our "last chance" provides extractive-industrial eco-criminals fresh new justifications to deny any efforts at real world education, organization, legislation to address environmental catastrophe. And the very same techno-deterministic accounts of history that techno-triumphalists depend on for their faith-based initiatives provided, in nations emerging from colonial occupation, the rationales justifying indebtedness to their former occupiers -- in the name of vast costly techno-utopian boondoggles like superdams and superhighways and skyscraper skylines -- followed by the imposition of austerity regimes that returned those nations to conditions of servitude.
Although I regard as nonsensical the prophetic utterances futurologists make about the arrival any time soon, or necessarily ever, of artificial intelligence in the world, I worry that there are many real world consequences of the ever more prevalent deployment of the ideology of artificial life and artificial intelligence by high-profile "technologists" in the popular press. I worry that the attribution of intelligence to smart cards and smart cars and smart phones, none of which exhibit anything like intelligence, confuses our sense of what intelligence actually is and risks denigrating the intelligence of the people with whom we share the world as peers. To fail to recognize the intelligence of humans risks the failure to recognize their humanity and the responsibilities demanded of us inhering in that humanity. Further, I worry that the faithful investment in the ideology of artificial intelligence rationalizes terrible decisions, justifies the outsourcing of human judgments to crappy software that corrects our spelling of words we know but it does not, recommends purchases and selects options for us in defiance of the complexities and dynamism of our taste, decides whether banks should find us credit-worthy whatever our human potential or states should find us target-worthy whatever our human rights.
Futurology rationalizes our practical treatment as robots through an indulgence in what appears to be abstract speculation about robots. The real question to ask of the Robot Cultists, and of the prevailing tech-culture that popularizes their fancies, is not how plausible their prophecies are but just what pathologies these prophecies symptomize and just what constituencies they benefit.
You guys, Mass Effect is not real. You won't get to control the Reapers or synthesize everyone.
Using Seth Shostak's formula, we find that we'll get AI in, gee, about 20 more years! Where have I heard that "in twenty years time" thing before? Oh yeah, every time a "futurist" opens their mouth.
"Using Seth Shostak's formula, we find that we'll get AI in, gee, about 20 more years!"
Ah, he does indeed. Tho' his version slyly seems to predict that the Robot God Singularity will occur in twenty years or twenty years... ago. About what you would expect from an argument that treats the absence of something as evidence of its ubiquity. For the True Believer, I hear, anything is possible and everything is a confirmation. Science!
I just always felt that those two endings were Pollyanna bullshit, that Shepard was being slowly indoctrinated, and I always choose to end the fuckers. We save ourselves; we do not need Reapers to do it.
Oy. Don't get me started on ME3!
What drew me to this blog is that I've been surprised that, increasingly often, I encounter people using "transhumanist" to describe their political views. And I find their political views are a mix of radical social criticism combined with futurist faith -- with all the weaknesses that this article implies.
More broadly, I encounter a fair number of activists and the like who pick up on some of these ideas.
In particular, this leads to softening of criticism of the motives of powerful Silicon Valley entrepreneurs, or even enthusiasm for them, which weakens their insight into political-economic inequities.
I'd be curious to know what you mean when you say that some people affirming the transhumanist label are doing "radical social criticism." Given my reputation I know that probably sounds like an invitation for me to attack you, but I actually don't mean it that way. I'm curious what people are finding in the superlative futurologies that isn't utterly reactionary -- in the past I have known well-meaning people who have tried to read transhumanism through queer or anarchist lenses, for example. These efforts have foundered in my view in historical, practical, and conceptual terms. I'm skeptical, and always critical, but truly interested to know what people are trying to see that is different from what I see in these discourses.
I tried for a while to be a transhumanist as well as a socialist, but I realised that the philosophical content of transhumanism either had to be radically changed or just dropped, and so would no longer be anywhere close to transhumanism. Instead I always ended up in the technoprogressive camp, which I feel has some of the (very little) good stuff of transhumanism (like morphological freedom and a general belief in the advancement of knowledge) with none of the deterministic, eschatological, really authoritarian content of transhumanism. I really hate it every time on io9 some people criticise the current socioeconomic conditions and then somehow just say that this will all be solved by (to steal a wonderful phrase) "sooper intelligence." Because apparently putting all our faith in a centralised computer system is different from putting our faith in a centralised government.
Come on, what's wrong with ME3! That ending was awesome :P. It absolutely did not ruin (or at least sour) one of the best game trilogies of all time. :D
Isn't AI coming just ten years after fusion power? I thought that was the general consensus prediction of the last fifty years.
I think a key conceptual problem with transhumanisms is that an utterly uninterrogated idea of "technology" pervades the whole discourse. They attend very little to the politics of habituation/de-familiarization and naturalization/de-naturalization that invest some techniques/artifacts (but not others, indeed probably not most others) with the force of the "technological." Quite a lot of the status quo gets smuggled in through this evasion, de-politicizing what could be otherwise, rationalizing incumbency. This is all the more difficult for the transhumanists to engage, since they are so invested in the self-image of an embrace of novelty, disruption, anti-nature, and so on. But what could be plainer these days than how much novelty is profitably repackaged out of the stale, how much disruption is just an apologia for the plutocratic dismantlement of public goods?

Transhumanists indulge what seems to me an utterly fetishistic discourse of technology, and a host of infantile conceits arrive in tow: failing to grasp the technical/performative articulation of every socially legible body, they fetishistically identify with cyborg bodies that appeal to wish-fulfillment fantasies they consume from commercials and Hollywood blockbusters; failing to grasp the collective/interdependent conditions out of which agency emerges, they grasp at prosthetic fetishes to barnacle or genetically enhance the delusive sociopathic liberal dream of rugged individualism in a cyborg shell (pretty much like every tragic ammosexual or mid-life crisis case does with his big gun or his sad sportscar). I have found technoprogressives untrustworthy progressives (I say this as the one who popularized that very label), making common cause with reactionaries at the drop of a hat, too willing to rationalize inequity and uncritical positions through appeals to eventual or naturalized progress, and so on.
I don't think these frailties are accidental or incidental, but tendencies arising out of the under-interrogated naturalized technological assumptions and techno-transcendental on which all superlative futurologies/ists ultimately depend.
I agree with everything you said, and I personally think the problem of the uninterrogated idea of technology is one they share with the people they most try to oppose, the bioconservatives. Both camps talk about technology without analysing it in a thorough and understandable way. I personally do not pay that much attention to what other so-called technoprogressives do, as I see it as one of several labels that seem useful for my own philosophy, especially from a technological "progress" standpoint. But I have to be honest that, with the exception of a few science fiction writers (especially modern ones), all my influence from technoprogressivism comes directly from you, so I hope to avoid certain alliances with reactionaries on some issues. Also, I find that even the ideas that might cross political boundaries are deeply different between the sides; take the debate a week ago about BIG. But I feel many times that many transhumanists are mostly people who want change but are very much afraid of it happening too fast. Also I find it not terribly shocking that transhumanism arrived very much at the end or collapse of the Soviet Union, when we still want changes but apparently "socialism" is bad. Also there has apparently been a complete renaming of what cyberpunk is. Gone is the idea that cyberpunk reflects capitalism (the plutocratic military complex) in a futuristic society and shows how technology is not saving us; cyberpunk has gone from being the idea that technology is bad to the transhumanist idea that it is good. Seriously, check out the quote about cyberpunk on tvtropes.
I devote a whole section to the mirroring/interdependence of transhumanist/bioconservative (I rename them superlative/supernative) rhetorics in the essay I published in Existenz: https://www.bu.edu/paideia/existenz/volumes/Vol.8-2Carrico.pdf
I had apparently already read that one, which means that I most likely just stole from you yet again. :P
There's nothing new under the sun and no author is an island; I'm just thrilled to be useful to a reader.
The examples I have in mind are mostly casual comments in various online forums -- so, not *doing* radical social criticism, just referring to it, and then saying something along the lines of, say, gender won't matter anymore when we upload our minds to the noosphere.
I've got a lot to say about the ending of ME3, but in brief: what remains most interesting about what happened is how the problem of the ending mirrors the problem of the response to player complaints about the ending. In the game, ultimately, the protagonist confronts the ultimate antagonist, and the antagonist presents the only solutions it believes are possible to what it defines as the problem; the protagonist is reduced to choosing among the options the antagonist offers. Outside the game, the lead writers, backed up by some game critics, insisted that only an ending imposed in such a way would have artistic integrity, and that modifying the ending in response to player demands would undermine that integrity. The common element is an insistence that there be an authority which does not answer to the player.
That was a sharp contrast both with how the Mass Effect series had been described up to that point, as all about a narrative determined by the player's choices, and with a commonly claimed goal of game design, improving interactivity.
"...not *doing* radical social criticism, just referring to it, and then saying something along the lines of, say, gender won't matter anymore when we upload our minds to the noosphere."
Not only is this not doing radical social criticism, but it seems to me pretty explicitly, straightforwardly reactionary -- plutocrats always naturalize their hierarchies as meritocracies, right? The whole uploading schtick is obviously a denigration of the materiality of the body, and it is always of course the white male straight cis body that can best disavow its materiality, because its materiality isn't in question or under threat, right? It can be a mark more of privilege than perceptiveness to call into question that which won't ever be in question for you anyway.
But what has always cracked me up is that all information is instantiated on a material carrier, so even on their own terms the spiritualization of digi-info souls is hard to square with the reductionist scientism these folks tend to congratulate themselves over -- not that that would be anything to be proud of even if they managed to consistently be dumb in that particular way.
For more on Mass Effect you'll have to hope my partner Eric or some of my other readers step in. Eric's the gamer; I watch blu-ray marathons under a blanket, immobilized by our heat-seeking cat.
Honestly, I continued on Mass Effect because it was kind of fun on a blog about philosophy, social criticism, etc. In short, I agree with your analysis of the ending. We could spend the next couple of days discussing the ending more in depth, but I feel we are on roughly the same page. Personally, since I cannot get a new, more appropriate (better) ending, I like the fan theory that Shepard is being slowly indoctrinated, and in the end it becomes a kind of subversion of the games and of RPGs in general: your only real choice is destroying them, since there are no shortcuts in life and sometimes you have to give up the dream to live in reality.