Advocates of Good Old Fashioned Artificial Intelligence (GOFAI) have been predicting that the arrival of intelligent computers is right around the corner more or less every year since the formation of computer science and information science as disciplines, from World War II to Deep Blue to Singularity U. These predictions have always been wrong, though their ritual reiteration remains as strong as ever.
The serial failure of intelligent computers to make their long-awaited appearance on the scene has led many computer scientists and coders to focus their efforts instead on practical questions of computer security, reliability, user-friendliness, and so on. But there remain many GOFAI dead-enders who keep the faith and still imagine that the real significance attaching to the solution of problems with/in computation is that each advance is a stepping stone along the royal road to AI, a kind of burning bush offering up premonitory retroactive encouragement from The Future AI to its present-day acolytes.
In the clarifying extremity of superlative futurology we find techno-transcendentalists who are not only stubborn adherents of GOFAI in the face of its relentless failure, but who double down on their faith and amplify the customary insistence on the inevitable imminence of AI (all appearances to the contrary notwithstanding) and now declare no less inevitable the arrival of SUPER-intelligent artificial intelligence, insisting on the imminence of a history-shattering, possibly apocalyptic, probably paradisiacal, hopefully parental Robot God.
Rather than pay attention to (let alone learn the lessons of) the pesky failure and probable bankruptcy of the driving assumptions and aspirations of the GOFAI research program-cum-ideology, these techno-transcendentalists want us to treat with utmost seriousness the "existential threat" of the amplification of AI into a superintelligent AI in the wrong hands or with the wrong attitudes. I must say that I for one do not agree with Very Serious Robot Cultists at Oxford University like Nick Bostrom, or at Google like Ray Kurzweil, or celebrity tech CEOs like Elon Musk, that the dumb belief in GOFAI becomes a smart belief rather than an even dumber one when it is amplified into belief in a GOD-AI, or that the useless interest in GOFAI becomes urgently useful rather than even more useless when it is amplified into worry about the existential threat of GOD-AI because it would be so terrible if it did come true. It would be terrible if Godzilla or Voldemort were real, but that is no reason to treat them as real or to treat as Very Serious those who want to talk about what existential threats they would pose if they were real when they are not (especially when there are real things to worry about).
The latest variation of the GOFAI via GOD-AI gambit draws on another theme beloved by superlative futurologists, the so-called Fermi Paradox -- the fact that there are so very many stars in the sky and yet no signs that we can see so far of intelligent life out there. Years ago, I proposed an answer in a Futurological Brickbat:

"The answer to the Fermi Paradox may simply be that we aren't invited to the party because so many humans are boring assholes. As evidence, consider that so many humans appear to be so flabbergastingly immodest and immature as to think it a 'paradoxical' result to discover the Universe is not an infinitely faceted mirror reflecting back at us on its every face our own incarnations and exhibitions of intelligence."
I for one don't find it particularly paradoxical to suppose life is comparatively rare in the universe, especially intelligent life, and more especially still the kind of intelligent life that would leave traces of a kind that human beings here and now would discern as such, given how little we understand about the phenomena of our own lives and intelligence and given the astronomical distances involved. As the Futurological Brickbat quoted above implies, I actually think the use of the word "paradox" here probably indicates human idiocy and egotism more than anything else.
A recent article in Vice's Motherboard collects a handful of proponents of a "new view" on this question that proposes instead that the "dominant intelligence in the cosmos is probably artificial." The use of the word "probably" there may make you think that there is some kind of empirical inquiry afoot here, especially since all sorts of sciency paraphernalia surrounds the assertion, and its proponents are denominated "astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick." NASA and the Library of Congress are institutions that have some real heft, but let's just say that typing the word "transhumanist" into a search for any of those names may leave you wondering a bit about the robocultic company they keep.
But what I want to insist you notice is that the use of the term "probability" in these arguments is a logical and not an empirical one at all: what it depends on is accepting in advance the truth of the premise of GOFAI via GOD-AI, a premise that is in fact far from obvious and that nobody would sensibly take for granted. Indeed, I propose that, like many arguments offered up by Robot Cultists in more mainstream pop-tech journalism, the real point of the piece is to propagandize for the Robot Cult by indulging in what appear to be harmless blue-sky speculations on science fictional conceits but which entertain as true, and so functionally bolster, what are actually irrational and usually pernicious articles of futurological faith.
The philosopher Susan Schneider (search "Susan Schneider transhumanist," go ahead, try it) is paraphrased in the article saying "when it comes to alien intelligence... by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology." This formulation buries the lede in my view, and quite deliberately so. That is to say, what is really interesting here -- one might actually say it is flabbergasting -- is the revelation of a string of techno-transcendental assumptions: [one] that technodevelopmental vicissitudes are not contingently sociopolitical but logically or teleologically determined; [two] that biology could be fundamentally transformed while remaining legible to the transformed (that's the work done by the reassuring phrase "their own"); [three] that jettisoning biological bodies for robot bodies and "uploading" our biological brains into "cyberspace" is not only possible but desirable (make no mistake about it, that is what she is talking about when she talks about "upgrading biology" -- by the way, the reason I scare-quote words like "upload" and "cyberspace" is that those are metaphors, not engineering specs, and unpacking those metaphors exposes enough underlying confusion and fact-fudging that you may want to think twice about trusting your "biological upgrade" to folks who talk this way, even if they chirp colloquially at you that your immortal cyberangel soul-upload into Holodeck Heaven is just a "hop-skip away" from easy peasy radio technology); and [four] that terms like "upgrade," freighted as they are with a host of specific connotations derived from the deceptive, hyperbolic, parasitic culture of venture capitalism and tech-talk, are the best way to characterize fraught fundamental changes in human lives to be brought about primarily by corporate-military incumbent-elites seeking parochial profits. Maybe you want to read that last bit again, eh?
Seth Shostak quotes from the same robocultic catechism a paragraph later: “As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI... At that point, soft, squishy brains become an outdated model.” Notice the same technological determinism. Notice that the invention of AI is then declared to be probable within a century -- no actual reasons are offered up in support of this declaration, and it is made in defiance of all evidence to the contrary. And then notice that suddenly we find ourselves once again in the moral universe of techno-transcendence: where Schneider assumed robot bodies and cyberspatial uploads would be "upgrades" (hop-skipping over the irksome questions whether such notions are even coherent or possible on her terms, whether a picture of you could be you, whether fetishized prosthetization would be enhancing to all possible ends or disabling to some we might come to want, or immortalizing when no prostheses are eternal, and so on), Shostak leaps to the ugly obverse face of the robocultic coin: "soft, squishy brains" are "outdated model[s]." Do you think of your incarnated self as a "model" on the showroom floor, let alone an outdated one? I do not. And refusing such characterizations is indispensable to resisting being treated as one. Maybe you want to read that last bit again, eh?
“I believe the brain is inherently computational -- we already have computational theories that describe aspects of consciousness, including working memory and attention,” Schneider is quoted as saying in the article. “Given a computational brain, I don’t see any good argument that silicon, instead of carbon, can’t be an excellent medium for experience.” Now, I am quite happy to concede that phenomena enough like intelligence and consciousness for us to call them that might in principle take different forms from the ones exhibited by conscious and intelligent people (human animals and, I would argue, also some nonhuman animals) and be materialized differently than in the biological brains and bodies and historical struggles that presently incarnate them.
But conceding that logical possibility does not support in the least the assertion that non-biological intelligences are inevitable, that present human theories of intelligence tell us enough to guide us in assessing these possibilities, that human beings are on the road to coding such artificial intelligence, or that current work in computer theory or coding practice shows any sign at all of delivering anything remotely like artificial intelligence any time soon. Certainly there is no good reason to pretend the arrival of artificial intelligence (let alone godlike superintelligence) is so imminent that we should prioritize worrying about it over deliberation about actually real, actually urgent, actually ongoing problems like climate change, wealth concentration, exploited majorities, neglected diseases, abuse of women, arms proliferation, human trafficking, military and police violence.
What if the prior investment in false and facile "computational" metaphors of intelligence and consciousness is evidence of the poverty of the models employed by adherents of GOFAI and is among the problems yielding its serial failure? What if such "computational" frames are symptoms of a sociopathic hostility to actual animal intelligence, or simply reveal ideological commitments to the predatory ideology of Silicon Valley's unsustainable skim-and-scam venture capitalism?
Although the proposal of "computational" consciousness is peddled here as a form of modesty, as a true taking-on of the alien otherness of alien intelligence in principle, what if these models of alien consciousness reflect most of all the alienation of their adherents -- the sociopathy of their view of their own superior computational intellects and their self-loathing of the frailties in those intellects' "atavistic" susceptibility to contingency, error, and failure -- rather than any embrace of the radical possibilities of difference?
It is no great surprise that the same desperate dead-enders who thought they could make the GOFAI lemon into GOD-AI lemonade would then go on to find evidence of the ubiquity of that GOD-AI in the complete lack of evidence of GOD-AI anywhere at all. What matters about the proposal of this "new view" on the Fermi Paradox is that it requires us to entertain as possible, so long as we are indulging the speculation at hand, the very notion of GOFAI that we otherwise have absolutely no reason to treat seriously at all.
Exposing the rhetorical shenanigans of faith-based futurologists is a service I am only too happy to render, of course, but I do want to point out that even if there are no good reasons to treat the superlative preoccupations of Robot Cultists seriously on their own terms (no, we don't have to worry about a mean Robot God eating the earth; no, we don't have to worry about clone armies or designer baby armies or human-animal hybrid armies taking over the earth; no, we don't have any reason to expect geo-engineers from Exxon-Mobil to profitably solve climate change for us or gengineers to profitably solve death and disease for us or nanogineers to profitably solve poverty for us), there may be very good reasons to take seriously the fact that futurological frames and figures are taken seriously indeed.
Quite apart from the fact that time spent on futurologists is time wasted in distractions from real problems, the greater danger may be that futurological formulations derange the terms of our deliberation on some of the real problems. Although the genetic and prosthetic interventions techno-triumphalists incessantly crow about have not enhanced or extended human lifespans in anything remotely like radical ways, the view that this enhancement and extension MUST be happening if it is being crowed about so incessantly has real world consequences: making consumers credulous about late-nite snake-oil salesmen in labcoats, making hospital administrators waste inordinate sums on costly gizmos and ghastly violations in end-of-life care, rationalizing extensions of the retirement age for working majorities broken down by exploitation and neglect. Although the geo-engineering interventions techno-triumphalists incessantly crow about cannot be coherently characterized and seem to depend on the very funding and regulatory apparatuses the necessary failure of which is usually their justification, the view that such geo-engineering MUST be our "plan B" or our "last chance" provides extractive-industrial eco-criminals fresh new justifications to deny any efforts at real world education, organization, and legislation to address environmental catastrophe. And the very same techno-deterministic accounts of history techno-triumphalists depend on for their faith-based initiatives provided the rationales that justified, in nations emerging from colonial occupation, indebtedness to their former occupiers -- in the name of vast, costly techno-utopian boondoggles like superdams and superhighways and skyscraper skylines -- and then the imposition of austerity regimes that returned those nations to conditions of servitude.
Although I regard as nonsensical the prophetic utterances futurologists make about the arrival any time soon, or necessarily ever, of artificial intelligence in the world, I worry that there are many real world consequences of the ever more prevalent deployment of the ideology of artificial life and artificial intelligence by high-profile "technologists" in the popular press. I worry that the attribution of intelligence to smart cards and smart cars and smart phones, none of which exhibit anything like intelligence, confuses our sense of what intelligence actually is and risks denigrating the intelligence of the people with whom we share the world as peers. To fail to recognize the intelligence of humans risks the failure to recognize their humanity and the responsibilities demanded of us that inhere in that humanity. Further, I worry that the faithful investment in the ideology of artificial intelligence rationalizes terrible decisions: it justifies the outsourcing of human judgments to crappy software that "corrects" our spelling of words we know but it does not, recommends purchases and selects options for us in defiance of the complexities and dynamism of our taste, and decides whether banks should find us credit-worthy whatever our human potential or states should find us target-worthy whatever our human rights.
Futurology rationalizes our practical treatment as robots through an indulgence in what appears to be abstract speculation about robots. The real question to ask of the Robot Cultists, and of the prevailing tech-culture that popularizes their fancies, is not how plausible their prophecies are but just what pathologies these prophecies symptomize and just what constituencies they benefit.