[T]he notion of AI is so ingrained in popular culture (and has been for so many years -- who doesn't know about R2D2 and C3PO these days, even if fewer remember HAL9000 or Robbie, and fewer still have ever heard about R. Daneel Olivaw or Adam Link?) that it's a cliche. So much so that it's hard to imagine anybody in the 21st century (even people over 60, or people in third-world countries who have seen American movies) thinking of AI as "weird" (as in Twilight Zone or X-Files weird). Everybody understands the cinematic convention of the talking metal man (or the talking plastic android, or the talking computer).
Klaatu barada nikto. If I only had a heart. You are the Kirk, the creator. Just what do you think you're doing, Dave? I have such a bad case of dust contamination. Danger, Will Robinson! There is another system!
The frustration (mine and, I presume, Dale's) is more complicated than that. It has to do with the narrowness and implausibility of an ideology or quasi-religion that Jaron Lanier has labeled "cybernetic totalism", q.v. It has to do with the putative paradisiacal or apocalyptic consequences of AI demanding, D*E*M*A*N*D*I*N*G immediate attention, resources, and backgrounding of all other concerns. It has to do with AI as a "MacGuffin" for cultish group formation and guru-ish proclamations of certainty about where the world is headed. It has to do with the narrow and wrong-headed perpetuation of certain quite outmoded stereotypes, prejudices, and assumptions about what an AI would be like and how it would work, and about the nature of "intelligence" itself -- assumptions which have deep philosophical and political roots and implications.
I agree with every word Jim said.
For relevant Jaron Lanier, everybody should read, or re-read, his Half A Manifesto, which is the more or less canonical delineation of the "Cybernetic Totalism" Critique, as well as the pithy and provocative Lanier's Laws, and the down-to-earth discussion of Artificial Stupidity in his Salon interview. I'm not endorsing everything he says, but I cheerfully and gratefully affirm that I am indebted to much of it.
Lanier often points to the terrible software design decisions that arise from programmer commitments to the larger religious "worldview" of Cybernetic Totalism. In his comment above, Jim points to the bad political decisions, the skewed priorities, the false frames, the pointless distractions, the odd emotional investments that arise from and are nourished by more general technocentric commitments to the religious worldview of Superlative Technology Discourse in its many variations (Singularity, Nanosanta, Technological Immortalism, and so on).
I know of people caught up in the Singularitarian variation of Superlative Technology Discourse (the variation I like to poke friendly fun at as "The Robot Cultists") who will earnestly declare as the very sign of their Seriousness that we must direct our attention away from the actually-existing problems of climate change and weapons proliferation and over-urbanization (which the urgent scholarship of Mike Davis, among others, has demonstrated to incubate global pandemics, install mass precarity, support vast rights violations, provoke social instability, and, I would add, inspire and invigorate compensatory mass fundamentalist social and political movements) to focus instead on the completely made-up "real" problem of the imminent arrival of a nonbiological entitative superintelligent being.
Not only is this just flabbergastingly awful and ugly and stupid in its own right, let's be honest, but it also plays into any number of already-existing neoliberal and neoconservative agendas (as I've rehearsed here on Amor Mundi so depressingly many times at this point: in its tendency to endorse anti-democratic technocratic elitism, in its tendency to reductive instrumentalizing formulations of human interest, in its reliance on "free market" and "spontaneist" formulations of progress and public interest, in its fixation on corporations as the principal actors of history and "development," in its uncritical acceptance of the terms of corporate-militarist competitive futurism, and so on), all the while it invigorates rhetorical frames that derange our capacity to talk sensibly about the actual politics of viral and recursive software, and the regulation and compensation of security and creativity facilitated by digital networks, and so on.
The repudiation of these facile worldviews (to which I am willing to extend the same tolerance that I am to any essentially religious worldview, but not one bit more) need have nothing in the least to do with the question of whether one takes seriously the actual problems and promises inhering in emerging and palpably upcoming planetary technodevelopmental social struggle. Repudiating Superlative formulations, it seems to me, one is not less but more and better able to grasp the problems and promises of especially the legally and developmentally inter-implicated emerging NBIC technologies, so-called: Nanotechnologies (materials and techniques involving relatively controlled manipulations at the nanoscale, whether replicative or not), Biotechnologies (especially emerging biomedical interventions: genetic, prosthetic, modification and longevity medicine), Information Technologies (digital networked information and communication technologies, and social software), and Cognitive Technologies (prosthetic and neuropharmacological modifications of mood, memory, and perception, human-network interfaces, some intrusive marketing, pedagogical, and surveillance techniques).
Notice that I spoke just now of the "inter-implication" rather than the "convergence" of NBIC techs. I have come to realize that even that modest metaphorical conjuration of a technodevelopmental convergence (a ready-to-hand phrase I've used myself a hundred times) feeds the iconography of Superlativity and Singularity even as it feeds on the available iconography of Manifest Destiny and Natural Progress (that is to say, progress conceived as a wave some lucky people get to ride, rather than progress as a great democratic struggle and collective work). Even knowing full well that technodevelopment isn't the sort of thing that monolithically "does" anything at all, accumulating, dispersing, accelerating, converging, transcending, or what have you -- I still glibly dropped that "NBIC convergence" phrase without thinking, really, without thinking on what problematic premises and figures its intuitive plausibility and rhetorical effectiveness ultimately depended.
It is breathtaking how easy, how unconscious, how ubiquitous the temptations to and expressions of teleology, of transcendence, of apocalypse, of Superlativity seem to be when talk turns to the ongoing technodevelopmental derangement of human agency, individual and collective. This is exactly what we should expect, I suppose. It is a temptation that speaks to the most basic fears and fantasies of agency, of free and creative beings who understand their abiding vulnerability.
To the extent that we are trying to be technoprogressive in our politics -- that is to say, to the extent that our goal is to democratize planetary technodevelopmental social struggle, to distribute the costs, risks, and benefits of technoscientific change fairly to all the stakeholders to that change, and to institute and maintain a scene of legitimate, legible, planetary, informed and nonduressed consent to prosthetic and multicultural practices of self-creation -- then it seems to me we are well rid of the deranging seductions of Superlativity, reductionism, and elitism that seem to articulate so much technocentricity in general. Or, at the very least, we should encourage those whose own private perfections demand indulgence in such superlativity, reductionism, and elitism to privatize, to personalize, to aestheticize the particular pleasures and definitive limits that this indulgence enables them (rather than to confuse these aesthetic or moral ends with instrumental or ethical ends, or, worse, attempt to commandeer the latter in the service of the former as fundamentalisms tend, hysterically, to do). For those who are interested in the topic, I offered up a preliminary and imperfect sketch of these different ends and their articulation in a properly democratic technoethics in a piece called Technoethical Pluralism.
A world of networked planetary multiculture that can avail itself of world-destroying technique demands this secular democratic political compromise, or else it can scarcely avoid destroying itself. In the terms that edify our Singularitarian friends far more than they edify me, I want to say that as an obfuscation of technodevelopmental policy discourse, as a distraction from urgent actually-existing priorities, and as an apologia for catastrophic neoliberal rationalizations, among many other faults, Singularitarian Discourse seems to be something like an existential risk itself -- at any rate, it is certainly more so than the "risk" of the Big Bad Robot Gods that so exercise the imaginations and attentions of those devoted to the discourse.