Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, December 23, 2014

Nicholas Carr on the Robot God Odds

It is easy to agree with Nicholas Carr when he says:
The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones -- flood, famine, pestilence, plague, war -- are still the best place to set your sights.
Carr is lampooning the nightmares and wet dreams of comparatively high-profile Singularitarian Robot Cultists like Ray Kurzweil, Elon Musk, and Nick Bostrom who think the danger of coding an unfriendly superintelligent AI before coding a friendly superintelligent AI is a matter of fierce concern. I say it is easy to agree with Carr on this, and I largely do: here and also here is my take on the relevant Kurzweil, here on Elon Musk and here on Nick Bostrom.

I do worry that there is something counterproductive in the way Carr is framing his very correct and commonsensical objection here, however. Just what is Carr conceding in his generous admission of logical possibility rather than policy-relevant probability to the published concerns of the Singularitarians? I cheerfully grant the logical possibility that phenomena we would describe as intelligence and consciousness could be materially instantiated otherwise than in biological brains and living nervous systems. I cheerfully grant that not only humans but dolphins and great apes and who knows who else could be taken for people and for rights-bearers. I cheerfully grant that we enter into legible political and discursive relations with nonhuman as well as human animals and certain machines (especially instruments that complicate our customary senses). These are all important and provocative arguments to have. But are these actually the arguments the Singularitarians are having?

Are Singularitarians making the rather large point that consciousness might be non-biologically materialized? Or are they mobilizing hoary sf cliches as relevant terms of policy art -- "friendliness"? "cyberangels"? really? Or are they making claims about the eventual triumph of a serially failed program based on reductive, sociopathic, body-loathing, stealthily spiritual, conspicuously faith-based AI-models?

The half-century-old techno-utopian dream of Good Old Fashioned AI (GOFAI) seems to me to model intelligence in ways that attest as much to social alienation and the quest for facile clarities and certainties as to our actual understanding (such as it is) of the actually existing material systems exhibiting actual intelligence and consciousness in the actual world. So seen, it is hardly surprising to find that research programs modeled on these assumptions keep failing. And it is hard to see why this failure would be circumvented by amplifying GOFAI from a failed quest to build intelligent AI on wrongheaded assumptions into a quest to build instead a superintelligent AI on the same wrongheaded assumptions. As marketing gambits go, it is true you can sell more of the same crappy unwholesome cola by adding a Big Gulp to the range of options, but it is hard to see how that makes your cola less crappy or more wholesome, if that's what you were worried about.

Like most techno-transcendental wish-fulfillment fantasists, Singularitarians want to be taken seriously on their own terms. They may not enjoy disagreement particularly, but they can appreciate even some forms of ridicule if these direct more attention their way or skew the co-ordinates of legitimate debate in their direction. There are few things Robot Cultists enjoy better than debating the Robot God Odds with skeptics on terms that they regard as "technical." This matters not least because even though the Singularitarians, Techno-Immortalists, Transhuman eugenicists, and "geo-engineers" like to declare themselves champions of enlightenment and science, they are drawn away from scientific consensus to the fringe in tenet after tenet after tenet and then assert their convictions in the unenlightened, undercritical tonalities of True Belief. I don't doubt that monks were annoyed with their opponents debating the number of angels who could dance on a pin-head, but so long as that was the debate there were asses in the pews, and that, after all, was the victory that mattered most.

I think this is the force of Charles Rubin's objection to Carr at the bioconservative Futurisms blog, namely: "there are also people attempting to develop machine consciousness, and while they may not get the resources or support they think they deserve, the tech culture at least seems largely on their side... [I]sn’t that something to worry about?" The Robot God may not exist and may not ever exist, but Robot Cultists do, and they are definitely, you know, doing stuff.

Like Carr, I think the partisans of superintelligence (or at any rate the partisans of taking superintelligence seriously on their terms) are selling moonshine. Their views are so symptomatic they might deserve to be taken seriously by the partisans' therapists. And their views are so flawed they probably deserve to be taken more seriously by whoever it was who graded their science papers, assuming they ever took a real science class. But it is still hard to see why their concerns should be taken seriously on their terms by policy-makers. The Singularitarians are practically on the road to nowhere, and it really does matter that we understand that reality as it actually is before we go on to worry instead about the more real danger that so many do take Singularitarians seriously on their own terms even though the Singularitarians are not serious on those terms.

As I have said many times, the example of the Hayekian Mont Pelerin Society reminds us that a small band of ideologues committed to discredited notions that happen to benefit and compliment the rich can sweep the world to the brink of ruin, just as the example of the neoconservatives reminds us that a small band of committed people, however ridiculous, can prevail to the cost of us all even when they are peddling not only discredited but outrageous notions that appeal to irrational passions. Even though the futurologists are peddling nonsense, there are many elite-incumbent interests, not to mention complacent consumerist technoscientific illiterates, that find titillation as well as useful and consoling rationalizations in their robocultic formulations. And there is plenty of damage that can be done when technodevelopmental discourse and policy are suffused with their deranging assumptions, aspirations, figures, and frames.

Although Carr does not fully elaborate the point himself, I think it is important to notice that he began his piece denigrating the silliness of superintelligent-AI discourse by observing that "[n]ow... we’ve branded every consumer good with a computer chip “smart[.]” Like Carr, I do not think there is any reason to take the least bit seriously the robocultic prediction that a superintelligent Robot God is on the horizon and that nothing much matters (not greenhouse gasses, not neoliberal precarization, not racist biases in policing, not arms proliferation, not human trafficking, not neglected treatable diseases in overexploited populations) apart from making sure that this superintelligent Robot God is not naughty but nice.

But I do think there is every reason to take enormously seriously the ever greater public prevalence -- in corporate entities like Google, in military entities like DARPA, in academic entities like Stanford and Oxford -- of the ideology underpinning these predictions about superintelligent AI:

There are reasons to think that the AI-faithful champion crappy software like autocorrect not because it is good at what it does but because they see it as a sort of fledgling robogodlet to which they owe their allegiance.

There are reasons to think that when we call cards and cars "smart" that are not smart at all, we begin to lose sight of the demands of legibility, dignity, and flourishing of people who actually are smart.

There are reasons to think we have more than enough "unfriendly" AI in the world already -- even if it looks nothing like what the Singularitarians are warning us about and distracting us with -- when algorithmic credit scoring stands in for judgments about whether humans deserve to be treated as homeowners and when Big Data profiles stand in for judgments about whether humans deserve to be targeted for extrajudicial murder.

When it comes to superintelligent AI, the odds aren't good and the goods are very odd. It matters that we take care to determine just what it is that matters in these matters.
