Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Friday, June 10, 2005
GIGOO
As more and more computers continue to grow more and more powerful, more and more people worry more and more about the specter of hostile or simply indifferent nonbiological superhuman intelligences. This makes me think that far too much of the roboethical discourse around AI has been informed by monster movies.
Look: Computers do not have to "awaken" to do their mischief. Unfathomably powerful computers need not evolve to form demonic "intentions" to manage to destroy the world in any number of awful but readily conceivable ways.
The emphasis of many roboethicians on "hostility" and "intelligence" -- neither of which I personally imagine likely to be spit up by a hyperbolic MIPS curve or any near- to middle-term future chapter in Moore's extraordinary ongoing saga -- looks to me like a deranging distraction more than anything else.
I'm personally incomparably more scared of GIGOO (unprecedentedly powerful, complex, distributed, possibly self-recursive, probably globally networked, buggy software) than "Unfriendly AI."
And, no, I do not deny the possibility "in principle" of an engineered intelligence embodied in a nonbiological substrate. I don't think intelligence is some kind of ghost in the machine.
I just think that when computer enthusiasts talk about the looming arrival of artificial intelligence they (a) usually seem to overestimate the adequacy of their practical knowledge, (b) usually seem to overestimate the smooth function of technologies on the ground, and (c) usually seem to mean by "intelligence" rather less than what most people mean when they speak of human intelligence. This is all very familiar because of course these limitations have always bedeviled the discourse of artificial intelligence, especially when it takes to prognosticating.
But none of this should make anybody feel certain, a priori, that the strong program of artificial intelligence will never succeed... Indeed, as a good materialist I frankly see no reason at all why it shouldn't succeed eventually -- so long as civilization doesn't manage to destroy itself through commonplace short-sighted greed and superstition.
But, frankly, it seems to me technoprogressives have much more pressing things to worry about right about now than hostile nonbiological superhuman intelligences. After technoprogressives manage to grapple with unprecedentedly cheap and powerful WMD, address damaging climate change exacerbated by expanding petrochemical industrialization, deal with global pandemics incubated by technologically facilitated migration, slow pernicious wealth concentration via increasing automation, re-articulate intuitions about "consent" to accommodate cognitive modification via existing and emerging neuroceuticals, address prosthetic practices of medical enhancement and longevity within the frames of rights culture and the promotion of general welfare, and sort out the threats and promises of digital networked media and communications technologies to the democratic experiment, then, and only then, will it make much sense to me to expend technoprogressive energies in worries about making prospective HALs behave like pals.
GIGOO quandaries will have to be addressed through regulation, oversight, sequestration, and such. The hostile and malign intelligences with which we will no doubt have to deal soon enough (and in a real sense already do) belong to the criminal humans who will surely code or deploy these GIGOO nuisances. I can't imagine that any of these roboethical efforts will be helped along particularly if they are articulated through the prism of hype-notized handwaving enthusiasm or disasterbatory panic about superhuman computers or robot armies or other artificial intelligences.
1 comment:
This second comment about priorities is fair enough...
But as you can see in the supplemental GIGOO post from the next day, I am not just frustrated about the misplaced urgency of those preoccupied with "AI" -- whether Friendly or not -- and hence the misspent effort they inspire.
My deeper perplexity and worry are with the figures through which they communicate their quandaries, the way they frame their case, and the unrealistic general assumptions about technological development they mobilize. These problems drift into technoprogressive discourse more generally.
I personally think that some of the more curiously reductive and apolitical assumptions that get vented among a non-marginal number of advocates for the strong program of AI are troubling in ways that help account for the reluctance of many of the "greater-than-50%" to embrace a more technoprogressive stance.
I think roboethics -- the branch of technoethics focused on foresight and deliberation about the impact of automation and computation -- should certainly devote considerable energies to thinking through potential threats and promises in replicative software. I still don't understand why, after half a century of failed predictions and deranging projections, the default figure through which we try to take these complexities on is: "intelligence."
GIGOO is obviously a silly term -- but its heart is in the right place. What is wanted is an alternative figure that captures the complexity, but without the monster-movie entailments. (Well, come to think of it, I guess there was the Blob...)