Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, January 03, 2013

Reply to David Golumbia on Krugman's Futurological Forays

David Golumbia has posted an intelligent, wide-ranging response to my critique of Krugman's occasional forays into futurology. You should follow the link, where I have offered up the following reply (edited here in a few minor spots):

I do definitely agree that not only humans exhibit intelligence (ethical vegetarian here). I also agree that we should take care not to reduce discussion of "intelligence" to discussion of "mind" in whatever construal, especially in its recently fashionable computational figurations. I assume a conception of "intelligence" that admits of emotional dimensions as well as sociocultural ones that tend to be neglected in contemporary philosophical currents. In the passage you quote there is already the indication -- left unexplored, it is true -- that intelligence is expressed and incarnated in history and in social struggle, for one thing.

Even if we were to restrict "intelligence" to the bleak precinct of the body, I would not care to assume that only the brain is "its" seat, given the wide-ranging organismic terrain on which the nervous system plays. And even to the extent that the brain is where the action is insofar as intelligence is concerned, I do think it pays to notice, in Bruce Sterling's phrase, that the brain seems a whole lot more like a gland than, of all things, a computer.

I definitely agree that there is a deep-seated Cartesianism undergirding GOFAI, Singularitarianism, and techno-immortalist uploading fantasies -- which is paradoxical to say the least, because this Cartesianism is the stealth spiritualism in what they insist is a starkly materialist viewpoint (indeed, when I have countered that materialism about intelligence demands greater respect than they show for the actually-existing organismic and social materialization of intelligence, they tend rather nonsensically to accuse me of a chauvinist championing of "vitalism").

Now, in the post you discuss I fear I had too many plates spinning on poles already to explore these themes as they warrant. Also, my focus in the piece was more specifically rhetorical than broadly analytic. You may be right that speech recognition devices are getting better at what they do in the way Krugman says, maybe even enough to justify his skepticism about Robert J. Gordon's skepticism about digital-boosterism (the jury's very much out, and you can still count me with Gordon and the skeptics on that one). But it seems to me that in mobilizing futurological frames about super-intelligent robotic quasi-persons Krugman is committed to a fairly conventional AI discourse throughout the piece, whether the specificities of his technological examples warrant it or not, and hence my concerns about the ways such futurological frames threaten to denigrate actual intelligence remain relevant even if Krugman thinks he is talking about how Big Data might give us a Long Boom contra Gordon. It is the generic conventions of his futurological frame that lead him to conclude very much as he began, making sfnal claims about robot uprisings where he began with robot slaves.

Of course, it is true that our techniques and artifacts amplify the force of our agency, whether we are talking about an abacus, a bulldozer, or a speech-recognition device. It is also true that the benefits of such amplification are exactly as likely to exacerbate inequity and exploitation as to facilitate equity and flourishing, and it is political, not technical, agency that determines the difference.

As I said in the piece, I think Krugman contributes to the amplification of democratic agency through which the amplification of technical agency can be directed to the common good when he is writing as an economist, but I think he undermines this contribution when he is writing instead as a futurologist, shaped as that discourse is by techno-fetishisms, techno-reductionisms, and techno-triumphalisms that ultimately conduce to anti-democratic politics whatever the avowed intentions of those who deploy it.

And so, I definitely agree with you again when you say "there is every reason to believe that machines can and will replace everything we do, or nearly everything, unless we bring technological 'progress' under democratic control." This is why the piece included what might have seemed the digressive insistence that, "I do not agree that there can be a political science of free beings, I do not agree that there is a human destiny that beckons the clear-sighted but an open futurity inhering in the ineradicable diversity of stakeholders to the present, I do not agree that thinking what we are doing is the least bit about making profitable bets or making better prophecies. I think the skewed perspective of futurology may sometimes seem to be a matter of talking about robots but it is really more a matter of talking as if we are robots." Politics is profoundly misconceived as a scientific description of predictable regularities of human behavior rather than as an ongoing rhetorical and pragmatic project through which a plurality of stakeholders to a shared finite world reconcile the diversity of their aspirations.

Hari Seldon is simply the wrong hero for a liberal economist to have, if I may say so again, especially when Marx, Keynes, Polanyi, and Galbraith remain available, warts and all.