The futurological fantasy in question follows immediately, and it is very much the usual one: imagine "we could produce intelligent robots able to do everything a person can do." We cannot do this, and it matters that we cannot do this, and it matters that whenever we pretend otherwise we end up indulging in other pretenses that are even worse, among them denigrating what it is about people that makes them other than robots and hence becoming a bit more cavalier about our responsibilities to ensure people flourish as such. This is something Krugman would not otherwise countenance, but in my experience futurological discourse sometimes makes sensible humane people far more credulous and sloppy and insensitive than they would be under normal circumstances, and I have no reason to think Krugman would be any less susceptible (indeed, we have good reason to think this is a particular weakness for the usually reliably sensible humane Krugman, about which I will say more near the end of my piece).
What Krugman proposes is that if we could (as we cannot) produce intelligent robots able to do everything a person can do, "[then c]learly, such a technology would remove all limits on per capita GDP, as long as you don’t count robots among the capitas." Of course, this is far from clear at all; indeed it seems to me that a robot that could do literally ALL people can do would necessarily have to be included "among the capitas." If prosperity means nothing but a slave economy (and we know it does not), then the tried and true method requires merely mistreating people as though they were robots, rather than making a go at the whole unwieldy making actually intelligent robots that are then mistreated as unintelligent robots project anyway. Of course, Krugman does not advocate for a slave economy (except on Fox News where everybody to the left of Newt Gingrich is said to be advocating for a socialist slave camp), nor would he likely countenance the treatment of actually intelligent robots that could do literally anything people can as slaves either. Fortunately for us all, this is a dilemma which confronts none of us, for nobody is making anything even remotely like intelligent robots in the first place.
Krugman admits this right off the bat: "Now, that [ie, intelligent robots to the rescue] [i]s not happening -- and in fact, as I understand it, not that much progress has been made in producing machines that think the way we do." Let us pause here and say what Krugman does not. It isn't just that not much progress has been made in producing artificial intelligence, it is that since just before World War II when the idea of coding artificial intelligence first seriously captured the imagination of certain techno-utopians (I leave to the side a long pre-history of fascinating automatons and con-artists, even though they have in my view much more in common with contemporary adherents of AI and robo-utopianism than is commonly admitted, even among their skeptics) advocates of the idea have been predicting with stunning confidence the imminent arrival of AI pretty much every year on the year, year after year, and with never the slightest diminishment in their conviction, despite being always only completely wrong every single time. Very regularly, these adherents of AI speak of "intelligence" in ways that radically reduce the multiple dimensions and expressions of intelligence as it actually plays out in our everyday usage of the term, and often they seem to disparage and fear the vulnerability, error-proneness, and emotional richness of the actually incarnated intelligence materialized in biological brains and in historical struggles.
It is one thing to be a materialist about mind (I am one) and hence concede that other materializations than organismic brains might give rise in principle to phenomena sufficiently like consciousness to merit the application of the term, but it is altogether another thing to imply that there is any necessity about this, that there actually are any artifacts in the world here and now that exhibit anything near enough to warrant the term without doing great violence to it and those who merit its assignment, or to suggest we know enough in declaring mind to be material to be able to engineer one any time soon if ever given how much that is fundamental to thought that we simply do not yet understand.
One might like to think that this awareness is embedded in Krugman's admission that AI "isn't happening," but of course, were he to take this lesson to heart he would little likely have invited us down this garden path in the first place, and, true enough, he takes back his admission that AI "isn't happening" immediately after admitting it: "But it turns out that there are other ways of producing very smart machines." Let us be clear, if by "very smart" machines Krugman means very useful machines well designed by intelligent people then this is true (but we would still then have no reason to entertain his "fantasy technology scenario") but if by "very smart" machines he means machines actually exhibiting something like intelligence then this is still as not true as it was a minute ago. That is to say, it is not at all true. And for all the reasons I mentioned before, this is an untruth that it matters enormously to be clear about, because in attributing intelligence unintelligently we risk loosening the indispensable attribution of intelligence to those who actually incarnate it.
Krugman writes of the new "very smart machines":
In particular, Big Data -- the use of huge databases of things like spoken conversations -- apparently makes it possible for machines to perform tasks that even a few years ago were really only possible for people. Speech recognition is still imperfect, but vastly better than it was and improving rapidly, not because we’ve managed to emulate human understanding but because we’ve found data-intensive ways of interpreting speech in a very non-human way. And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly.

I do hope everybody takes note of the terrible argumentative burden being borne in this passage by the word "apparently" -- a burden that is especially noteworthy given how little evidence is offered up to render the claim, you know, actually "apparent." Quite apart from the silliness of pretending the enraging ineptitudes of Autocorrect and Siri, say, would suggest to anybody but a "true believer" in AI that "we are moving toward something like my intelligent-robots world" (do take note of that personally possessive "my" to describe a non-existing world of the future for which Krugman is uncharacteristically disdaining the empirical evidence of our -- you should note that pronoun, too -- actually existing world, peer to peer), I must protest the glib suggestion that one can still describe as the very human act of "interpretation" what Krugman is actually referring to when he speaks of "data-intensive… very non-human ways of… speech." Indeed, I protest that this suggestion is not just as bad as the falsehood of proposing as so many AI dead-enders do, and as Krugman seems to deny, that we have "emulated understanding" in code, but that the claim about machine "interpretation" is actually just another form of making exactly the same proposal.
Now, Krugman's whole discussion is a response to a piece by Robert J. Gordon proposing that "[g]lobal growth is slowing -- especially in advanced-technology economies. This column argues that regardless of cyclical trends, long-term economic growth may grind to a halt. Two and a half centuries of rising per-capita incomes could well turn out to be a unique episode in human history." In that piece, Gordon provides a handy little table summarizing the thrust of his argument and its assumptions, which Krugman reproduces in his response as well. Here is the key passage:
The analysis in my paper links periods of slow and rapid growth to the timing of the three industrial revolutions:

IR #1 (steam, railroads) from 1750 to 1830;

IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and

IR #3 (computers, the web, mobile phones) from 1960 to present.

It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972.

Krugman agrees both with Gordon's narrative of three key transformative technoscientific ensembles and with Gordon's insistence that the second ensemble was much more transformative than the third (in which we are presently caught up). Krugman's facile "intelligent robot scenario" is proposed precisely to suggest an as yet unrealized but presumably imminent (it isn't) amplification of the third ensemble that would render it even more transformative than the prior ensembles. I have long been a champion of Krugman's thesis that market fundamentalism represents a Dark Age of Macroeconomics in which public discussion of economic policy exhibits a basic illiteracy of Keynes(-Hicks) insights akin to the comparable policy illiteracies driving "intelligent design" into biology classrooms, climate-change denialism, abstinence-only education, more guns as the solution to gun violence, and on and on. But I have to wonder if Krugman's futurology in this instance is mobilized to defend an article of Keynesian faith actually much better left behind with the Dark Ages as well, the faith expressed in Economic Possibilities for Our Grandchildren that a prolongation of progress ensures prosperity for us all without the muss and fuss of radical politics simply via compound interest.
Quite apart from the extent to which Keynes was endorsing too much imperialism for comfort in that early argument, the deeper problem is that he was also endorsing, as so very many twentieth century intellectuals did, as "inevitable progress" what amounted to the inflation of a petrochemical bubble that so vastly amplified the forces available to human agency that it created an impression that its brute force could overcome all problems. This wasn't true, in fact it often led to catastrophically greater problems (the Dust Bowl, antibiotic resistance, car culture, desert cities depleting aquifers, rising GDP conjoined to rising stress and suicide and reports of dissatisfaction, etc.), but even if it were true it was never going to last forever, indeed it was never going to last long enough to smooth away the criminal unevenness distributing its benefits and its costs, and it is beginning to look like the only thing worse than finitude pretending to infinitude as resources run out is the possibility that the waste and pollution accompanying this false infinitude might actually manage to destroy the world before destroying the world by running out.
I agree with Krugman that Gordon's illustrative table is useful to a point, but I want to point out that accepting it too wholeheartedly obscures as much as it illustrates. Although petroleum makes an appearance in Gordon's second ensemble it seems to me it should be foregrounded considerably more, and that coal should probably appear just as prominently in the first ensemble. This would immediately clarify that part of what is lacking in the third ensemble is a substantial shift to renewable energy, the absence of which goes a long way to explain why the third ensemble really hasn't had anything like the transformative substance of the first and second. Recalling the famous introduction to Keynes' Economic Consequences of the Peace and its lament of the irrationally exuberant "Long Boom"-esque celebration of the networked globalism enabled by what Tom Standage has termed the Victorian Internet of telegraphy, one really is forced to question whether Gordon's third ensemble isn't really just the continuation of the second after all. Indeed, to the extent that the internet is still powered by coal and implemented on petrochemical devices -- and to the extent that one accepts my premise that especially the petrochemical epoch amounted to the inflation of a ruinous meta-bubble misconstrued as modern civilization -- then it is really hard not to wonder if Gordon's third ensemble represents anything but a more hysterically hyperbolic variation of the preceding fraud, a "digitality" enabling outrageous global financial fraud, tragic race to the bottom globalization, and distracting attention from economic collapse and environmental catastrophe with promises of virtual heavens and robot paradises.
When I suggest that part of what makes the third ensemble vacuous is the lack of renewable energy investment I might seem to be providing my own variation on Krugman's robotic supplement to renew hopes for progress, but I would remind both Gordon and Krugman of Yochai Benkler's provocative suggestion that the substantial impact of digitization is precisely anti-industrial, where what is taken to be unique to industrialism is the reliance for productivity on capital-intensive infrastructure investment which in turn ensures concentrations of authority countervailing the otherwise democratizing force of comparatively more disseminated prosperity. I do indeed still believe in the possibility of progress, but I would not characterize it as industrial but absolutely anti-industrial in character, a matter of relocalized and disseminated investment, democratic and accountable authority, situated and networked knowledge, peer to peer. Political struggle in the direction of equity-in-diversity, and stakeholder/knowledge-struggle toward the solution of shared problems still looks to me like progress, but it is a matter of taking up democratic effort, not abdicating agency in a hope for techno-transcendence.
Krugman genuflects a bit unconvincingly toward such political realities in an aside:
Ah, you ask, but what about the people? Very good question. Smart machines may make higher GDP possible, but also reduce the demand for people -- including smart people. So we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots. And then eventually Skynet decides to kill us all, but that’s another story.

Of course, there is nothing so conventional among futurologists of the most embarrassingly Robot Cultic kind as to propose altogether flabbergasting wish-fulfillment fantasies, involving sooper-genius brain upgrades, living forever in shiny sexy robot bodies, wallowing atop nanobotic treasure piles or in Holodeck heavens, and so on and so forth, but then to attempt to boost their credibility as Very Serious intellectuals by piously warning us of the dangers of clone armies, robotic uprisings, Robot Gods eating humans as computronium feedstock, and so on. That is to say, they provide a little disasterbatory hyperbole as a "balance" to their techno-transcendent hyperbole.
While these hoary sfnal conceits made for some diverting fiction when they first appeared decades ago and still can be jolted into life with great writing, great acting, great special effects doing some serious heavy lifting, I cannot pretend to find much in the way of original insight in this sort of stuff let alone, for heaven's sake, thoughtful policy-making. Of course, these literary expressions are most powerful when they provide critical purchase on our current predicaments: the rhetorical force of the genre depends on the framing narrative machinery through which what is proffered under the guise of future prediction or projection provides in fact the needed alienation to re-imagine our inhabitation of the present differently, more capaciously, more critically. When futurological scenarists go on to republish simpleton sketches of the scenery of literary sf and then treat this most dispensable furniture as an analytic mode involving literal prediction and projection of "the future" (which doesn't exist, and can only become the focus of identification at the cost of dis-identification with the present) the result debauches the literary form it steals from while at once it deranges the policy form it seeks to promote itself as.
Notice that one of the things one is not talking about when one is talking about perpetual GDP growth via intelligent robots (or the Very Serious non-worry of plutocratic slavebot plantation societies) is how incomparable wealth concentration was abetted through the upward distribution of profitable productivity gains of automation in the context of the destruction of organized labor in the United States in the aftermath of the great but incomplete gains the middle class won following the New Deal -- about which Krugman has useful things to say when he isn't impersonating a futurological guru. In other words, when one is talking futurologically one is talking about things that don't and won't exist rather than talking about things that do, or at any rate talking about things that do exist only in highly hyperbolized and symptomatic ways that render them unavailable for useful critical engagement, even though, as here, the actual reality of automation provides the disavowed real world substance on which the futurological fancies of intelligent slavebots probably ultimately depend for much of their intuitive force anyway.
Needless to say, I find little comfort in Krugman's jokey futurological offer of a Terminator flip-side to his transparently consumer-capitalist robo-utopia as ideological guarantor of eternal progress, and I am not at all edified to see someone I otherwise admire quite a lot (I've read all of his books, including the textbooks and memoirs, and often link to his work here, and of course I will continue to do so with great pleasure and to my great benefit) stooping so low. I'll return the favor with the low blow of reminding readers that as a kid Krugman wanted to be Asimov's Foundational Hari Seldon when he grew up, and regards economics as a poor substitute but perhaps a serviceable one for "psychohistory" -- which Krugman imagines as a discipline integrating economics, political science, and sociology (and no doubt "Big Data") -- "a social science that gives its acolytes a unique ability to understand and perhaps shape human destiny." Interesting word choice, acolytes! While I think it is enormously important for human beings to try to understand the times in which we live, the meaning of events that beset us, the history which we take up, the legacies with which we will come to grapple later in life as will generations who follow after us, I do not agree that there can be a political science of free beings, I do not agree that there is a human destiny but the open futurity inhering in the ineradicable diversity of stakeholders to the present, I do not believe that thinking what we are doing is the least bit about making profitable bets or making better prophesies. I think the skewed perspective of futurology may seem to be a matter of talking about robots but it is really more a matter of talking as if we are robots.
Here is Krugman's final thought: "Anyway, [this is] interesting stuff to speculate about -- and not irrelevant to policy, either, since so much of the debate over entitlements is about what is supposed to happen decades from now." May I suggest by way of conclusion that the primary relevance of this speculation to future policy outcomes is precisely the deranging impact of this genre of speculation on policy-making in general. Consider the way in which futurological daydreams about longevity gains have provided the rationale for suggestions that the retirement age be delayed -- even though expectations of longevity at retirement age haven't increased at all for most people who have to work for a living, although no doubt superannuated senators and wonks in their cushy posts may feel their prospects past sixty-five are long. Consider the way in which futurological daydreams of megascale geo-engineering projects provide corporate rationales for continued paralysis in the face of anthropogenic climate catastrophe -- rationales in which the very corporate-military actors who exacerbated and denied climate change are cast as convenient imaginary saviors from climate change, no less profitably of course but much less accountably due to the conditions of emergency, recklessly proposing hosts of unilateral interventions into ill-understood climate systems, willy-nilly, at industrial scales, with who knows what consequences, all the while decrying democratic environmental politics of education, regulation, incentivization, and public investment as hopelessly corrupt, dead on arrival, emotionally overwrought.
I am far from denying the necessity for policy makers to have recourse to consensus science in crafting effective legislation, making sound investments, planning for likely problems and opportunities. Every actually legibly constituted scientific and academic and professional discipline has a foresight dimension -- but there is no analytic discipline evacuated of or subsuming all specificity that produces "foresight in general," and there is no literary discipline devoted to testable hypotheses rather than to meaning-making through salient narrative, figurative, logical association. There is no such thing as "The Future" qua destination or Destiny, nor such forces as "trends" one can ride to that destination or Destiny: there are only judgments and narratives that provide purchase on the present and only to that extent provide some measure of guidance as the present opens onto the next present. There are few economists who provide us a better grasp, through the application of empirically grounded models, of the complex, dynamic policy terrain of international economics, uneven technodevelopment, and liquidity traps than Paul Krugman. He is invaluable in the work of understanding where we are going from the present, and as such he has no reason to pine after prophetic utterances.
We have no reason to think intelligent robots are on the way in any sense remotely relevant to responsible policy concern. And it won't be economists (or pop-tech journalists or, worst of all, futurologists) we should be reading to gain a sense of when intelligent robots are proximate enough to assume real world relevance, it will be biologists, neuroscientists, engineers. But we have every reason to think that were intelligent robots to arrive on the scene they would do so only after who knows how many intermediary steps had been made, at every single one of which there will be quandaries for policy to address that will be shaped by the stakeholders to the changes of the moment, the shaping of which will articulate in turn the terms and stakes on which the next change will depend. The distances and destinies of the futurologists exert little force and provide little insight on the complex vicissitudes of technoscientific change and technodevelopmental struggle, and their pristine lines of techno-teleologic rarely have much at all to do with the shape and substance and stakes that drive the way to eventual outcomes. There is plenty for policy-makers to grapple with as we are beset by dumb automation in the hands of plutocrats, and every moment devoted to wish-fulfillment fantasies of intelligent robot friends and foes is a moment stolen from matters actually at hand, many of them sufficiently urgent that our failure to be equal to them guarantees as nothing else could that futurological fancies never find their way even to some fragmentary fruition.