Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, December 27, 2012

Krugman Flirts With Robot Cultism

In a recent post, Paul Krugman asks us to "[c]onsider for a moment a sort of fantasy technology scenario." The appearance of that word "scenario" should be chilling enough to those who know it tends to portend think-tank non-thinking is on the way. That it is preceded by the word "technology" removes all doubt, and as for the word "fantasy," well, whenever the words "technology" and "scenario" are combined one should more or less just consider that one implied.

The futurological fantasy in question follows immediately, and it is very much the usual one: imagine "we could produce intelligent robots able to do everything a person can do." We cannot do this, and it matters that we cannot do this, and it matters that whenever we pretend otherwise we end up indulging in other pretenses that are even worse, among them denigrating what it is about people that makes them other than robots and hence becoming a bit more cavalier about our responsibilities to ensure people flourish as such. This is something Krugman would not otherwise countenance, but in my experience futurological discourse sometimes makes sensible humane people far more credulous and sloppy and insensitive than they would be under normal circumstances, and I have no reason to think Krugman would be any less susceptible (indeed, we have good reason to think this is a particular weakness for the usually reliably sensible humane Krugman, about which I will say more near the end of my piece).

What Krugman proposes is that if we could (as we cannot) produce intelligent robots able to do everything a person can do, "[then c]learly, such a technology would remove all limits on per capita GDP, as long as you don’t count robots among the capitas." Of course, this is far from clear at all; indeed it seems to me that to create robots that could do literally ALL people can do, they would necessarily have to be included "among the capitas." If prosperity means nothing but a slave economy (and we know it does not), then the tried and true method requires merely mistreating people as though they were robots, rather than making a go at the whole unwieldy project of making actually intelligent robots that are then mistreated as unintelligent robots anyway. Of course, Krugman does not advocate for a slave economy (except on Fox News, where everybody to the left of Newt Gingrich is said to be advocating for a socialist slave camp), nor would he likely countenance the treatment as slaves of actually intelligent robots that could do literally anything people can. Fortunately for us all, this is a dilemma which confronts none of us, for nobody is making anything even remotely like intelligent robots in the first place.

Krugman admits this right off the bat: "Now, that [ie, intelligent robots to the rescue] [i]s not happening -- and in fact, as I understand it, not that much progress has been made in producing machines that think the way we do." Let us pause here and say what Krugman does not. It isn't just that not much progress has been made in producing artificial intelligence, it is that ever since just before World War II, when the idea of coding artificial intelligence first seriously captured the imagination of certain techno-utopians (I leave to the side a long pre-history of fascinating automatons and con-artists, even though they have in my view much more in common with contemporary adherents of AI and robo-utopianism than is commonly admitted, even among their skeptics), advocates of the idea have been predicting with stunning confidence the imminent arrival of AI pretty much every year on the year, year after year, and with never the slightest diminishment in their conviction, despite being always only completely wrong every single time. Very regularly, these adherents of AI speak of "intelligence" in ways that radically reduce the multiple dimensions and expressions of intelligence as it actually plays out in our everyday usage of the term, and often they seem to disparage and fear the vulnerability, error-proneness, and emotional richness of the actually incarnated intelligence materialized in biological brains and in historical struggles.
It is one thing to be a materialist about mind (I am one) and hence concede that other materializations than organismic brains might give rise in principle to phenomena sufficiently like consciousness to merit the application of the term, but it is altogether another thing to imply that there is any necessity about this, that there actually are any artifacts in the world here and now that exhibit anything near enough to warrant the term without doing great violence to it and those who merit its assignment, or to suggest that we know enough in declaring mind to be material to be able to engineer one any time soon if ever, given how much that is fundamental to thought we simply do not yet understand.

One might like to think that this awareness is embedded in Krugman's admission that AI "isn't happening," but of course, were he to take this lesson to heart he would little likely have invited us down this garden path in the first place, and, true enough, he takes back his admission that AI "isn't happening" immediately after admitting it: "But it turns out that there are other ways of producing very smart machines." Let us be clear: if by "very smart" machines Krugman means very useful machines well designed by intelligent people, then this is true (but we would still then have no reason to entertain his "fantasy technology scenario"), but if by "very smart" machines he means machines actually exhibiting something like intelligence, then this is still as not true as it was a minute ago. That is to say, it is not at all true. And for all the reasons I mentioned before, this is an untruth that it matters enormously to be clear about, because in attributing intelligence unintelligently we risk loosening the indispensable attribution of intelligence to those who actually incarnate it.

Krugman writes of the new "very smart machines":
In particular, Big Data -- the use of huge databases of things like spoken conversations -- apparently makes it possible for machines to perform tasks that even a few years ago were really only possible for people. Speech recognition is still imperfect, but vastly better than it was and improving rapidly, not because we’ve managed to emulate human understanding but because we’ve found data-intensive ways of interpreting speech in a very non-human way. And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly.
I do hope everybody takes note of the terrible argumentative burden being borne in this passage by the word "apparently" -- a burden that is especially noteworthy given how little evidence is offered up to render the claim, you know, actually "apparent." Quite apart from the silliness of pretending the enraging ineptitudes of Autocorrect and Siri, say, would suggest to anybody but a "true believer" in AI that "we are moving toward something like my intelligent-robots world" (do take note of that personally possessive "my" to describe a non-existing world of the future for which Krugman is uncharacteristically disdaining the empirical evidence of our -- you should note that pronoun, too -- actually existing world, peer to peer), I must protest the glib suggestion that one can still describe as the very human act of "interpretation" what Krugman is actually referring to when he speaks of "data-intensive… very non-human ways of… speech." Indeed, I protest that this suggestion is not just as bad as the falsehood of proposing, as so many AI dead-enders do, and as Krugman seems to deny, that we have "emulated understanding" in code, but that the claim about machine "interpretation" is actually just another form of making exactly the same proposal.

Now, Krugman's whole discussion is a response to a piece by Robert J. Gordon proposing that "[g]lobal growth is slowing -- especially in advanced-technology economies. This column argues that regardless of cyclical trends, long-term economic growth may grind to a halt. Two and a half centuries of rising per-capita incomes could well turn out to be a unique episode in human history." In that piece, Gordon provides a handy little table summarizing the thrust of his argument and its assumptions, which Krugman reproduces in his response as well. Here is the key passage:
The analysis in my paper links periods of slow and rapid growth to the timing of the three industrial revolutions:
IR #1 (steam, railroads) from 1750 to 1830;
IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and
IR #3 (computers, the web, mobile phones) from 1960 to present.
It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972.
Krugman agrees both with Gordon's narrative of three key transformative technoscientific ensembles and with Gordon's insistence that the second ensemble was much more transformative than the third (in which we are presently caught up). Krugman's facile "intelligent robot scenario" is proposed precisely to suggest an as yet unrealized but presumably imminent (it isn't) amplification of the third ensemble that would render it even more transformative than the prior ensembles. I have long been a champion of Krugman's thesis that market fundamentalism represents a Dark Age of Macroeconomics in which public discussion of economic policy exhibits a basic illiteracy of Keynes(-Hicks) insights akin to the comparable policy illiteracies driving "intelligent design" into biology classrooms, climate-change denialism, abstinence-only education, more guns as the solution to gun violence, and on and on. But I have to wonder if Krugman's futurology in this instance is mobilized to defend an article of Keynesian faith actually much better left behind with the Dark Ages as well, the faith expressed in Economic Possibilities for Our Grandchildren that a prolongation of progress ensures prosperity for us all without the muss and fuss of radical politics simply via compound interest.

Quite apart from the extent to which Keynes was endorsing too much imperialism for comfort in that early argument, the deeper problem is that he was also endorsing, as so very many twentieth century intellectuals did, as "inevitable progress" what amounted to the inflation of a petrochemical bubble that so vastly amplified the forces available to human agency that it created an impression that its brute force could overcome all problems. This wasn't true, in fact it often led to catastrophically greater problems (the Dust Bowl, antibiotic resistance, car culture, desert cities depleting aquifers, rising GDP conjoined to rising stress and suicide and reports of dissatisfaction, etc.), but even if it were true it was never going to last forever, indeed it was never going to last long enough to smooth away the criminal unevenness distributing its benefits and its costs, and it is beginning to look like the only thing worse than finitude pretending to infinitude as resources run out is the possibility that the waste and pollution accompanying this false infinitude might actually manage to destroy the world before destroying the world by running out.

I agree with Krugman that Gordon's illustrative table is useful to a point, but I want to point out that accepting it too wholeheartedly obscures as much as is illustrated. Although petroleum makes an appearance in Gordon's second ensemble it seems to me it should be foregrounded considerably more, and that coal should probably appear just as prominently in the first ensemble. This would immediately clarify that part of what is lacking in the third ensemble is a substantial shift to renewable energy, the absence of which goes a long way to explain why the third ensemble really hasn't had anything like the transformative substance of the first and second. Recalling the famous introduction to Keynes' Economic Consequences of the Peace and its lament of the irrationally exuberant "Long Boom"-esque celebration of the networked globalism enabled by what Tom Standage has termed the Victorian Internet of telegraphy, one really is forced to question whether Gordon's third ensemble isn't really just the continuation of the second after all. Indeed, to the extent that the internet is still powered by coal and implemented on petrochemical devices -- and to the extent that one accepts my premise that especially the petrochemical epoch amounted to the inflation of a ruinous meta-bubble misconstrued as modern civilization -- then it is really hard not to wonder if Gordon's third ensemble represents anything but a more hysterically hyperbolic variation of the preceding fraud, a "digitality" enabling outrageous global financial fraud and tragic race-to-the-bottom globalization, and distracting attention from economic collapse and environmental catastrophe with promises of virtual heavens and robot paradises.

When I suggest that part of what makes the third ensemble vacuous is the lack of renewable energy investment I might seem to be providing my own variation on Krugman's robotic supplement to renew hopes for progress, but I would remind both Gordon and Krugman of Yochai Benkler's provocative suggestion that the substantial impact of digitization is precisely anti-industrial, where what is taken to be unique to industrialism is the reliance for productivity on capital-intensive infrastructure investment which in turn ensures concentrations of authority countervailing the otherwise democratizing force of comparatively more disseminated prosperity. I do indeed still believe in the possibility of progress, but I would not characterize it as industrial but absolutely anti-industrial in character, a matter of relocalized and disseminated investment, democratic and accountable authority, situated and networked knowledge, peer to peer. Political struggle in the direction of equity-in-diversity, and stakeholder/knowledge-struggle toward the solution of shared problems still looks to me like progress, but it is a matter of taking up democratic effort, not abdicating agency in a hope for techno-transcendence.

Krugman genuflects a bit unconvincingly toward such political realities in an aside:
Ah, you ask, but what about the people? Very good question. Smart machines may make higher GDP possible, but also reduce the demand for people -- including smart people. So we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots. And then eventually Skynet decides to kill us all, but that’s another story.
Of course, there is nothing so conventional among futurologists of the most embarrassingly Robot Cultic kind to propose altogether flabbergasting wish-fulfillment fantasies, involving sooper-genius brain upgrades, living forever in shiny sexy robot bodies, wallowing atop nanobotic treasure piles or in Holodeck heavens, and so on and so forth, but then to attempt to boost their credibility as Very Serious intellectuals by piously warning us of the dangers of clone armies, robotic uprisings, Robot Gods eating humans as computronium feedstock, and so on. That is to say, they provide a little disasterbatory hyperbole as a "balance" to their techno-transcendent hyperbole.

While these hoary sfnal conceits made for some diverting fiction when they first appeared decades ago and still can be jolted into life with great writing, great acting, great special effects doing some serious heavy lifting, I cannot pretend to find much in the way of original insight in this sort of stuff let alone, for heaven's sake, thoughtful policy-making. Of course, these literary expressions are most powerful when they provide critical purchase on our current predicaments: the rhetorical force of the genre depends on the framing narrative machinery through which what is proffered under the guise of future prediction or projection provides in fact the needed alienation to re-imagine our inhabitation of the present differently, more capaciously, more critically. When futurological scenarists go on to republish simpleton sketches of the scenery of literary sf and then treat this most dispensable furniture as an analytic mode involving literal prediction and projection of "the future" (which doesn't exist, and can only become the focus of identification at the cost of dis-identification with the present) the result debauches the literary form it steals from while at once it deranges the policy form it seeks to promote itself as.

Notice that one of the things one is not talking about when one is talking about perpetual GDP growth via intelligent robots (or the Very Serious non-worry of plutocratic slavebot plantation societies) is how incomparable wealth concentration was abetted through the upward distribution of the profitable productivity gains of automation in the context of the destruction of organized labor in the United States, in the aftermath of the great but incomplete gains the middle class made under the New Deal -- about which Krugman has useful things to say when he isn't impersonating a futurological guru. In other words, when one is talking futurologically one is talking about things that don't and won't exist rather than talking about things that do, or at any rate talking about things that do exist only in highly hyperbolized and symptomatic ways that render them unavailable for useful critical engagement, even though, as here, the actual reality of automation provides the disavowed real world substance on which the futurological fancies of intelligent slavebots probably ultimately depend for much of their intuitive force anyway.

Needless to say, I find little comfort in Krugman's jokey futurological offer of a Terminator flip-side to his transparently consumer-capitalist robo-utopia as ideological guarantor of eternal progress, and I am not at all edified to see someone I otherwise admire quite a lot (I've read all of his books, including the textbooks and memoirs, and often link to his work here, and of course I will continue to do so with great pleasure and to my great benefit) stooping so low. I'll return the favor with the low blow of reminding readers that as a kid Krugman wanted to be Asimov's Foundational Hari Seldon when he grew up, and regards economics as a poor but perhaps serviceable substitute for "psychohistory" -- which Krugman imagines as a discipline integrating economics, political science, and sociology (and no doubt "Big Data") -- "a social science that gives its acolytes a unique ability to understand and perhaps shape human destiny." Interesting word choice, acolytes! While I think it is enormously important for human beings to try to understand the times in which we live, the meaning of events that beset us, the history which we take up, the legacies with which we will come to grapple later in life as will generations who follow after us, I do not agree that there can be a political science of free beings, I do not agree that there is a human destiny but the open futurity inhering in the ineradicable diversity of stakeholders to the present, I do not believe that thinking what we are doing is the least bit about making profitable bets or making better prophecies. I think the skewed perspective of futurology may seem to be a matter of talking about robots but it is really more a matter of talking as if we are robots.

Here is Krugman's final thought: "Anyway, [this is] interesting stuff to speculate about -- and not irrelevant to policy, either, since so much of the debate over entitlements is about what is supposed to happen decades from now." May I suggest by way of conclusion myself that the primary relevance of this speculation to future policy outcomes is precisely the deranging impact of this genre of speculation on policy-making in general. Consider the way in which futurological daydreams about longevity gains have provided the rationale for suggestions that the retirement age be delayed -- even though expectations of longevity at retirement age haven't increased at all for most people who have to work for a living, although no doubt superannuated senators and wonks in their cushy posts may feel their prospects past sixty-five are long. Consider the way in which futurological daydreams of megascale geo-engineering projects provide corporate rationales for continued paralysis in the face of anthropogenic climate catastrophe -- rationales in which the very corporate-military actors who exacerbated and denied climate change are cast as convenient imaginary saviors from climate change, no less profitably of course but much less accountably due to the conditions of emergency, recklessly proposing hosts of unilateral interventions into ill-understood climate systems, willy-nilly, at industrial scales, with who knows what consequences, all the while decrying democratic environmental politics of education, regulation, incentivization, and public investment as hopelessly corrupt, dead on arrival, emotionally overwrought.

I am far from denying the necessity for policy-makers to have recourse to consensus science in crafting effective legislation, making sound investments, and planning for likely problems and opportunities. Every actually legibly constituted scientific and academic and professional discipline has a foresight dimension -- but there is no analytic discipline evacuated of or subsuming all specificity that produces "foresight in general," and there is no literary discipline devoted to testable hypotheses rather than to meaning-making through salient narrative, figurative, and logical association. There is no such thing as "The Future" qua destination or Destiny, nor such forces as "trends" one can ride to that destination or Destiny: there are only judgments and narratives that provide purchase on the present and only to that extent provide some measure of guidance as the present opens onto the next present. Few economists provide us a better grasp, through the application of empirically grounded models, of the complex, dynamic policy terrain of international economics, uneven technodevelopment, and liquidity traps than Paul Krugman. He is invaluable in the work of understanding where we are going from the present, and as such he has no reason to pine after prophetic utterances.

We have no reason to think intelligent robots are on the way in any sense remotely relevant to responsible policy concern. And it won't be economists (or pop-tech journalists or, worst of all, futurologists) we should be reading to gain a sense of when intelligent robots are proximate enough to assume real world relevance, it will be biologists, neuroscientists, engineers. But we have every reason to think that were intelligent robots to arrive on the scene they would do so only after who knows how many intermediary steps had been made, at every single one of which there will be quandaries for policy to address that will be shaped by the stakeholders to the changes of the moment, the shaping of which will articulate in turn the terms and stakes on which the next change will depend. The distances and destinies of the futurologists exert little force and provide little insight on the complex vicissitudes of technoscientific change and technodevelopmental struggle, and their pristine lines of techno-teleologic rarely have much at all to do with the shape and substance and stakes that drive the way to eventual outcomes. There is plenty for policy-makers to grapple with as we are beset by dumb automation in the hands of plutocrats, and every moment devoted to wish-fulfillment fantasies of intelligent robot friends and foes is a moment stolen from matters actually at hand, many of them sufficiently urgent that our failure to be equal to them guarantees as nothing else could that futurological fancies never find their way even to some fragmentary fruition.


Anonymous said...

Foxconn in China (who makes among other things every single electronic component for Apple, Inc.) will be in the next 18 months firing 1 million factory wage slaves and replacing them entirely with decidedly un-intelligent robots. We will almost certainly never see the magic AGI, but robotic automation is set to wreak terrible havoc and wild carnage on labor. is also replacing its entire warehouse staff with robots... once the riots have died down, and half the industrialized world is burned down, will we finally see an end to endemic Calvinism and the dawn of Guaranteed Minimal Income?

Dale Carrico said...

If a basic income guarantee is instituted (I am an advocate of BIG, but expect it to take the form of a bundle of welfare entitlements in the context of steeply progressive taxes rather than a lump payment), it will not be because of proliferating robots but because of social struggle, which is exactly how it would be instituted if automation, outsourcing, and crowdsourcing weren't taking place as well. I agree neoliberal technodevelopmental precarization and exploitation make BIG urgent, but I disagree that futurological fixation with robots actually provides much clarity or unique insight on the assumptions, aspirations, or tactics involved. If anything, futurological commandeering of the topic renders it less likely to assume the position in the radical political imaginary it deserves, in my view.

Anonymous said...

A bundle of welfare entitlements will stigmatize and ghettoize. (We learned this I hope from the failure of the Great Society centralized housing projects.) And unless the Rednecks get a check also, the idea will be racialized. Welfare obscures and dooms the whole concept. In Beijing, I saw how the homeless are rounded up. They are then given a medical check, a dorm bed, meals, and they are assigned a street to clean and provided with cleaning tools... minimal guarantees of housing, food, clothing, medical care and education. This must be the new economic paradigm, and aligned with pragmatic Green policy. This must be defined as fundamental, legal human rights - not entitlements. This is the human element of the equation the fundamentally dehumanizing transhumanists are blind to.

Anonymous said...

Software is now automating 90% of the tasks that once defined a "lawyer" also; so the automation is eating into the white collar as well... a future where prostitution is the only growth industry left is dystopian and dismal indeed.

Dale Carrico said...

I think the stigmatization, ghettoization, racialization arguments are losing their force in our diversifying, secularizing, planetizing US conjuncture, but losing is not the same thing as lost and so the point is of course well taken. Ultimately I argue that these guarantees (healthcare, education, income) work to ensure a scene of legible informed nonduressed consent to the terms of everyday intercourse and hence should be seen as of a piece with the state's provision of nonviolent alternatives for the adjudication of disputes via representation and courts, and the state's circumvention of the structural violence in parochial exploitation of common or public goods, and so I see so-called welfare entitlements as of a piece with a properly democratic commitment to nonviolence. That seems to me the ultimate significance of FDR's Four Freedoms as an inflection point comparable to Lincoln's. I do agree with you that healthcare, education, housing, income should be regarded as basic human rights, and I didn't mean to get your hackles up calling them "entitlements." I know where you are coming from, definitely I've heard the arguments, but I guess I do think that all human beings are entitled to these and that is part of what I think it means to call them rights. I would say -- and have done! -- that the transhumanoids are blind to MANY things.

jimf said...

> Of course, there is nothing so conventional among futurologists
> of the most embarrassingly Robot Cultic kind to propose altogether
> flabbergasting wish-fulfillment fantasies, involving
> sooper-genius brain upgrades, living forever in shiny sexy
> robot bodies, wallowing atop nanobotic treasure piles or
> in Holodeck heavens, and so on and so forth, but then to attempt
> to boost their credibility as Very Serious intellectuals by
> piously warning us of the dangers of clone armies, robotic
> uprisings, Robot Gods eating humans as computronium feedstock,
> and so on. . .

Apropos of which -- a book in the works, from Springer.

Anonymous said...

I think this is a very uncharitable reading of Krugman's post. Thought experiments are unbelievably helpful tools for understanding how the different parts in our society affect each other. In this thought experiment Krugman clearly extrapolates from fictional evidence (and he readily admits this) in order to visualize what a society would look like when all labor is replaced by capable (intelligent) robots. It's exactly because Krugman habitually does these thought experiments that he has become such a clear thinker about implications of macro-economic policy. It's exactly the reason why he can so easily identify fundamental mistakes made by other renowned economists.

To argue that those robots are deserving of equal rights if they are as capable as people is to miss the point entirely. Thought experiments like these are constructed precisely so we can think of the economic consequences of changes in the world or in policy *without* derailing the conversation with ethical considerations. Of course the thought experiment still works when you substitute "intelligent robots" with "human slaves". All we need to do is increase the ratio of human slaves to people and we can get any GDP we want (hurrah!). Obviously it's not a coincidence that this substitution works. It's precisely the reason why Krugman introduces the concept of intelligent robots that are capable workers but conveniently sub-human. This means Krugman is painting a dystopian picture.

So to suggest that Krugman is fantasizing about a "consumer-capitalist robo-utopia as ideological guarantor of eternal progress" is utterly ridiculous. He even goes out of his way to correct this misconception: "we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots". Spoiler: here you can make the substitution too. For somebody who has been reading Krugman I'm amazed that you think Krugman is so stupid as to believe that a world without a middle class but with a small elite that controls all robots and GDP is anything but a dystopia. That you could think Krugman could write that paragraph without reflecting for a second and seeing the parallels to the real world.

Krugman goes out of his way, time and time again, to qualify every statement he makes. Is that enough for you? No. Krugman may not simply make his case about economic growth -- you effectively demand that he not speak of GDP growth in the future at all, or that he speak of the radical social change required to steer away from our present dystopian trajectory. This demand is completely unreasonable. If you don't believe you're making this unreasonable demand, ask yourself how else Krugman could possibly have approached this subject. In your zeal to seek and destroy robot cultists you see robot cultists even where there are none.

Dale Carrico said...

Krugman: "And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly. This in turn means that Gordon is probably wrong about diminishing returns to technology."

Here, "something like" and "probably" both qualify the assertions but the assertions are plain, and plainly meant to be factual. Very much to the contrary, I see nothing like either AI or golems on the way, and no reason to assess Gordon's third digital ensemble as more transformative than it seems in fact to be (I notice you didn't have much to say about my deflation of digital-utopian hype).

I cannot agree with you that the alternative to accepting the literal truth of Krugman's assertion of a futurological acceleration thesis here is "dystopian" -- since I actually am claiming that much that is described as "utopian" in such formulations is fraudulent anyway. A world with more renewable energy but also more energy efficiency, a world with more sustainable polyculture but also population decline (requiring little more than the empowering of women), a world with more equity-in-diversity via basic healthcare, education, basic income and steeply progressive taxes seems like a non-dystopian future, just not one that indulges futurological wish-fulfillment fantasies.

"Robot Cultism" has a real organizational existence, but it is above all a discourse with legible assumptions, conceits, frames, figures, tropes, topics, which I identify, elaborate, and pressure in my critique. It is true that I criticize these wherever they appear, even when they appear in the work of intelligent people I otherwise admire like Krugman. You ask how else could Krugman have approached this topic -- I pointed out in the critique itself that he could have confined himself to the discussion of the effects of actual automation in the actual world as he actually has done very well elsewhere.

I do not claim Krugman is a Robot Cultist; I claim that in this post he made an argument that "flirts" with the futurological in ways that render this writing less useful than he usually is. I still think this is true, and I don't really think it is "uncharitable" to say so. I agree that futurology makes everybody who indulges in it a bit more stupid than they have to be.

Thanks for engaging with the argument, though. You are right that one shouldn't treat the emphases in thought experiments as literal assertions any more than one would with allegories; I just think Krugman's own argument applies them factually, and hence it is fair for a critic to do the same.

jollyspaniard said...

Dale, I agree with you that it's a bit fatuous to break down the industrial revolution into three stages. Two things I'd like to point out. Petroleum extraction is dependent on computers nowadays; the days of There Will Be Blood are long gone. Whenever I hear people make these kinds of distinctions it's best to take cover: bullshit is incoming.

And computers dovetail with renewables: PV solar uses technology similar to silicon chipmaking.

Anonymous said...

So it seems that your main objection to Krugman's post is that he claims that technological advances by unintelligent robots are "in a sense" similar to the advances made by intelligent robots in science fiction because the implications are the same: in both cases people are replaced by machines. So yes, we are moving to "something like" a world with intelligent robots, even though we live in a world without AI, golems or intelligent robots. Because it turns out that relatively mundane and gradual advances in robotics, big data, dumb-AI and so on are capable of solving problems that we used to believe required human-level intelligence. So the economic consequences of things that used to be the domain of AI (and fiction) have become economic consequences of mundane (but fascinating) advances in robotics and software.

So Krugman doesn't even flirt with robot cultism. He is not confused about what is science fiction and what is not. His love for science fiction never contaminates his thinking, as you see with the likes of Kurzweil (who is smart but clearly crazy). Krugman is not making a "futurological acceleration thesis" that relates to the real world. In today's NYT column, titled "Is Growth Over", he again explains his position, which is completely in line with my earlier interpretation. The column makes clear beyond a shadow of a doubt that Krugman's observations about robots and growth are about the here and now, and the claims steer clear of robo-cultism.

(Aside: I agree mostly with what you've written in an effort to deflate digital-utopian hype, although I don't see much evidence that transhumanists are at all dangerous. The futurological drivel on io9 is essentially marketing material and not really deserving of analysis of any sort. Transhumanists make some scientific claims that seem to hold up to casual scrutiny, and therefore the strongest arguments against transhumanism should be scientific arguments. However, many futurological claims take the shape of "X is not outlawed by the rules of physics, therefore X will happen Any Day Now". I hope they're not holding their sooper-breath.)

With "dystopian" I meant to say that we are not at all on track towards a world "with more renewable energy but also more energy efficiency, a world with more sustainable polyculture but also population decline (requiring little more than the empowering of women), a world with more equity-in-diversity via basic healthcare, education, basic income and steeply progressive taxes". I said "This means Krugman is painting a dystopian picture," and Krugman says in his column "On the other hand, if income inequality continues to soar, we’re looking at a dystopian, class-warfare future — not the kind of thing government agencies want to contemplate.", proving my interpretation exactly right. I realize that the column didn't have a section where the reader is asked to think about a "fantasy technology scenario", which is of course a big part of your original critique. I think this demonstrates that the science fiction bit was immaterial in terms of the economic consequences and policy implications, and therefore I still reject the idea that Krugman was making a "futurological acceleration thesis" at any point.

Thanks for your response and for your blog in general. I've been a semi-regular reader for the past couple of months and my opinions have shifted on various issues as a result.

Dale Carrico said...

Uh, Big Data isn't anything like AI for anybody who isn't looking for it to be. It doesn't put us in a position to cheer that we are on our way to sexy slavebot utopia OR to shriek that we are on our way to bubble-dome trillionaires gassing the now superfluous 99%. To say otherwise is to ignore engagement with actual problems, including actual technodevelopmental quandaries concerning the socioeconomic dislocations of real-world automation, the better to indulge in the daydreams and nightmares of Robot Cultists. In his piece Krugman made some fairly conventional futurological moves and ended up drawing flawed conclusions in consequence. I blame his flirtation with the conventions of futurological discourse for a discussion of his that wasn't up to his standard. That was my critique and you haven't dissuaded me from it, though the exchange has been enjoyable.

Dale Carrico said...

I find I want to take issue with another thing you said in this latest response, however. I deny that transhumanists and other superlative futurologists make any claims that are sound scientifically and that are UNIQUE or ORIGINAL with them. That is to say, nobody ever needed to join a Robot Cult to affirm the things they say that are consonant with consensus science. (That even a Scientologist has figured out how to run a nice bath is no argument for Scientology, for example.) What is wanted if one is trying to grasp superlative futurology or weigh its merits is to understand its actual content and contribution, surely?

While it is true that Robot Cultists believe all sorts of things that reasonable people do, one also notices that they fervently believe in outcomes that are very marginal, very far from the consensus expectations of actual experts and scientists who have devoted their lives to the fields indispensable to the substance of those claims -- outcomes the belief in which is indeed UNIQUE and often ORIGINAL to the transhumanists.

While we can all agree that sometimes, very rarely, bucking scientific consensus has put a dedicated researcher in a position to author a paradigm shift in our understanding of the world, the thing to notice about the Robot Cultists is that they are attracted not to one, not to two, not to three, but often to dozens of marginal beliefs, that in each case it is easy to see the passion of their belief connects quite clearly to wish-fulfillment fantasies they often indulge openly (invulnerability! omniscience! immortality! superabundance! super powers!), that few to none of them have any stature in the fields they buck or are likely to achieve the stature to actually substantiate a paradigm shift.

Let me be clear, not only do superlative futurologists believe incidentally in some soundly scientific facts nobody has to join a Robot Cult to entertain seriously, but what superlative futurologists go on to do WITH at least some of those sound beliefs is treat them as the foundation from which to validate a host of absolutely marginal beliefs all of which are indeed definitive of Robot Cultism.

What matters in my view are not their scientific and unscientific beliefs, but the discursive operations that are central to their mode of belief, that both shape and connect their assertions over and over and over again. I believe that the most powerful critique of the transhumanists is a critique aimed at transhumanism as a discourse and as a characteristic subculture soliciting identification, and that is precisely where I lodge my own critique.

I happen to think squabbling with Robot Cultists over the scientific specificities is often a waste of time. Few are sufficiently trained in science to weigh the implausibility of their cherished outcomes in scientific terms. To go down the rabbit hole with them debating the Robot God Odds is rarely productive. This is not because they have any substance on their side but because an appearance of substance functions in their discourse to provide the pretext for faith. A debate with an actually knowledgeable scientist confounding their assumptions and aspirations will most likely function for them, if anything, as a ritual substantiating the scientific seriousness of their faith. The error in such a case is actually happening on the part of the SCIENTIST, who has misrecognized the nature of the phenomenon with which she is grappling.

In pretending that a pseudo-scientific faith-based initiative in the service of techno-transcendent wish-fulfillment fantasies can be adjudicated scientifically with the Robot Cultist, one concedes a scientific substance to their aspiration which is the only concession they ever needed and from which they gain all they could ever get from the transaction in any case. Conceding the existence of angels, one is left only to debate angels on a pinhead as an outsider with a monk who has devoted his life to these games.

Dale Carrico said...

I don't see much evidence that transhumanists are at all dangerous.

I urge you to read Ten Reasons To Take Transhumanists Seriously and then, if it does not change your mind, tell me why not.

The futurological drivel on io9 is essentially marketing material and not really deserving of analysis of any sort.

Do you think there is any reason to understand, for instance, the rhetoric through which relentlessly deceptive advertising forms mislead, distract, and abuse consumers?

As somebody who teaches critical thinking skills to undergraduates I find I am very concerned to arm students with tools to engage critically with the actually prevalent rhetorical forms that suffuse our public discourse, most of them taking marketing and promotional forms, not just obvious commercials and ads, or self-serving rhetoric from politicians and pundits, but press releases, think-tank position papers, advertorial content in pop journalism, dating profiles, sartorial choices, armchair editorializing treating elite-incumbent interests as natural, and, yes, as you say, the sorts of broadly marketing hype one finds in fandoms like the ones solicited and remarked on at io9.

Dale Carrico said...

[M]any futorological claims take the shape of "X is not outlawed by the rules of physics therefore X will happen Any Day Now". I hope they're not holding their sooper-breath.

Notice that indefinitely many individual outcomes may be logically compatible with what we know now, yet not logically compatible with one another, and so this rationale is not even logically sound. It is a curious kind of "scientific" belief, too, that bases itself on the compatibility of a belief with the current state of ignorance in respect to outcomes that would demand considerable expansions of actual knowledge to be accomplished. It is not surprising that this is a kind of belief that dispenses altogether with the practical, political, and profitability considerations that are also indispensable to the actual plausibility of an outcome. I propose that such futurological "arguments" for superlative outcomes are more than uncompelling on their own terms; they also provide compelling evidence that futurologists do not grasp the nature of technoscientific change or technodevelopmental struggle at a fundamental level.

jimf said...

Anonymous said...
> . . .
> I don't see much evidence that transhumanists are at all dangerous.

Well they're not dangerous in the sense that they're (presumably not)
organizing to hijack planes or poison subway passengers, or planning assassinations
of public figures. (Though **some** among them are alleged to
be "preppers" -- a term I learned in the aftermath of Sandy Hook;
i.e., survivalists and anticipaters of the apocalypse who amass canned goods,
medical supplies, and firearms to defend themselves and their enclaves against
the coming chaos. **Allegedly**, as Kathy Griffin would say. ;-> ).

They're not (yet) dangerous in that they haven't quite attracted
the money and clout that would allow them to harass public critics
(such as Dale) by legal means, as the Scientologists were once
famous for doing (though even they have had to scale back their
expectations of being able to silence all public criticism in
these days of the Internet). Not that some of them haven't occasionally
made noises along those lines.

They **are** "dangerous" in the sense that all cults and charismatic gurus
are dangerous -- they suck people into their orbit by seeming to
offer an easy way out of life's existential quandaries, at the cost
(at the very least) of short-circuiting critical thought in their "victims".

Also, both the political ideology and the psychology that pervade >Hist circles are
skewed toward the antisocial end of the narcissistic/psychopathic spectrum.
The Zeitgeist feels a lot more like Ted Turner than like Jane Fonda. ;->

And this largely-unexamined iceberg of unpleasant arrogance and narcissism,
(denied) politics, and wishful thinking fancies itself "scientific" and "rational" --
that's extremely irritating at best, even if it's not outright "dangerous".
It's certainly deserving both of critical deconstruction and
entertaining mockery.

And even in the fields where the >Hists claim to know what's in store -- the
presumed bases for the expectations they have concerning artificial intelligence,
molecular nanotechnology, and longevity medicine -- they exhibit a
fixed orthodoxy which actively **resists** uncongenial input
from experts who are "outsiders" (that is, experts who have no interest
in reinforcing the wishful thinking of the >Hist true believers).
It is really quite astonishing how shrilly these people have responded
to domain experts who have attempted to contribute a bit of sense
and perspective to the forums and mailing lists that amount, in the end,
to little more than mutually-reassuring hype fests.

Dale Carrico said...

I agree with most everything Jim has said above. The singularitarian AI folks have indeed attracted some real scratch, and critics like Jaron Lanier are more high-profile than I am (he's also a better writer) concerning the dangers of what he calls "cybernetic totalism." I have commented on geo-engineering discourse as a corporate-military rationalization for parochially profitable misdirections of environmental politics that is possibly one of the most dangerous organized ideological formations imaginable, that is to say, if you believe as I do in the catastrophic consequences of anthropogenic climate change. I have also commented on the dangers of futurological "existential risk" discourse distorting funding and policy priorities away from the actually more proximate and urgent problems and risks/costs that actual majorities suffer. I also think Robot Cultists (among many others) contribute ongoing hysteria and hyperbole to public deliberation over technoscience and development questions in generally unproductive ways. Is that starting to look "dangerous" to anybody but me?