Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, August 04, 2013

Very Serious Futurology With Robin Hanson

Upgraded and adapted from the Moot in answer to a question posed by one "Alex":
Hi Dale. I'm curious what you think of this. I'd say Robin Hanson gives a valid argument for expecting a singularity in the next few centuries based on trends in world GDP. He puts the likelihood of this happening at 50% to 75%, which I'd about agree with, along with his reasoning method.
At a glance, Hanson's arguments seem to me ridiculous six ways to Sunday. This conclusion is not based on close reading -- I'm grading mid-terms right now. In the irrational exuberance of the roaring nineties plenty of transhumanoids were high-fiving each other over the inevitable Long Boom and Dow 100,000 with the rest of the high-tech assholes, but one would like to think the rest of us have learned some lessons from that idiot tide. Hanson's GDP-to-techno-transcension argument seems flabbergasting in its denialism of history, not to mention little pesky issues around, you know, anthropogenic climate change and weapons proliferation and planetary precarity -- but, hey, they say things look pretty swell from plutocratic perches even so.

Of course, for an argument to be valid its conclusions need only follow logically from its premises. Stipulate whatever the hell you want but follow the bouncing ball! Whether the argument is sound or relevant is another question. "Likelihood estimates" from futurists over the Robot God Odds always seem to me akin to perspiring monks contending passionately over angels on pinheads. Even a cursory examination of the piece that has impressed you reveals the usual futurological pathologies aplenty. Not to put too fine a point on it, declarations of a fifty to seventy-five percent likelihood of robocalypse or techno-transcension seem to me more or less like cutting the cheese, even when they are published in IEEE Spectrum.

You say you agree with Hanson that it is more likely than not (not exactly a prediction one can hang a hat on) that there is going to be a Singularity -- but what do you even mean by the term "The Singularity" about which you have formed such confident expectations? The emergence of nonbiological entitative superintelligence eventuating in a history-ending Robot God? Greater-than-human prosthetic-assisted collective intelligence that is different in some way from those forms already expressed in organizations, divisions of labor, and so on for some reason? Competing superintelligences, non-biological, biological, assemblages, collectivities? And just what is it about the serially failed state of good old fashioned artificial intelligence research programs and the current state of the art that makes you think such paradigm shattering developments are on the horizon? Happy to agree your "Smart Card" is truly "smart" after all? Pinning Big Hopes on Big Data are we?

Even worse than the usual futurological AI tomfoolery, Hanson's argument presumes brain emulations ARE people -- does it likewise presume photographs of people are people? How rich does the scan of you have to get, how many people does your angel avatar have to fool into accepting it as you, for you to concede it's you as well? Not that any of this is actually happening outside of science fiction, but are people who talk this way even talking in a way that still deserves to be called literal and not figurative? Hanson presumes "robots" that own wealth and compete and make wars on humans as key players in his speculations. But all such characters are pure fantasy! I put their "likelihood" at 14.2 percent in seventy-six years and four months -- not really, but surely you are impressed by the precision of my formulation. Come on, let's debate it now like real scientists! Hey, not that anything we say connects to actual reality! The usual confusion of science fiction with science fact and science policy. Anyone can play (but with consequences, about which more later).

It is conventional futurological flim-flam to demand one's wish-fulfillment fantasies be treated as serious policy discourse while at once shunting off all the evidentiary and analytic basis for rigor onto a horizon displaced twenty years into the future. Hanson is savvy enough to consign his Very Serious projections to centuries distant from our own and to refer to actual social science research that he then applies to loosely conceived projections scarcely related to the terms of the research he cites. But there are still no actual substantive reasons to accept his assumptions or treat his projections as relevant to any real-world considerations. Again, even by its own lights the argument is bedeviled by questionable premises and definitional fudges. "Singularity," "Robots," "Uploads," even "Superintelligence" are playing out here as loose fancies pretending to be facts or terms of art when they simply are not -- at most these are subcultural signals in the marginal fandom for that most derivative and impoverished genre of science fiction, the futurological scenario, which might indeed be of real academic interest: for a pop culture ethnography.

Conventional automation hasn't displaced much labor yet, says Hanson? And yet productivity gains associated with automation, organization, transportation, and communication developments -- in the context of the dismantling of organized labor in the US and the outsourcing of labor to overexploited regions with low to no labor protections -- have facilitated an extraordinary concentration of wealth, amplified precarity across the globe, and eviscerated social mobility and buying power for all but the rich. Labor has indeed been displaced, or replaced by placeholder jobs policing docility in majorities from whom the occasional photogenic or gifted exception can be plucked up for the gratification of plutocrats.

This state of affairs will continue either until the world perishes from the use of weapons of mass destruction in conflicts exacerbated by ongoing climate catastrophe, I suppose, or until revolts reverse these terms. Who knows whether such revolts will be convulsive and easily appropriated popular uprisings enabling the rise of authoritarian formations little different from the plutocracy they displace or will result from social democratizing reforms that are more sustainable? In any case, only the latter response can possibly be equal to present climate catastrophe and resource descent and there is less reason to think that response will happen in time with every passing month. Resource wars and climate catastrophe yielding a breakdown of planetary society and ending technodevelopmental advances in most fields seems to me the more likely result on the assumptions favored by futurologists. Whatever their disagreement about its definition, or my disagreements about its sense, I notice that none of these outcomes look much like anything they tend to describe as "the singularity" in their glossy brochures.

I believe that substantial technodevelopment already stalled a generation ago -- and to the extent that progressive technodevelopment involves not only technical advances but a more equitable distribution of the costs, risks, and benefits of technoscientific changes among their actual stakeholders, I can't say this development ever really got off the ground if we assume the relevant planetary vantage on the phenomenon. Digital enthusiasm and medical breakthroughs are gratuitously over-hyped by a public discourse now mostly reduced to marketing deception without end or exception. How much of what gets passed off as GDP even refers to anything real in the first place? Financial industry fictions and crap stuffed in a landfill in a generation-long global neoliberal circle jerk have been celebrated as ballooning GDP. Now Hanson wants to assume that this is going to accelerate us into Holodeck Heaven or NanoHogwarts? Science! Futurology and the batshit extremities of futurology represented by transhumanism, singularitarianism, techno-immortalism, greenwashing geoengineers, and so on are simply the froth on the cauldron of such marketing denialism and fraud.

I am actually hopeful that social democracy and environmental politics might indeed marginalize the neoliberal/ libertopian/ Republican madness in the US in time to build a sustainable, equitable, and diverse social democracy and incubate renewable industries in collaboration with Europe, India, and South America in time to save the world from likely destruction. It's worth trying at any rate.

In saying this you may notice that I do not pretend to be making a scientific prediction or diagnosing futurological "trends." There is no such thing as an historically agentic or otherwise forceful "trend." "Trends" are retroactive narrative constructions at their best, but usually their retroactivity is falsely projected as if from the vantage of a non-existing superior height (fashion trends announced from on high) or from the future (which is inhabited by no one at all) in which case they are always prescriptions masquerading as descriptions.

By the way, I do not think any true believing Robot Cultist or even Very Serious academic/think-tank Futurologist is a reliable ally in any of the work to accomplish a world worth living in or capable of sustaining an equitable-in-its-diversity secular technoscientific civilization. Almost all futurology functions as apologiae for distraction from organized resistance together with encouragements for increased consumption and celebrations of corporate-military plutocrats and their norms and institutional forms. There is no definition of "singularity" I am aware of that looks much like any planetary outcome that seems to me remotely relevant to our actual circumstances.

The conversation continues on in the comments -- definitely read on and make a contribution.

10 comments:

erickingsley said...

If these trends continue...

http://i.imgur.com/OHPSy.jpg

Alex said...

Hanson means by "singularity" a period where world growth rates jump at the largest historical scales. According to him there are at least two previous jumps, agriculture and the industrial revolution.

His argument is that maybe another jump will happen. This doesn't depend on any specific technology like nonbiological superintelligence, greater-than-human prosthetic-assisted intelligence, and so forth. You can plug in whatever feels best. He does pick artificial intelligence and that seems like the best guess to me, just because I can't think of anything else that could possibly drive growth rates at the required levels.

He is extrapolating a very small series. The doubling times of the world economy during the three "growth modes" of hunting and gathering, agriculture, and industry were 200,000 years, 900 years, and (today) 18 years. That is why his confidence level is 50-75%. With such a small series your confidence is not high but it's not zero either.
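For what it's worth, the extrapolation being described can be sketched in a few lines. This is a rough illustration only: the doubling-time figures are Hanson's stylized numbers, and taking the geometric mean of the mode-to-mode speedups is one arbitrary way to project such a short series.

```python
# Sketch of the "growth modes" extrapolation (illustrative only; the
# doubling times are Hanson's stylized figures, not endorsed data).
from statistics import geometric_mean

doubling_times = {
    "hunting and gathering": 200_000,  # years per economic doubling
    "agriculture": 900,
    "industry": 18,
}

times = list(doubling_times.values())
# How much faster each mode doubles than the last: ~222x, then ~50x.
speedups = [times[i] / times[i + 1] for i in range(len(times) - 1)]
typical_speedup = geometric_mean(speedups)  # ~105x

# Projected doubling time of a hypothetical next mode.
next_doubling_years = times[-1] / typical_speedup  # ~0.17 years
print(f"speedups between modes: {[round(s) for s in speedups]}")
print(f"projected next-mode doubling time: {next_doubling_years:.2f} years")
```

Run as written, this projects a next-mode doubling time of roughly two months, in the neighborhood of what Hanson claims for an emulation economy -- which also shows that the whole exercise hangs on treating three stylized data points as a series.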

If I understand you right, your objection is that there is no causal mechanism that would allow you to extrapolate this series--that it is just playing with numbers or "loosely conceived." And that's not a bad objection. But I agree with Hanson that:

"We would be fools if we confidently expected all patterns to continue. But it strikes me as pretty foolish to ignore the patterns we see."

In other words, things being equal we should expect past patterns to continue. This is true for individual technologies too. Past technologies have always plateaued eventually. This is why Kurzweil's arguments based on simply extending Moore's Law are suspect. He extrapolates a lesser pattern, the growth of an individual technology, without accounting for the larger pattern that individual technologies level off. Hanson's argument seems better because there is no larger pattern than world GDP over all human history. That's all the data we have.

Now maybe we know some specific reasons why this pattern won't continue. Since we don't know much about why this pattern exists, I agree we should pay attention to such reasons if they're convincing.

I just don't think they are.

Sure, there are many challenges today such as global warming, nuclear proliferation, biodiversity loss, and so forth. Global warming is predicted in the Stern Review to reduce GDP by 5-20%. But I don't see why these counter the overall historical growth trend. Population growth and more technology increased GDP 3700% in the 20th century. I worry more about nuclear war, but the risk is down since the end of the Cold War.
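As a back-of-envelope check on the figures in this paragraph (my own arithmetic, assuming a "3700% increase" means a 38x multiple over a century, and using the top of the Stern range):

```python
# Back-of-envelope comparison of the 20th-century growth figure against
# the Stern Review's 5-20% GDP damage range. The 38x multiple is my
# reading of "3700% increase"; everything else follows from it.
import math

growth_multiple = 38                         # 3700% increase => 38x over 100 years
annual_rate = growth_multiple ** (1 / 100) - 1   # implied ~3.7% per year

# Years of trend growth erased by a one-off 20% GDP loss.
years_lost = math.log(1 / (1 - 0.20)) / math.log(1 + annual_rate)  # ~6 years
print(f"implied annual growth: {annual_rate:.1%}")
print(f"a one-off 20% GDP loss cancels about {years_lost:.1f} years of such growth")
```

On these assumptions a one-off 20% loss erases only about six years of trend growth, which is the comparison being gestured at here.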

It's also true that US productivity growth was lower during the last forty years than in the previous century, as Robert Gordon says. I don't know, however, why we would focus mainly on the past forty years when we have data for the past million.

Futurism can get pretty bad, I'd agree. I actually sometimes worry that we can't know anything about the world in, say, 2050. That's about what David Friedman thinks: radical uncertainty. I'm not ready to concede that, though. My impression is that you aren't either. True?

I'd say the best way to prepare for the longer-term future is adopting policies that would help many different widely ranging scenarios. My favored catch-all policy is global coordination or internationalism. That seems likely to help with many problems.

Dale Carrico said...

Hanson means by "singularity" a period where world growth rates jump at the largest historical scales.

Hanson is writing on a futurological topic using a futurological term but then defining it in a way no futurologist does -- nobody needs the word "singularity" to talk about large scale growth. He seems to want to make sloppy conventional analysis seem more interesting than it is by aping futurology, but then seems to want to make futurological analysis seem more reasonable than it is by aping more conventional political economy. It's sleight of hand.

His argument is that maybe another jump will happen. This doesn't depend on any specific technology like non-biological superintelligence, greater-than-human prosthetic-assisted intelligence, and so forth. You can plug in whatever feels best.

"Maybe something will happen" actually isn't an argument. Indifference to the actual causal agent of this change would make it even less of an argument.

He writes about AI and I took him at his word. You say it is your own best guess.

But I'm afraid that before you put the cart before the horse claiming strong AI is going to change history, you're going to have to come to terms with the fact that there is no strong AI, that everybody who expects otherwise is always wrong and always perfectly confident despite always being wrong, and that AI theory is suffused with facile algorithmic/computational metaphors and suspended somewhere between indifference and hostility to the biologically incarnated and socially situated exercises of actually existing intelligence in the world. Marketing and popular culture may be full of "smart" artifacts and robot persons -- but reality isn't. This matters.

He is extrapolating a very small series. The doubling times of the world economy during the three "growth modes" of hunting and gathering, agriculture, and industry were 200,000 years, 900 years, and (today) 18 years. That is why his confidence level is 50-75%. With such a small series your confidence is not high but it's not zero either. If I understand you right, your objection is that there is no causal mechanism that would allow you to extrapolate this series--that it is just playing with numbers or "loosely conceived." And that's not a bad objection. But I agree with Hanson that: "We would be fools if we confidently expected all patterns to continue. But it strikes me as pretty foolish to ignore the patterns we see."

These are the alternatives? Either NO CHANGE, when history is obviously endlessly dynamic, or SINGULARITY, on the construal of a handful of Robot Cultists who can't get their own definition straight and who presume to know what the effects of not-existing and very possibly never-existing sooper-devices will be? You'll forgive me, but the dichotomy is patently false and frankly stupid.

Like the Kurzweilian "Laws of Accelerating Returns," this is an absurdly lightweight accounting of a synoptic sweep of thousands of years of complex history reduced to a ridiculous just-so story.

I disagree that the phrase "growth modes" -- presumably referencing "productivity doublings" (do you even know what substantiating such a claim would LOOK LIKE?) associated with glib fables of duration, "hunting and gathering" (a thing "lasting" 200,000 years, presumably), "agriculture" (another one thing, this time lasting 900 years), "industry" (a couple of centuries), then "singularity" (the blink of the Robot God's eye, declare some of the cybernetic faithful) -- picks out anything real: not one of these is usefully characterized or periodized at this level of generality. This is all perfectly silly.

Dale Carrico said...

In other words, things being equal we should expect past patterns to continue.

For example the human pattern of would be gurus bamboozling people by telling reassuringly simple tales that give them a false sense of purchase on incomparably more complicated realities -- especially with the pay off that buying the false narrative makes it seem as though great profit or even personal transcendence happens to be justified by the narrative?

Sure, there are many challenges today such as global warming, nuclear proliferation, biodiversity loss, and so forth... But I don't see why these counter the overall historical growth trend.

You're right, dead people in a dead world, or scattered settlements in holes eating fungus and warming themselves by burning their own poop will have plenty of time to keep the techno-transcendence program online.

It's also true that US productivity growth was lower during the last forty years than in the previous century, as Robert Gordon says. I don't know, however, why we would focus mainly on the past forty years when we have data for the past million.

What you are calling "data" about a million years of GDP growth is the wooliest imaginative retroactive construction in the service of a wish-fulfillment fantasy.

You should note, by the way, that the "acceleration of acceleration unto techno-transcendence" talk originated in the very period you now admit was contra-indicated by factual reality -- its cheerleaders all the while pretending to be describing ongoing accelerating progress rather than prescribing it.

Why, one might almost describe the whole discourse as a desperate compensation for faith-based technophiles deeply invested in an ideology of brute-force technodevelopmental solutions to what are in fact political problems for which more democracy and more equitable redistribution are the only real solutions, solutions disapproved by elite-incumbent corporate-militarists who always preferred techno-triumphalism that cost them nothing.

I'd say the best way to prepare for the longer-term future is adopting policies that would help many different widely ranging scenarios.

The best way to prepare for the longer-term future is to junk the scenario spinning, which is just a form of marketing PR for incumbents anyway, and apply our intelligence and resources to actually shared problems in the most equitable way for the diversity of stakeholders to those problems. In my view, futurological methodologies -- at their BEST, I'm not even talking about their pathological expressions in the Robot Cult archipelago at their extreme edges -- are profoundly distortive of the deliberative forms actually equal to our problems.

Robert Gross said...

If Alex is who I think he is (and really, the people who care about these things are sort of like a web-ring; you just keep seeing the same names and people), then you're arguing with someone who is professionally invested in transhumanism being real and the singularity being likely.

So you might as well be arguing with a priest about theism. But I will say this -- I am open to arguments about singularitarianism and transhumanism both if I see them in mainstream peer-reviewed journals. To anyone reading this who aspires to such, I say that I certainly have much more sympathy for you than for, say, the cranks at LessWrong.

jimf said...

> [R]eally, the people who care about these things is sort of
> like a web-ring; you just keep seeing the same names and people. . .

This is certainly true, and it's been going on (not just on the Web,
of course) for many decades.

George Dvorsky, in his letter of complaint to John Bruce in 2006
(quoted at
http://amormundi.blogspot.com/2010/01/transhumanists-are-not-just-wrong-they.html )
acknowledges this, and adds:

"A short list of highly respected scientists who agree that a
posthuman future awaits us include Steven Hawking, Sir Martin Rees,
Michio Kaku, Nick Bostrom, Hans Moravec, Marvin Minsky, and James Watson.
And there are many, many others; I urge you take a look at the citations
in Kurzweil's Singularity book to see how broadly these ideas have disseminated
throughout academia and research labs around the world."

This is true too, but it's not about the **science** (if any) that
these people do, it's a modern religion, or religion substitute.

Jaron Lanier (who has rubbed shoulders with many of these people)
writes amusingly about this in his latest book
_Who Owns the Future?_ (and particularly in the section
"Fourth Interlude -- Limits Are for Muggles").
He also talks about the strong influence on Silicon Valley of
"Eastern spirituality", est (and its successor "the Forum"),
and George Gurdjieff.

jimf said...

From Jaron Lanier's _Who Owns the Future?_
-----------------
Many of the top scientists, politicians, and entrepreneurs attended
est or similar happenings. Terms like _self-actualization_ became
ubiquitous. You'd develop yourself, and your success would be
manifest in societal status, material rewards, and spiritual
attainment. All these would be of a piece.

It's hard to overstate how influential this movement was in Silicon
Valley. Not est specifically, for there were hundreds more like
it. In the 1980s the Silicon Valley elite were often found at
a successor institution called simply "the Forum."

The Global Business Network was a key, highly influential institution
in the history of Silicon Valley. It has advised almost all the
companies, and almost everyone who was anyone had something to
do with it. Stewart Brand, who coined the phrases "personal
computer" and "information wants to be free," was one of the
founders. Now Stewart is a genuinely no-nonsense kind of guy.
So is Peter Schwartz, who was the driving force behind GBN
and wrote _The Art of the Long View_. And yet the ambience of
the New Age was so thick that it helped define GBN. It was
inescapable. . .

Meanwhile, the world of marketing was being reinvented at the
Stanford Research Institute. This is the same SRI that employed
Doug Engelbart, who first demonstrated the basis of person-oriented
computing in the 1960s. More recently SRI spawned Siri, the
voice interface used in Apple products.

SRI had a unit called VALS, for Values, Attitudes, and Lifestyles,
which was for a while the guiding light of a transformation
in corporate marketing. (The use of the term _transformation_
was long a signal of the technocratic/spiritual New Age.
It has been mostly replaced by _disruption_ since the Singularity
replaced Gurdjieff as the spiritual North Star.) . . .

Around the turn of the century, with the rise of Google, a new
merger of the techie and the New Age streams of Bay Area
culture appeared.

For some time, at least since those dinners at Marvin Minsky's
house, there had been talk of every manner of amazing
future tech revolution. Maybe we'll disassemble our bodies
temporarily into small parts that will be easier to launch
into space, where we'll be reassembled and then float naked
except for a golden bubble to shield us from radiation.

This was an utterly typical idea. But if there were anything
actionable, it would be in the realm of engineering. Could you
really sever and then reattach a head?

After the rise of Google, the tenor of these speculations
changed in Silicon Valley. Now the top-priority action item
was perfecting one's mentality, one's perspective and self-confidence.
Are you really enlightened enough to "get" accelerating change?
Are you really awake and aware, preparing for the Singularity?

The engineering will come about automatically, after all.
Remember, the new attitude is that technology is self-determined,
that it is a giant supernatural creature growing on its own,
soon to overtake people. The new cliché is that today's
"disruptions" will deterministically lead to tomorrow's
"Singularity."

jimf said...

And C. S. Lewis recognized that the religious impulse embodied
today in the notion of the Singularity has its roots as far back
as the 19th century:

"The central idea of the Myth is what its believers would call 'Evolution'
or 'Development' or 'Emergence'. . . I do not mean that the doctrine
of Evolution as held by practising biologists is a Myth. . . It is
a genuine scientific hypothesis. But we must sharply distinguish between
Evolution as a biological theorem and popular Evolutionism or
Developmentalism which is certainly a Myth. . .

The clearest and finest poetical expressions of the Myth come before
_The Origin of Species_ was published (1859) and long before it had
established itself as scientific orthodoxy. . . Almost before the
scientists spoke, certainly before they spoke clearly, imagination
was ripe for it."

See the comments at
http://amormundi.blogspot.com/2010/05/robot-cultists-have-won.html
(the two segments I posted there got reversed).

BTW, after you've had a taste of contemporary transhumanism,
C. S. Lewis's _That Hideous Strength_ (published in -- what --
1948?) is an absolute hoot to re-read. There are some
choice quotes at:

http://amormundi.blogspot.com/2009/02/robot-cultists-getting-too-nice-by-half.html

Alex said...

Saying global warming might lead to disaster is using trends to create a scenario, or at least a prediction. We measure the temperatures over many years and the UN makes its models.

It's all about which trends you pick.

The question that interests me is whether Hanson's choice of using world GDP (maybe along with the evolution of animal brains) is somehow an arbitrary choice of trend or whether there is some specific, compelling and more recent phenomenon that allows better predictions.

I hope there is, because I think a singularity is likely to turn out badly, unlike Hanson.

I worry that your faith-based technophiles are actually not interested in the same problems as you.

Detailed scenarios do seem pretty useful as showy displays to interest people in a topic. Or marketing PR as you say. The abstract is difficult to grasp but concrete scenarios are easier, though more arbitrary.

I would agree that people putting too much stock in just one scenario seems a common distortion of thinking.

If Alex is whom I think he is

I'm not.

Dale Carrico said...

Saying global warming might lead to disaster is using trends to create a scenario or at least prediction. We measure the temperatures over many years and the UN makes its models.

I actually disagree with this. This is something I mean to elaborate in a more systematic fashion when I eventually find the time to manage it. I believe the climate model is a false analogy on which contemporary futurology especially depends to pretend it is a legitimate quasi-scientific methodology rather than a rather derivative, clumsy kind of science fiction literature conjoined to hyperbolic marketing forms even more than usually akin to active deceptions.

Given the complexity of ecosystems -- and their complex interactions from idiosyncratic local to planetary scales -- climate science provides a good fudge factor for futurologists to exploit in this way. It isn't accidental that climate change denialists are able to undermine scientific consensus in the field by displacing the debate onto a culture war terrain. Nor is it accidental that the scale of interventions futurologists pretend feasibly to propose in their geo-engineering yackety-yack would be less predictable in their actual effects -- apart from the obvious profits that would accrue to the polluting plutocrats for whom these proposals are actually made -- than the state of the weather already is.

Every legibly constituted discipline produces models of phenomena, every legibly constituted discipline has a foresight dimension. This is because knowing better how phenomena behave under various conditions facilitates more practically useful interactions with them, and leads us to form expectations and make plans accordingly.

But "trends" are **narratives** more than models, strictly speaking, it is not scientists but English lit majors and PR muckety-mucks who can explain how they operate: they solicit identification the better to peddle forms of consumption.

Futurological scenarios inevitably circumvent historically situated social, cultural, and political dynamisms while purporting to model these dynamisms in relation to physical phenomena. Scenario spinning superficially skims the objects of a host of disciplines without the least mastery or often even grasp of the specificities of those disciplines -- it is an anti-disciplinarian pretense of inter-disciplinarity (a very slippery but indispensable academic aim futurology isn't remotely fit for).

The fact that you think it may actually be interesting that Hanson is pretending "GDP" (look, just pause over what the actual letters stand for, embed them in their institutional historical context, and you will be appalled at yourself for entertaining this idiocy) provides an actually real analytic vantage over a million years of history reveals you have allowed futurology to lead you profoundly astray.

I don't disagree that anything can be "trendified," that you can "pick your trend" and then spit out talk that is legible to others who indulge this nonsense, but that is far from saying that it makes the least sense to make this methodological move if one wants to actually understand the world or facilitate sustainable equitable outcomes.

And, no, scenario spinning doesn't become better or more useful (except of course as a sales pitch) if you entertain and invest in four fantastically impoverished alternative "futures" rather than one. There are better ways of interesting people in a topic that actually affects them, but I agree with you that extended metaphors, little thought experiments, and anecdotal sketches have their place in mobilizing affect and concretizing abstractions. I teach rhetoric, after all. But I know better than to pretend rhetoric a way to grasp the substance and stakes of research or policy outcomes -- rather than effectively communicate those stakes and change conduct to facilitate ends once they have been determined by other, better means.