-- I would call our beliefs cogno-utopian and cogno-dystopian."
Apparently some singularitarians are whomping up some new terminological distraction to peddle their made-up bullshit to the rubes.
Despite Michael's pedantic preference for his pet-term of the moment, "cogno-utopian," it is mildly interesting but not at all surprising to note that in the paragraph immediately following the aria to "cognitive growth" he provides after proposing this crucial terminological shift, he continues on by talking about how "technology" (construed in the usual completely overgeneralized and politically-"neutralized" fashion) will solve the world's problems. And so, within a few sentences he has drifted back into precisely the techno-utopianism I was talking about in the first place. But, fine, who cares, "cogno-utopian" it is!
It is here that Anissimov starts speaking about matters closer to his heart:
The source of the change is the greater intelligence. The technology the greater intelligence produces would technically just be a second-order effect. Human civilization was not caused by technology, it was caused by cognitive improvement. Cognitive improvement will once again transform the planet, this time from human civilization to transhuman civilization. And what an amazing civilization it could be.
How splendid heaven will be come the transcension, intones the Robot Cultist in the usual manner.
There really is nothing to say to this sort of thing, probably, finally, but to pat the True Believer's head in a kindly way and say, "yes, dear."
It may seem unfair to those who sympathize with superlative techno-utopian rhetoric for me to bring "heaven" into this discussion at this point, but I do want to point out that Anissimov has himself introduced the curious notion of what he calls "the change" here. Not "change," but "the change." And he thereby introduces into the discussion for the first time the tonalities of technodevelopment recast as the approach to an "Event," of no doubt earthshattering dimensions. This is absolutely typical of singularitarian rhetoric, but no less deranging for all that. Once technodevelopmental changes assume the hyperbolic and transcendent coloration of "the change," it is tiresomely inevitable that paeans to the amazing transhumanization of civilization on the horizon will arrive just a few sentences thereafter.
It is bleakly amusing I suppose to recall a much longer and more intensive exchange I had with Anissimov a few years ago. In the comments thread to his critique of my position I wrote:
You say I am criticizing certain “technologies,” which you list as “mind uploading, molecular manufacturing, [and] superintelligence,” but none of these are technologies at all, but very particular abstract idealizations with which you have come to personally identify. These are discourses of technology, not technologies, and while they function as ends to help organize your advocacy (whether productively or not is an open question), they also express underlying assumptions about technoscientific change (whether usefully or not is also an open question), and symptomize in their superlativity what seem to me deeper irrational passions that often accompany technology talk, worries about finitude, mortality, control, the force of chance in human lives, the demands of diversity, and so on.
Many people identify with the vision put forth by Eric Drexler. It’s a specific technological goal, not an abstract identification target.
The superlativity aspect isn’t symptomatic of whatever crazy pathology you try to project upon the advocates of these technologies. They fall out of the specs of the technologies themselves.
That is to say, these "technologies" that do not exist apparently have "specs" that the big-brained Robot Cultists are simply "reading" when they assign to them super-predicated historical outcomes in human life, the arrival of a superintelligence that solves all problems, the arrival of a superabundance that circumvents all conflicts, the arrival of a superlongevity that dispenses with death, disease, and vulnerability, and so on. "Falling out" of "the specs" in the manuals only they can read these hardboiled technicians find deliverance of the very same infantile wish-fulfillment fantasies that (no doubt completely co-incidentally) priestly adepts always claim to be able to "read," even if nobody else can, from their canonical source material to the delight of the rubes for which "talent" they "earn" their right to pass round that collection plate. And while it is true that "many people identify with the vision put forth by Eric Drexler" (one might add, or with the one put forth by Ray Kurzweil, or with the one put forth by Vernor Vinge, or with the one put forth by Hans Moravec, or with the one put forth by David Pearce, or with the one put forth by Roger Penrose), it is still just a matter of me cruelly and unfairly "try[ing] to project… whatever crazy pathology… advocates of these [non-existing] technologies" when I describe this "advocacy" as "abstract idealizations with which [Robot Cultists] have come to personally identify."
And now Anissimov glibly informs me that "[h]uman civilization… was caused by cognitive improvement," and that "[c]ognitive improvement will once again transform the planet, this time from human civilization to transhuman civilization. And what an amazing civilization it could be." What could I be thinking, claiming that the techno-utopians confuse hyperbolic idealizations with which they personally identify with actual scientific and political discourse? How utterly foolish of me.
It is, no doubt, neither here nor there, that by "cognitive improvement" Anissimov has in mind here the project of bright boys coding a superintelligent post-biological Robot God who, should it manage to be Friendly enough, can solve all the world's problems, whatever that is supposed to mean, or, alternatively or supplementarily, the project to prosthetically "augment" human brains (either hyper-individualistically or borganistically) into sooper-brains that could no doubt do the same. All that stuff "falls out of the specs," why bother mentioning it? In the background of such "predictive scenarios" and "engineering specs" are all sorts of extraordinary assumptions about what intelligence actually is, what it is good for, where problems come from in the first place, and what it means to solve them.
The endlessly failing, every year on the year -- but never one jot less cocksure of its inevitable eventual vindication for all that -- old school Strong Program of AI never really did justice to the fact that intelligence is embodied. And somehow its advocates got into their heads that the sign of their superior hardheaded materialism was to pretend the intelligence of squishy brained living organisms was really, you know, deep down, a kind of immaterialized information and number-crunching operation indifferent -- in the way, no doubt utterly co-incidentally, "spirit" was always said to be but information in fact never is -- to its material carriers.
Despite endlessly failing, every year on the year, to witness the predicted arrival of the robotic intelligences they expected and expect to "fall out" of their maths, the dead-enders who cling to the assumptions of the Strong Program have, if anything, amplified their investments in these assumptions rather than qualifying them. Nowadays, they find in the methodological impoverishment of intelligence from an actually organismic complex of processes to an abstract coding and crunching of spiritualized digits not only the promise of the arrival of useful and edifying human engineered intelligences to share our world with us (which might happen eventually, indeed, although it really does matter that it hasn't, hasn't at all, and, quite possibly, hasn't for a reason or two), but, more, declare the faith that such an impoverished intelligence might in its poverty deliver humanity immortal habitation in cyberspace or in robot bodies.
Through a comparable impoverishment of the idea of what a "problem" is, reducing the interminable political contestation over means and ends among an ineradicable and ever-replenished diversity of human peers who share the world in history into a finite calculable constellation of instrumental difficulties on a machine-readable table -- all of them susceptible of an engineering solution or, if not, then demanding the red-pencil of oblivion -- these same dead-enders now fancy that by coding a problem-solving sooper-brain they can, through the blank bulldozing expedient of making the thing brutally bigger and faster than we are ourselves and slapping a god-moniker on the resulting mess, accomplish thereby the joyless minuet of exploring that impoverished table of instrumental difficulties in no time flat, thus solving all problems. And just so they would body forth by way of their dumb Robot God their longed-for mineral Millennium.
Just as they drain intelligence of its body they drain it of its sociality, too, and through these impoverishments ensure the permanent failure of their facile program, a failure that is scarcely diminished but fantastically exacerbated by the superlative investment of this nonsense with the paraphernalia of transcendent religiosity and priestly elitism.
In my original piece I wrote:
"Singularity" means different things to different people, for some naming a rather muzzy notion that technoscientific development is accelerating irresistibly into some unknowable imminent transformation of everything into which they can stuff all their present existential anxieties or wish-fulfillment fantasies, while for others naming variously more specific and "technical" (but usually still quite controversial and to my mind usually still hyperbolic) claims about networked and artificial intelligence "surpassing" conventional personal and social formations of problem-solving and organizational-intelligence with various projected impacts on questions of public security, deliberation, privacy issues, and so on. But whatever else one can say about these notions, it looks to me like an overwhelming majority of transhumanist-identified people affirm some version of them as true, as urgently important, and as abiding preoccupations.
To this, Anissimov exclaims like a drowning man clinging to an inner tube: "Yes! Thank you for referencing this concept directly. What we are claiming is so different than what Kurzweil argues."
About this I have a couple of things to say right off the bat. First, I have always distinguished between the various "technical" positions of singularitarians, finding the various flavors of crazy they dish up deserving of their separate attentions and demolitions, and since I have delineated these distinctions in many previous argumentative rounds on the singularitarian carousel with Anissimov himself I'm not sure why he wants to imply that "referencing this… directly" is such a surprise now. But beyond that, second, I cannot say that I approve his conclusion that these differences really make so much of a difference as he seems to want to believe when all is said and done. No doubt the sectarian squabbles among rival theology scholars debating angels on pinheads seem enormously fraught for those who devote significant portions of their lives to them, but these "insiders" are not always the ones to whom we turn first if we want to assess the merits of these debates objectively, after all.
There is a school of singularitarian Robot Cultists who like to imply that technological change is accelerating and that this acceleration is itself accelerating and that all this acceleration is picking up a head of steam and driving irresistibly toward breaking through nearly every wall and nearly every limit the better to arrive at the change of everything all at once all over the place. This accelerationalization yields a kind of discursive free-for-all in its enthusiasts in which revolutionary, apocalyptic, transcendentalizing notions all collide noisily but not particularly sensibly, yielding up what singularitarians like to assure us is a blank Beyondness which is literally incomprehensible on our side of The Change, but into which, curiously enough, they seem to plug all sorts of fervent hope and dread and no small number of wish-fulfillment fantasies about which they seem to exhibit, not incomprehension at all so much as the smug assurance of faithful True Belief.
Needless to say, technoscientific change is hardly a matter of monolithically "accelerating development" at all. Some research programs proceed more or less speedily, some efforts improve and others stall, developments ramify, collide, get stymied, unintended consequences are discovered that make what was promising seem instead to be a dead end, elegant models disintegrate into hydra-headed masses of unanticipated problems, intractable difficulties mobilize paradigm shifts out of which whole new disciplines arise to replace old ones, and so forth. "Accelerating change" is a facile valorization pronounced by those who imagine themselves to inhabit the summit of some parochial construal of progress or civilizational attainment. I have occasionally proposed, for example, that "accelerating change" is what the global instability and planetary precarization of neoliberal financialization of the economy looks like to those who are its relative beneficiaries or those who personally identify with the beneficiaries.
Anissimov seems to think his "cognitivization" of the singularity represents an earthshattering shift away from such accelerationalization (I guess we are all supposed to pretend we don't know that Anissimov's blog is called Accelerating Future and that his singularitarian discourse is utterly beholden to these discursive frames as well), but I must say that I fail to see much difference that makes a difference between those who claim we are monolithically accelerating to the historical black hole into which Robot Cultists who are so inclined can plug heaven and hell and those who claim we are building bigger and bigger brains like a garbage dump that will one day monolithically accumulate the mass to awaken to its Robot Godhood and solve all problems and so remake the earth into heaven and hell in much the same way. Singularitarians tend to point to the same fetishized gizmos and pie-charts when they are looking to peddle whichever flavor of manifest destiny their particular church favors, after all. Despite the palpable brittleness, the incessant crashes, the unnavigable junk manifested by actual as against idealized software, despite Lanier's Law that "software inefficiency and inelegance will always expand to the level made tolerable by Moore's Law," despite the fact that Moore's Law may be broken on its own terms either on engineering grounds or in its economic assumptions, many singularitarians still seem to rely on a range of imbecilic to baroque variations on the faith that Moore's Law amounts to a rocket ship humanity is riding to Heaven. Others have shifted their focus these days to the nanoscale, but they still seem to find Destiny where scientific consensus sees a mountain range of problems demanding qualifications and care.
Anissimov fulminates that my
phrasing trivializes the potential hugeness of the rapid evolution of self-improving AI in the human-equivalent and human-surpassing realm of general intelligence.
As if Neanderthals would debate the potential technological creation of Homo sapiens by saying it raises "privacy issues"! It actually raises "completely transforming the way the world works" issues.
Needless to say, one needs to actually already be a member of the Robot Cult to find much force in such jeremiads, I'm afraid. Note the indispensability of "rapid evolution" to Anissimov's supposedly radically different "cognitivization" of singularity. But notice as well how much Anissimov's discourse relies on the incessantly asserted conjuration of "hugeness" and "complete transformation" here, how he locates himself and his singularitarian fellow-faithful among his human peers as a Homo sapiens (a real human) among Neanderthals (not quite human).
For me, it is not a trivialization of superlative discourses to identify the actually-existing technoscientific and technodevelopmental problems these discourses try to invest with irrational passions, transcendental colorations, oversimplified overdramatized narratives, fraudulent and deranging hyperbole to the benefit of nobody (except, I suppose, for a few charlatans hoping to skim a few bucks or attract attention from the rubes) at an historical moment when good sense in the face of disruptive technodevelopmental change is urgently needed. For me, this is a matter of re-introducing some sense into these discussions. I have written that transhumanism without Superlativity is nothing, and to that conviction I still hold firm.
But this is not in the least to say that without transhumanism of all things, without the "contribution" of the Robot Cultists, nothing remains of theory, practice, deliberation, education, organizing, activism in the service of progressive technodevelopmental social struggle in this moment of disruptive technoscientific change.
Nothing, of course, could be further from the truth. For me, it is the superlative fantasists and irrationalists who are trivializing the work of actually emancipatory technodevelopmental social struggle through their facile wish-fulfillment fantasies, their New Age spiritualizations of technique at once transcendentalized and evacuated of consensus science, their anti-democratizing technocratic-elitisms and eugenicist daydreams, their whomping up of an hysterical futurological War on Terror peopled by clone armies and Robot Gods and planets reduced to goo, their derangements of collaborative problem-solving, peer-to-peer, to utterly fanciful, wasteful, incoherent money-holes answering their infantile scared-witless prayers for invulnerability, deathlessness, certainty, and wealth beyond the dreams of avarice.