Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Friday, January 01, 2016
Robot Cultist Eliezer Yudkowsky's Ugly Celebration of Plutocracy
I was shocked, meeting Steve Jurvetson, because from everything I'd read about venture capitalists before then, VCs were supposed to be fools in business suits, who couldn't understand technology or engineers or the needs of a fragile young startup, but who'd gotten ahold of large amounts of money by dint of seeming reliable to other business suits.[My own (possibly over-general) impression of VCs is that they are mostly privileged upward-failing opportunists unscrupulously hyping vaporware for short-term cash from credulous marks or exploiting collective labor and intelligence via the shar(ecropp)ing economy with little awareness of or interest in the costs or risks or suffering of others involved in their efforts. "[S]eeming reliable to other [VCs in] business suits" might describe this sociopathic state of affairs, but I do think better descriptions are available. I must add that many of these people seem to me to have "gotten a-hold of large amounts of money" by being born with it or with enough of it to schmooze others born with it, which is to say that they are "self-made men" in the usual way.--d]
One of the major surprises I received when I moved out of childhood into the real world, was the degree to which the world is stratified by genuine competence.[Since Yudkowsky has interposed this curious framing at this point in his narrative himself, I think it only fair to offer it up as a question for the reader rather than a premise we will all simply uncritically accept: Do we agree with Yudkowsky, admittedly a man veering into middle age at this point, that he has indeed "moved out of childhood" at all, let alone "into the real world"? Given the embarrassing narcissism, the simplistic conceits, the facile hero worship, the infantile wish-fulfillment on display, are we all quite ready to admit Yudkowsky into the ambit of adulthood? Or is his superlative futurology yet another, more than usually palpable, symptom of superannuated infancy?--d]
Now, yes, Steve Jurvetson is not just a randomly selected big-name venture capitalist. He is a big-name VC who often shows up at transhumanist conferences. But I am not drawing a line through just one data point.[Quite a lot of the material I snipped from the beginning of Yudkowsky's piece involved his praise of Steve Jurvetson in particular who may, for all I know, actually be a bright and worthy person (although, contra Yudkowsky, I cannot say his attendance at robocultic transhumanist conferences, if that is true, inspires confidence in his judgment) or may, again for all I know, simply be someone Yudkowsky is buttering up in hopes of some collection plate action for his robocultic causes.--d]
I was invited once to a gathering of the mid-level power elite, where around half the attendees were "CEO of something" -- mostly technology companies, but occasionally "something" was a public company or a sizable hedge fund. I was expecting to be the youngest person there, but it turned out that my age wasn't unusual -- there were several accomplished individuals who were younger. This was the point at which I realized that my child prodigy license had officially completely expired.[Can there really be people who refer non-derisively and non-satirically to groups of the rich as "the power elite"? Can there really be people who refer to themselves -- setting aside the question of people in their thirties who refer to themselves -- affirmatively as "child prodigies"? With much discomfort and sadness, let us soldier on.--d]
Now, admittedly, this was a closed conference run by people clueful enough to think "Let's invite Eliezer Yudkowsky" even though I'm not a CEO. So this was an incredibly cherry-picked sample. Even so...[Even if this hyperbole is meant to signal irony, the boasting in it is so transparently a compensation for insecurity it is actually painful to observe.--d]
Even so, these people of the Power Elite were visibly much smarter than average mortals. In conversation they spoke quickly, sensibly, and by and large intelligently. When talk turned to deep and difficult topics, they understood faster, made fewer mistakes, were readier to adopt others' suggestions.[Again, with the "Power Elite" business. The capital letters and, if I may say so, simple commonsense make me want to assume the phrasing is parodic -- but nothing anywhere else suggests this. Indeed, one has the horrified suspicion that the letters are also capitalized in Yudkowsky's head. We will set aside as too horrific to contemplate the suggestion that it was simply their likely whiteness and maleness that made the Power! Elite! gathered in that room seem "visibly much smarter than average mortals." Notice that we must take Yudkowsky's word that the topics under discussion were "deep" and "difficult" and that they spoke of them "sensibly" and "intelligently" and "made fewer mistakes" (he would have caught them if they had). Were they "speaking quickly" because they had so much to say and were excited by their topics -- or just because they are used to fast-talking salesmanship and bullshit artistry? Were they "adopting each others' suggestions" because they were open to intelligent criticisms or because they are yes-men flattering and cajoling each other for networking's sake or because groups of people like this are already largely in agreement about what matters and why it matters especially when it comes to "tech" talk?--d]
No, even worse than that, much worse than that: these CEOs and CTOs and hedge-fund traders, these folk of the mid-level power elite, seemed happier and more alive.[There you go. Read it again. Hedge fund managers and tech VCs are happier and more alive than other people. MORE ALIVE. The rich are not like you and me. They are tapped into exquisite joys and alivenesses unavailable to majorities; they are more real. This bald endorsement of reactionary plutocratic superiority is so ignorant of the richness of the lives and intelligence of the majorities it dismisses and is so flatly pernicious in its eventually genocidal political entailments, I must say it is a rare thing to see in a public statement... Although, again, I have already noted that such public statements are indeed comparatively more commonplace, and notoriously so, among this very same sort of rich "tech" VCs and banksters. But there it is. Of course, Yudkowsky doesn't really mean "worse" or "much worse" in anything like the conventional sense, when he declares these (are we meant to think reluctant?) truths. No, Yudkowsky is relishing the awfulness of what he is saying, he is savoring the ugliness in his mouth, tonguing his anti-democratic morsel from tooth to tooth, smacking his lips in an unseemly dance of contrarian "political incorrectness," drinking in the imagined opprobrium of the unwashed useless eating masses he cheerfully consigns to computronium feedstock here. One is all too used by now to these online spectacles of man-child id celebrating racist police violence or rape culture or what have you in the faces of the vulnerable, smearing their feces on the walls of the world.
What it is useful to recall at this juncture, again, is that mild-mannered "tech philosopher" Nick Bostrom at Oxford and widely worshiped celebrity "tech" CEO Elon Musk are discursively, sub(cult)urally and institutionally connected to this person, are conversant with his "ideas" and "enterprises," are his colleagues.--d]
This, I suspect, is one of those truths so horrible that you can't talk about it in public. This is something that reporters must not write about, when they visit gatherings of the power elite.[Again, nothing could be clearer than that Yudkowsky does not find this "truth" to be in the least horrible. He is palpably relishing it -- his enjoyment is so rich he does not even care about the perverse contradiction of describing as absolutely prohibited the speaking of the very truths he is in the act of megaphoning about at top volume -- and to the extent that he has already figured himself as adjudicating this gathering of rich happy genius elite superbeings, he is also making a spectacle of confirming his own status as such a being himself. Again, one doesn't have to scratch too deep beneath this ungainly superficial boasting to detect what look to be the rampaging insecurities desperately compensated by this embarrassing self-serving spectacle, but I do not discern in this so much a cry for help as an announcement of a punitive rage for order putting us all on notice. Such dangerous and costly performances of insecure personhood do not elicit my sympathy but ready my defenses. It is amusing to note that in the comments to his post, an early one responds to Yudkowsky's observation that "This [elitism], I suspect, is one of those truths so horrible that you can't talk about it in public" by assuring us "Charles Murray talked about [it] in The Bell Curve." Quite so! A later comment adds, "And Ayn Rand wrote about it repeatedly." All too, too true. I will add myself that while it is true that few reporters write about tech billionaires that they are literal gods the rest of us should be so lucky as to get pooped on by, the endless fluffing these people get in infomercial puff-pieces and think-tank PowerPoints and TED ceremonial ecstasies is all premised on only slightly more modest variations of the attribution of superiority Yudkowsky is indulging here.
That he takes this praise to such bonkers extremities doesn't actually make his argument original, in particular, it just makes it even more than usually stupid.--d]
Because the last news your readers want to hear, is that this person who is wealthier than you, is also smarter, happier, and not a bad person morally. Your reader would much rather read about how these folks are overworked to the bone or suffering from existential ennui. Failing that, your readers want to hear how the upper echelons got there by cheating, or at least smarming their way to the top. If you said anything as hideous as, "They seem more alive," you'd get lynched.[Yudkowsky was not lynched for saying these very things, and of course he is lying when he pretends to expect anything remotely otherwise. Dumb emotionally stunted smug straight white assholes aren't the people who have historically been lynched in this country, as it happens. Charles Murray didn't write about that in The Bell Curve nor did Ayn Rand devote a chapter to it in one of her execrable bodice-rippers. You know, I would be surprised if many, indeed if anybody, has ever been meaner to Eliezer Yudkowsky about his horrible screed than I am being right now in the near-decade since he wrote all these awful ugly things he has never since recanted nor qualified. Of course, one expects straight white techbros to lose themselves in grandiose fantasies of imagined victimhood for just innocently being themselves in the world of politically correct oversensitive naturally inferior social justice warriors blah blah blah blah blah. It is indeed evocatively Ayn Randian of Yudkowsky to presume that we sour-smelling masses contemplate our rich productive techbro betters with envious projections onto them of misery and ennui -- but of course the truth is that such protestations about the lives of stress and stigma and suffering and risk suffered by our indispensable beneficent entrepreneurial Maker elites are usually self-serving rationalizations for bail-outs and tax-cuts and ego-stroking offered up by themselves rather than those of us, Takers all, they so relentlessly exploit and disdain.
In any case, nothing could be clearer than that Yudkowsky and his readership do not identify in the main with such envious errant mehum masses, but largely consist instead of useful idiots who fancy themselves Tomorrow's Power Elite awaiting their own elevation via the coding or crafting of the Next! Big! Thing! That! Changes! Everything! and hence they actually identify with the pretensions of the plutocrats Yudkowsky is describing and disdain in advance those who in disdaining them disdain their future selves -- the poor pathetic suckers! I leave to the side the fact that many do not expect merely to Get Rich Quick soon enough, but in the fullness of time expect, given their robocultishness, to live in shiny robot bodies in nanobotic treasure caves filled with sexy sexbots when they are not rollicking in Holodeck Heaven as cyber-angelic info-souls under the ministrations of a history-ending super-parental Friendly Robot God.--d]
[There is much more evil crapola to be found in this vein in Yudkowsky's e-pistle. One particularly crazy utterance several more pages into the screed asserts that "Hedge-fund people sparkle with extra life force. At least the ones I've talked to. Large amounts of money seem to attract smart people. No, really." Oh, how our rich elites sparkle! As I said, it is really just more of the same -- including more of these faux "No, really" protestations against objections to all this objectionable idiocy that never really arrive nor are really, no, really, expected to from his readership.--d]
[By way of conclusion, it is interesting to note that like many who lack training in structural critique Yudkowsky finds himself indulging in a rather romantic misconception of the complexities of historical, social, and cultural dynamisms -- investing heroized protagonists with magickal force and indulging in frankly conspiracist mappings of power.--d]
[For what I mean by magick--d:]
Visiting that gathering of the mid-level power elite, it was suddenly obvious why the people who attended that conference might want to only hang out with other people who attended that conference. So long as they can talk to each other, there's no point in taking a chance on outsiders who are statistically unlikely to sparkle with the same level of life force. When you make it to the power elite, there are all sorts of people who want to talk to you. But until they make it into the power elite, it's not in your interest to take a chance on talking to them. Frustrating as that seems when you're on the outside trying to get in! On the inside, it's just more expected fun to hang around people who've already proven themselves competent. I think that's how it must be, for them. (I'm not part of that world, though I can walk through it and be recognized as something strange but sparkly.)[For what I mean by conspiracy--d:]
There's another world out there, richer in more than money. Journalists don't report on that part, and instead just talk about the big houses and the yachts. Maybe the journalists can't perceive it, because you can't discriminate more than one level above your own. Or maybe it's such an awful truth that no one wants to hear about it, on either side of the fence. It's easier for me to talk about such things, because, rightly or wrongly, I imagine that I can imagine technologies of an order that could bridge even that gap. I've never been to a gathering of the top-level elite (World Economic Forum level), so I have no idea if people are even more alive up there, or if the curve turns and starts heading downward.[As I said, one is left questioning more than Yudkowsky's intelligence after reading such stuff, but wondering -- to the extent that we take this stuff straight, and not as a bit of pathetic but probably lucrative self-promotional myth-making -- if his many accomplishments (writing Harry Potter fan-fiction, writing advertising copy about code that doesn't exist, extolling rationality while indulging in megalomaniacal crazytalk) will one day include an arrival at either basic competent adulthood or basic moral sanity. Yudkowsky ends his missive in what seems an ambivalent bit of loose-talking guruwannabe-provocation or possibly ass-saving: "I'm pretty sure that, statistically speaking, there's a lot more cream at the top than most people seem willing to admit in writing. Such is the hideously unfair world we live in, which I do hope to fix." We are left to wonder if the reference to "hideous unfair[ness]" is ironic or earnest.
It is hard to square his conventional meritocratic rationalization for inequity with the belief that this state of affairs is really so very unfair after all, so far as it goes, though who of us can say just where the balance finally falls once one ascends to the Olympian heights from which we are assured that Yudkowsky, elite above the elites, hopes finally to "fix" things? The ways of self-appointed godlings are mysterious.--d]
Tuesday, December 22, 2015
Divided Government After the Great Sort
Most representative constitutional governments established in the aftermath of our own experiment in the United States have eschewed those idiosyncrasies of our system owing to the Founders' facile anti-partisan fetish and implemented parliamentary systems instead -- and very much to their benefit for the most part.
Basic administrative functions (like raising the debt ceiling, filling key posts in a timely way) should be professionalized. The Senate Leader and House Speaker should be of the party of the Executive, and (if necessary, multiparty, multifaction) coalitions should form to support the implementation of the policy platform in the service of which the Executive is elected, else the government has no confidence. Of course, here in the United States, none of this is likely ever to be.
Given our present thoroughly institutionalized party duopoly, it is unclear that the organized and by now thoroughly anti-democratic force of the GOP can be sufficiently marginalized even in a conspicuously diversifying, secularizing, planetizing polyculture to be circumvented in a sufficiently timely and sustained way for majorities seeking to address urgent and obvious common problems -- socioeconomic precarity, climate and pandemic catastrophe, global conflicts exacerbated by global trafficking in military weapons, any one of which threaten the struggle for civilization (which I define as sustainable equity-in-diversity) and in combination threaten still worse.
"Divided Government" is dysfunctional, depressive of participation, and confuses the necessary assignment of responsibility for policy outcomes. It seems to me that the various Golden Ages of bipartisan co-operation celebrated by Village pundits were mostly periods in which the great evil of the slave-holding and then segregated South was marginalized through its distribution into and management by both parties -- a strategy that never worked well (and could prevent neither a Civil War to resolve the question of slavery nor the betrayal of Reconstruction in the establishment of Jim Crow) and has worked ever less well during the generational "Great Sort" of the Parties in respect to white supremacy from the New Deal coalition through the Civil Rights era to the Southern Strategy and the descent into the Summers of Tea and the Winter of Trump.
It seems to me that the Founders' celebration of a hyper-individualist conception of "public happiness" informed by the specificity of their experience of Revolutionary politics undermined their appreciation of other, more democratic dimensions of public happiness connected to assembly, administration, organization, loyalty. (The guardian angel of this blog, Hannah Arendt, the phenomenologist of political power, elaborated the Founders' experience better than anyone, and perhaps shares some of their blind spots.) Their abstract commitments were implemented in Constitutional doctrines that have articulated progressive historical struggles in the United States. The Founders were wrong and we're stuck with their mistake.
And we ARE stuck with it: much like quixotic third-party fantasies, in which the politics to create a viable third party to solve certain very real pathologies of our duopoly are harder to achieve than to solve those problems through and in spite of the duopoly, so too the politics to create a parliamentary system to solve certain very real pathologies of the anti-factionalist quirks of our Constitution are harder to achieve than to solve those problems through and in spite of the quirks of our anti-factionalist Constitution.
Saturday, December 19, 2015
Robocultic Q & A With a Tech Journalist
Last month I spent a few weeks in correspondence with an interesting writer and occasional journalist who stumbled upon some transhumanist sub(cult)ures and wanted to publish an expose in a fairly high-profile tech publication. She is a congenial and informed and funny person, and I have no doubt she could easily write a piece about futurology quite as excoriating as the sort I do, but probably in a more accessible way than my own writing provides. I was rather hoping she would write something like that -- and I suspect she had drafts that managed the trick -- but the published result was a puff-piece, human interest narratives of a handful of zany robocultic personalities, that sort of thing, and ended up being a much more promotional than critical engagement, with a slight undercurrent of snark suggesting she wasn't falling for the moonshine without saying why exactly or why it might matter. I'm not linking to the piece or naming my interlocutor because, as I said, I still rather like her, and by now I can't say that I am particularly surprised at the rather lame product eventuating from our (and her other) conversations. She is a fine writer but I don't think there is much of an appetite for real political or cultural criticism of futurological discourse in pop-tech circles, at any rate when it doesn't take the form of making fun of nerdy nerds or indulging in disasterbatory hyperbole.
The transhumanists, singularitarians, techno-immortalists, digi-utopians, geo-engineers and other assorted futurological nuts I corral under the parodic designation "Robot Cultists" remain sufficiently dedicated to their far-out viewpoints that they do still continue to attract regular attention from journalists and the occasional academic looking for a bit of tech drama or tech kink to spout about. I actually think the robocultic sub(cult)ure is past its cultural heyday, but its dwindling number of stale, pale, male enthusiasts has been more than compensated lately by the inordinate amount of high-profile "tech" billionaires who now espouse aspects of the worldview in ways that seem to threaten to have Implications, or at least make money slosh around in ways it might not otherwise do.
Anyway, as somebody who has been critiquing and ridiculing these views in public places for over a quarter century I, too, attract more attention than I probably deserve from journalists and critics who stumble upon the futurological freakshow and feel like reacting to it. For the last decade or so I have had extended exchanges with two or three writers a year, on average, all of whom have decided to do some sort of piece or even a book about the transhumanists. For these futurologically-fascinated aficionados I inevitably provide reading lists, contacts, enormous amounts of historical context, ramifying mappings of intellectual and institutional affiliation, potted responses to the various futurological pathologies they happen to have glommed onto, more or less offering an unpaid seminar in reactionary futurist discourse.
Articles do eventually appear sometimes. In them I am sometimes a ghostly presence, offering up a bit of decontextualized snark untethered to an argument or context to give it much in the way of rhetorical force. But far more often the resulting pieces of writing neither mention me nor reflect much of an engagement with my arguments. As a writer really too polemical for academia and too academic for popular consumption, I can't say that this result is so surprising. However, lately I have made a practice of keeping my side of these exchanges handy so that I can at least post parts of them to my blog to see the light of day. What follows is some comparatively pithy Q & A from the latest episode of this sort of thing, edited, as it were, to protect those who would probably prefer to remain nameless in this context:
Q & A:
Q: What do you think the key moral objections are to transhumanism?
Well, I try not to get drawn into discussions with futurists about whether living as an immortal upload in the Holodeck or being "enhanced" into a sexy comic book superhero body or being ruled by a superintelligent AI would be "good" or "bad." None of these outcomes are going to arrive to be good or bad anyway, none of the assumptions on which these prophetic dreams are based are even coherent, really, so the moral question (or perhaps this is a more a question for a therapist) should probably be more like -- Is it good or bad to be devoting time to these questions rather than to problems and possibilities that actually beset us? What kind of work is getting done for the folks who give themselves over to infantile wish-fulfillment fantasizing on these topics? Does any of this make people better able to cope with shared problems or more attuned to real needs or more open to possibilities for insight or growth?
You know, in speculative literature the best imaginative and provocative visions have some of the same sort of furniture in them you find in futurological scenarios -- intelligent artifacts, powerful mutants, miraculous abilities -- but as in all great literature, their strangeness provides the distance or slippage that enables us to think more critically about ourselves, to find our way to sympathetic identification with what might otherwise seem threatening alienness, to overcome prejudices and orthodoxies that close us off to hearing the unexpected that changes things for the better. Science fiction in my view isn't actually about predictions at all, or it is only incidentally so: it is prophetic because it finds the open futurity in the present world, it builds community from the strangeness and promise in our shared differences.
But futurism and tech-talk isn't prophetic in this sense at all, when you consider it more closely -- it operates much more like advertising does, promising us easy money, eternal youth, technofixes to end our insecurities, shiny cars, skin kreme, boner pills. The Future of the futurists is stuck in the parochial present like a gnat in amber. It freezes us in our present prejudices and fears, and peddles an amplification of the status quo as "disruption," stasis as "accelerating change." Futurology promises to "enhance" you -- but makes sure you don't ask the critical questions: enhances according to whom? for what ends? at what costs? Futurology promises you a life that doesn't end -- but makes sure you don't ask the critical questions: what makes a life worth living? what is my responsibility in the lives of others with whom I share this place and this moment? Futurology promises you intelligent gizmos -- but makes sure you don't ask the critical questions: if I call a computer or a car "intelligent," how does that change what it means to call a human being or a great ape or a whale intelligent? what happens to my sense of the intelligence lived in bodies and incarnated in historical struggles if I start "recognizing" it in landfill-destined consumer devices? I think the urgent moral questions for futurologists have less to do with their cartoonish predictions but with the morality of thinking futurologically at all, rather than thinking about real justice politically and real meaning ethically and real problems pragmatically.
Q: Why do you think climate change denial is so rife among this movement?
Many futurologists like to declare themselves to be environmentalists, so this is actually a tricky question. I think it might be better to say futurism is about the displacement rather than the outright denial of catastrophic anthropogenic climate change. For example, you have futurists like Nick Bostrom and Elon Musk who will claim to take climate change seriously but then who will insist that the more urgent "existential risk" humans face is artificial superintelligence. As climate refugees throng tent-cities and waters flood coastal cities and fires rage across states and pandemic disease vectors shift with rising temperatures these Very Serious futurological pundits offer up shrill warnings of Robocalypse.
Since the birth of computer science, generation after generation after generation, its intellectual luminaries have been offering up cocksure predictions about the imminence of world changing artificial intelligence, and they have never been anything but completely wrong about that. Isn't that rather amazing? The fact is that we have little scientific purchase on the nature of human intelligence and the curiously sociopathic body-alienated models of "intelligence" that suffuse AI-enthusiast subcultures don't contribute much to that understanding -- although they do seem content to code lots of software that helps corporate-military elites treat actually intelligent human beings as if we were merely robots ourselves.
Before we get to climate change denial, then, I think there are deeper denialisms playing out in futurological sub(cult)ures -- a terrified denial of the change that bedevils the best plans of our intelligence, a disgusted denial of the aging, vulnerable, limited, mortal body that is the seat of our intelligence, a horrified denial of the errors and miscommunications and humiliations that accompany the social play of our intelligence in the world. Many futurists who insist they are environmentalists like to talk about glorious imaginary "smart" cities or give PowerPoint presentations about geo-engineering "technofixes" to environmental problems in which profitable industrial corporate-military behemoths save us from the destruction they themselves have caused in their historical quest for profits. The futurists talk about fleets of airships squirting aerosols into the atmosphere, dumping megatons of filings into the seas, building cathedrals of pipes to cool surface temperatures with the deep sea chill, constructing vast archipelagos of mirrors in orbit to reflect the sun's rays -- and while they are hyperventilating these mega-engineering wet-dreams they always insist that politics have failed, that we need a Plan B, that our collective will is unequal to the task. Of course, this is just another variation of the moral question you asked already. None of these boondoggle fantasies will ever be built to succeed or fail in the first place; there is little point in dwelling on the fact that we lack the understanding of eco-systemic dynamics to know whether the impacts of such pharaohnic super-projects would be more catastrophic than not; the whole point of these exercises is to distract the minds of those who are beginning to grasp the reality of our shared environmental responsibilities from the work of education, organization, agitation, legislation, investment that can be equal to this reality.
Here, the futurological disgust with and denial of bodies, embodied intelligence, becomes denial of the material substance of political change, of historical struggle, bodies testifying to violation and to hope, assembled in protest and in collaboration.
Many people have been outraged recently to discover that Exxon scientists have known the truth about their role in climate catastrophe for decades and lied about it to protect their profits. But how many people are outraged that just a couple of years ago Exxon-Mobil CEO Rex Tillerson declared that climate change is simply a logistical and engineering problem? This is the quintessential form that futurological climate-change displacement/denialism takes: it begins with an apparent concession of the reality of the problem and then trivializes it. Futurology displaces the political reality of crisis -- who suffers climate change impacts? who dies? who pays for the mitigation efforts? who regulates these efforts? who is accountable to whom and for what? who is most at risk? who benefits and who profits from all this change? -- into apparently "neutral" technical and engineering language. Once this happens the demands and needs of the diversity of stakeholders for change vanish and the technicians and wonks appear, white faces holding white papers enabling white profits.
Q: What are the most obvious historical antecedents to this kind of thinking?
Futurological dreams and nightmares are supposed to inhabit the bleeding edge, but the truth is that their psychological force and intuitive plausibility draws on a deeply disseminated archive of hopes and tropes... Eden, Golem, Faust, Frankenstein, Excalibur, Love Potions, the Sorcerer's Apprentice, the Ring of Power, the Genie in a Bottle, the Fountain of Youth, Rapture, Apocalypse and on and on and on.
In their cheerleading for superintelligent AI, superpowers/techno-immortalism, and digi-nano-superabundance it isn't hard to discern the contours of the omni-predicates of centuries of theology: omniscience, omnipotence, omnibenevolence. Patriarchal priests and boys with their toys have always marched through history hand in hand. And although many futurologists like to make a spectacle of their stolid scientism it isn't hard to discern the old-fashioned mind-body dualism in their digital-utopian virtuality uploading fantasies. Part of what it really means to be a materialist is to take materiality seriously, which means recognizing that information is always instantiated on a non-negligible material carrier, which means it actually matters that all the intelligence we know as such as yet has been biologically incarnated. There is a difference that should make a difference to a materialist in the aria sung in the auditorium, heard on vinyl, pulled up on .mp3. Maybe something like intelligence can be materialized otherwise, but will it mean all that intelligence means to us in an imaginative, empathetic, responsible, rights-bearing being sharing our world? And if it doesn't is "intelligence" really the word we should use or imagine using to describe it?
Fascination with artifacts that seem invested with spirit -- puppets, carnival automata, sex-dolls -- is as old or older than written history. And of course techno-fetishism, techno-reductionism, and techno-triumphalism have been with us since before the Treaty of Westphalia ushered in the nation-state modernity that has preoccupied our attention with culture wars in the form of les querelles des anciens et des modernes (the quarrels of the ancients and the moderns) right up to our late modern a-modern post-modern post-post-modern present: big guns and manifest destinies, eugenic rages for order, deaths of god and becoming as gods, these are all old stories. The endless recycling of futurological This! Changes! Everything! headlines about vat-grown meat and intelligent computers and cost-free fusion and cures for aging every few years or so is the consumer-capitalist froth on the surface of a brew of centuries-old techno-utopian loose-talk and wish-fulfillment fantasizing.
Q: Why should people be worried about who is pushing these ideas?
Of course, all of this stuff is ridiculous and narcissistic and technoscientifically illiterate and all too easy to ignore or deride... and I do my share of that derision, I'll admit that. But you need only remember the example of the decades-long marginalized Neoconservative foreign-policy "Thought Leaders" to understand the danger represented by tech billionaires and their celebrants making profitable promises and warnings about super-AI and immortality-meds and eco escape hatches to Mars. A completely discredited klatch of kooks who fancy themselves the Smartest Guys in the Room can cling to their definitive delusions for a long time -- especially if the nonsense they spew happens to bolster the egos or rationalize the profits of very rich people who want to remain rich above all else. And eventually such people can seize the policy-making apparatus long enough to do real damage in the world.
For over a generation the United States has decided to worship as secular gods a motley assortment of very lucky, rather monomaniacal, somewhat sociopathic tech venture capitalists few of whom ever actually made anything but many of whom profitably monetized (skimmed) the collective accomplishments of nameless enthusiasts and most of whom profitably marketed (scammed) gizmos already available and usually discarded elsewhere as revolutionary novelties. The futurologists provide a language in which these skim and scam operators can reassure themselves that they are Protagonists of History, shepherding consumer-sheeple to techno-transcendent paradise and even godlikeness. It is a mistake to dismiss the threat represented by such associations -- and I must say that in the decades I have been studying and criticizing futurologists they have only gained in funding, institutional gravity, and reputational heft, however many times their animating claims have been exposed as pernicious nonsense and reviled.
But setting those very real worries aside, I also think the futurologists are interesting objects and subjects of study because they represent a kind of reductio ad absurdum of prevailing attitudes and assumptions and aspirations and justificatory rhetoric in neoliberal, extractive-industrial, consumer-oriented, marketing-suffused, corporate-military society: if you can grasp the desperation, derangement and denialism of futurological fancies, it should put you in a better position to grasp the pathologies of more mainstream orthodoxies in our public discourse and authorizing institutions, our acquiescence to unsustainable consumption, our faith in technoscientific, especially military, circumventions of our intractable political problems, our narcissistic insistence that we occupy a summit from which to declare differences to be inferiorities, our desperate denial of aging, disease, and death and the death-dealing mistreatment of others and of ourselves this denialism traps us in so deeply.
Q (rather later): [O]ne more thing: who were the most prominent members of the extropians list? Anyone I've missed? Were R.U Sirius or other Wired/BoingBoing writers and editors on the list? Or engineers/developers etc?
Back in Atlanta in the 1990s, I read the Extropy zine as a life-long SF queergeek drawn to what I thought were the edges of things, I suppose, and I was a lurker on the extropians list in something like its heyday. This was somewhere in the '93-'99 range, I'm guessing. I posted only occasionally since even then most of what I had to say was critical -- the philosophy seemed like amateur hour and the politics were just atrocious -- and it seemed a bit wrong to barge into their clubhouse and piss in the punch bowl if you know what I mean... I was mostly quiet.
The posters I remember as prominent were Max More and Natasha Vita-More, of course, Eliezer Yudkowsky, Damien Broderick (an Australian SF writer), Eugen Leitl, Perry Metzger, Hal Finney, Sasha Chislenko, Mark Plus, Giulio Prisco, Ramona Machado, Nancy Lebovitz… You know, people tend to forget the women's voices because it was such an insistently white techbro kinda sorta milieu. I'm not sure how many women stuck with it, although Natasha is definitely a piece of work, and Ramona was doing something of a proto Rachel Haywire catsuited contrarian schtick; Haywire's a more millennial transhumanoid who wasn't around back then. Let's see. There was David Krieger too (I made out with him at an extropian pool party in the Valley of the Silly Con back in '95, I do believe).
I don't think I remember RU Sirius ever chiming in, I personally see him as more of an opportunistic participant/observer/stand-up critic type, really, and I know I remember Nick Szabo's name but I'm not sure I remember him posting a lot. You mentioned Eric Drexler, but I don't remember him posting, he was occasionally discussed and I know he would appear at futurist topic conferences with transhumanoid muckety mucks like More and the cypherpunks like Tim May and Paul Hughes. I do remember seeing Christine Peterson a couple of times.
Wired did a cover story called "Meet The Extropians" which captures well some of the flavor of the group, that was from 1993. Back then, I think techno-immortalism via cryonics and nanobot miracle medicine was the big draw (Aubrey de Grey appeared a bit later, I believe, but the sub(cult)ure was ready for him for sure), with a weird overlap of space stuff that was a vestige from the L5 society and also a curious amount of gun-nuttery attached to the anarcho-capitalist enthusiasm and crypto-anarchy stuff.
It's no surprise that bitcoinsanity had its birth there, and that the big bucks for transhumanoid/ singularitarian faith-based initiatives would come from PayPal billionaires like the terminally awful robocultic reactionary Peter Thiel, given the crypto-currency enthusiasm. Hal Finney was a regular poster at extropians and quite a bitcoin muckety muck right at the beginning -- I think maybe he made the first bitcoin transaction in fact.
Back in those days I was working through connections of technocultural theory and queer theory in an analytic philosophy department in Georgia, and the extropians -- No death! No taxes! -- seemed to epitomize the California Ideology. I came to California as a Queer National with my mind on fire to work with Judith Butler, and I was lucky enough to spend a decade learning from her in the Rhetoric Department at Berkeley, where I ended up writing my diss about privacy and publicity in neoliberal technocultures, Pancryptics. But I never lost sight of the transhumanists -- they seemed and still seem to me to symptomize in a clarifying extreme form the pathologies of our techno-fetishistic, techno-reductionist, techno-triumphalist disaster capitalism. Hope that helps!
Q (much later): Tackling this thing has been a lot more difficult than I imagined it would be. Right now it's sitting on 20,000 words and has to come down to at least half that (pity my editor!). I've gone through quite a journey on it. I still think very much that these ideas are bad and a reflection of a particularly self-obsessed larger moment, and that people should be extremely concerned about how much money is going into these ideas that could be so much better spent elsewhere. The bizarre streak of climate denialism is likewise incredibly disturbing…. But then I kind of came around in a way to sympathising with what is ultimately their fear which is driving some of this, an incredibly juvenile fear of dying. But a fear of being old and infirm and in mental decline in a society that is in denial about the realities of that, and which poses few alternatives to that fate for all of us, in a way I can understand that fear…. In any case, amazing that they let you proofread [their official FAQ] for them, even though you are so critical of their project! Or do you think they were just grateful for someone who could make it read well on a sentence level?
You have my sympathies, the topic is a hydra-headed beast when you really dig in, I know. Nick Bostrom and I had a long phone conversation in which I leveled all sorts of criticisms of transhumanism. That I was a critic was well known, but back then socialist transhumanist James Hughes (who co-founded IEET with him) and I were quite friendly, and briefly I was even "Human Rights" fellow at IEET myself -- which meant that they re-published some blog posts of mine. (I write about that and its rather uncongenial end here.) Anyway, Bostrom and I had a wide-ranging conversation that took his freshly written FAQ as our shared point of departure. He adapted/qualified many claims in light of my criticisms, but ignored a lot of them as well and of course the central contentions of the critique couldn't be taken up without, you know, giving up on transhumanism. As a matter of fact, we didn't get past the first half of the thing. It was a good conversation though, I remember it was even rather fun. I do take these issues seriously as you know and, hell, I'll talk to anybody who is going to listen in a real way.
You know, I've been criticizing futurism for decades -- there were times when I was one of the few people truly informed of their ideas even if I was critical of them, and some of them appreciated the chance to sharpen their arguments on a critic. I've had many affable conversations with all sorts of these folks, Aubrey de Grey, Robin Hanson, Max More even. The discourse is dangerous and even evil in my opinion, but its advocates are human beings which usually means conversations can happen face to face.
I know what you mean when you say you sympathize after a fashion upon grasping the real fear of mortality driving so much of their project -- and I would say also the fear of the uncontrollable role of chance in life, the vulnerability to error and miscommunication in company. But you know reactionary politics are always driven by fear -- and fear is always sad. I mean, the choices are love or fear when it comes down to it, right? And to be driven by fear drives away so much openness to love and there's no way to respond to that but to see the sadness of it -- when it comes to it these fears are deranging sensible deliberation about technoscientific change at a historical moment when sense is urgently needed, these fears make them dupes, and often willing ones, of plutocratic and death-dealing elites, these fears lead them to deceive themselves and deceive others who are also vulnerable. One has to be clear-headed about such things, seems to me.
Q (still later): Have entered new phase: What if the Extropians were just a Discordian-type joke that other people came to take seriously?
Yes, they're a joke. But it's on us, and they aren't in on it. As I mentioned before, the better analogy is the Neocons: they were seen as peddlers of nonsense from the perspective of foreign policy professionals (even most conservatives thought so) but they were well-funded because their arguments were consoling and potentially lucrative to moneyed elites and eventually they stumbled into power via Bush and Cheney whereupon they implemented their ideas with predictable (and predicted) catastrophic consequences in wasted lives and wasted wealth. To be clear: the danger isn't that transhumanoids will code a Robot God or create a ruler species of immortal rich dudes with comic-book sooper-powers, but that they will divert budgets and legislation into damaging policies and dead ends that contribute to neglected health care, dumb and dangerous software, algorithmic harassment and manipulation, ongoing climate catastrophe, the looting of public and common goods via "disruptive" privatization, exploitative "development," cruel "resilience," and upward-failing techbro "Thought Leadership."
Sunday, December 13, 2015
Sunday, December 06, 2015
Three Fronts in the Uploading Discussion -- A Guest Post by Jim Fehlinger
- Longtime friend and friend-of-blog Jim Fehlinger posted a cogent summarizing judgment (which doesn't mean concluding by any means) of the Uploading discussion that's been playing out in this Moot non-stop for days. I thought it deserved a post of its own. In the Moot to this post, I've re-posted my responses to his original comments to get the ball rolling again. I've edited it only a very little, for continuity's sake, but the link above will take the wary to the originals.--d
It strikes me that this conversation (/disagreement) has been proceeding along three different fronts (with, perhaps, three different viewpoints) that have not yet been clearly distinguished:
1. Belief in/doubts about GOFAI ("Good Old-Fashioned AI") -- the 50's/60's Allen Newell/Herbert Simon/Seymour Papert/John McCarthy/Marvin Minsky et al. project to replicate an abstract human "mind" (or salient aspects of one, such as natural-language understanding) by performing syntactical manipulations of symbolic representations of the world using digital computers. The hope initially attached to this approach to AI has been fading for decades. Almost a quarter of a century ago, in the second edition of his book, Hubert Dreyfus called GOFAI a "degenerating research program":
Almost half a century ago [as of 1992] computer pioneer Alan Turing suggested that a high-speed digital computer, programmed with rules and facts, might exhibit intelligent behavior. Thus was born the field later called artificial intelligence (AI). After fifty years of effort [make it 70, now], however, it is now clear to all but a few diehards that this attempt to produce artificial intelligence has failed. This failure does not mean this sort of AI is impossible; no one has been able to come up with a negative proof. Rather, it has turned out that, for the time being at least, the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed. Indeed, what John Haugeland has called Good Old-Fashioned AI (GOFAI) is a paradigm case of what philosophers of science call a degenerating research program.
[That research program i]s still degenerating, as far as I know.
A degenerating research program, as defined by Imre Lakatos, is a scientific enterprise that starts out with great promise, offering a new approach that leads to impressive results in a limited domain. Almost inevitably researchers will want to try to apply the approach more broadly, starting with problems that are in some way similar to the original one. As long as it succeeds, the research program expands and attracts followers. If, however, researchers start encountering unexpected but important phenomena that consistently resist the new techniques, the program will stagnate, and researchers will abandon it as soon as a progressive alternative approach becomes available.
Dale and I agree in our skepticism about this one. Gareth Nelson, it would seem (and many if not most >Hists, I expect) still holds out hope here. I think it's a common failing of computer programmers. Too close to their own toys, as I said before. ;->
2. The notion that, even if we jettison the functionalist/cognitivist/symbol-manipulation approach of GOFAI, we still might simulate the low-level dynamic messiness of a biological brain and get to AI from the bottom up instead of the top down. Like Gerald Edelman's series of "Darwin" robots or, at an even lower and putatively more biologically-accurate level, Henry Markram's "Blue Brain" project.
Gareth seems to be on-board with this approach as well, and says somewhere above that he thinks a hybrid of the biological-simulation approach and the GOFAI approach might be the ticket to AI (or AGI, as Ben Goertzel prefers to call it).
Dale still dismisses this, saying that a "model" of a human mind is not the same as a human mind, just as a picture of you is not you.
I am less willing to dismiss this on purely philosophical grounds. I am willing to concede that if there were digital computers fast enough and with enough storage to simulate biological mechanisms at whatever level of detail turned out to be necessary (which is something we don't know yet) and if this sufficiently-detailed digital simulation could be connected either to a living body with equally-miraculously (by today's standards) fine-grained sensors and transducers, or to a (sufficiently fine-grained) simulation of a human body immersed in a (sufficiently fine-grained) simulation of the real world -- we're stacking technological miracle upon technological miracle here! -- then yes, this hybrid entity with a human body and a digitally-simulated brain, I am willing to grant, might be a good-enough approximation of a human being (though hardly "indistinguishable" from an ordinary human being, and the poor guy would certainly find verself playing a very odd role indeed in human society, if ve were the first one). I'm even willing to concede (piling more miracles on top of miracles by granting the existence of those super-duper-nanobots) the possibility of "uploading" a particular human personality, with memories intact, using something like the Moravec transfer (though again, the "upload" would find verself in extremely different circumstances from the original, immediately upon awakening). This is still not "modelling" in any ordinary sense of the word in which it occurs in contemporary scientific practice! It's an as-yet-unrealized (except in the fictional realm of the SF novel) substitution of a digitally-simulated phenomenon for the phenomenon itself (currently unrealized, that is, except in the comparatively trivial case in which the phenomenon is an abstract description of another digital computer).
However, I am unpersuaded, Moravec and Kurzweil and their fellow-travellers notwithstanding, that Moore's Law and the "acceleration of technology" are going to make this a sure thing by 2045. I am not even persuaded that we know enough to be able to predict that such a thing might happen by 20450, or 204500, whether by means of digital computers or any other technology, assuming a technological civilization still exists on this planet by then.
The physicist Richard P. Feynman, credited as one of the inventors of the idea of "nanotechnology", is quoted as having said "There's plenty of room at the bottom." Maybe there is. Hugo de Garis thinks we'll be computing using subatomic particles in the not too distant future! If they're right, then -- sure, maybe all of the above science-fictional scenarios are plausible. But others have suggested that maybe, just maybe, life itself is as close to the bottom as our universe permits when it comes to, well, life-like systems (including biologically-based intelligence). If that's so, then maybe we're stuck with systems that look more-or-less like naturally-evolved biochemistry.
3. Attitudes toward the whole Transhumanist/Singularitarian mishegas. What Richard A. L. Jones once called the "belief package", or what Dale commonly refers to as the three "omni-predicates" of >Hist discourse: omniscience=superintelligence; omnipotence=super-enhancements (including super-longevity); omnibenevolence=superabundance.
This is a very large topic indeed. It has to do with politics, mainly the politics of libertarianism (Paulina Borsook, Cyberselfish; Barbrook & Cameron, "The Californian Ideology"), religious yearnings (the "Rapture of the Nerds"), cult formation (especially sci-fi tinged cults, such as Ayn Rand's [or Nathaniel Branden's, if you prefer] "Objectivism", L. Ron Hubbard's "Scientology", or even Joseph Smith's Mormonism!), psychology (including narcissism and psychopathy/sociopathy), and other general subjects. Very broad indeed!
Forgive me for putting it this insultingly, but I fear Gareth may still be savoring the Kool-Aid here.
Dale and I are long past this phase, though we once both participated on the Extropians' mailing list, around or before the turn of the century. When we get snotty (sometimes reflexively so ;->), it's the taste of the Kool-Aid we're reacting to, which we no longer enjoy, I'm afraid.
Friday, December 04, 2015
The Immaterialism of Futurological Materialism
Whereupon he reacted, with robotic predictability:
Bit of a non sequitur that. I say the internal implementation does not matter so long as the external behaviour still yields intelligence, in what way does that contradict materialism? If anything, claiming that it matters whether there's neurons or silicon chips implementing intelligent behaviour is claiming there's something important about neurons that goes beyond their material behaviour.

An actual materialist should grasp that the actually-existing material incarnation of minds, like the actually-existing material carrier of information, is non-negligible to the mind, to the information. The glib treatment of material differences as matters of utter indifference, as perfectly inter-translatable without loss, as cheerfully dispensable is hardly the attitude of a materialist. One might with better justice describe the attitude as immaterialist.
Once again, you airily refer to "silicon chips implementing intelligent behavior" when that has never once happened and looks nothing like something about to happen and the very possibility of which is central to the present dispute. However invigorating the image of this AI is in your mind -- it is not real, nor is it a falsifiable thought-experiment, nor is it a destiny, nor is it a burning bush, nor is it writing on a wall, and those of us who fail to be moved as you are by this futurological fancy are not denying reality, its stipulated properties -- however fervently asserted by its futurological fanboys -- are not facts in evidence. In response to this charge you will deny, as you have done every other time I have made it, that you are in fact claiming AI is "real" or would be "easy" -- but time after time after time you conjure up these fancies in making your rhetorical case and attribute properties to them with which skeptics presumably have to deal, just because you want them to be true so fervently. One might just as well argue about how many angels can dance on a pinhead.
And then, too, once again, in this formulation you insinuate my recognition that such real-world intelligence that actually exists all happens to be materialized in biological organization amounts to positing something magical or supernatural about brains. No, Gareth: the intelligence that exists is biological and the artificial intelligence to which you attribute all sorts of pet properties does not exist. To entertain the logical possibility that phenomena legible to us as intelligent might be materialized otherwise does not mean that they are, that we can engineer them, or that we know enough about the intelligence we materially encounter to be of any help were we to want to engineer intelligence otherwise. None of that is implied in the realization that there is no reason to treat intelligence as somehow supernatural. None of it. You may need to have a good cry in your pillow for a moment after that sinks in before we continue. It's fine, I'll wait.
Now, again, a "materialism" about mind demands recognition that the materialization of such minds as are in evidence is biological. That intelligence could be materialized otherwise is possible, but not necessarily plausible, affordable, or even useful. Maybe it would be, maybe not. Faith-based techno-transcendental investment of AI with wish-fulfillment fantasies of an overcoming of the scary force of contingency in life, an arrival at omnicompetence no longer bedeviled by the humiliations of error or miscommunication, the driving of engines of superabundance delivering treasure beyond the dreams of avarice, or offering up digital immortalization of an "info-soul" in better-than-real virtuality may make AI seem so desirable that techno-transcendentalists of the transhumanoid, singularitarian kinds want to pretend we know enough to know how to build it when we do not, but that has nothing to do with science or materialism. Gareth and his futurological friends' attitudes look to be common or garden variety religiosity of the most blatant kind, if I may say so. And even if the faithful wear labcoats rather than priest's vestments, it's not like we can't see it's all still from Party City.
The human mind is not immune from scientific investigation and understanding, and neither is the brain (the physical implementation of the mind). That should be a fairly uncontroversial viewpoint. I simply go one further and say that human brains are not immune from simulation, and simulating a brain would automatically get you a mind.

No one has denied that intelligence can be studied and better understood. I do wonder whether Gareth's parenthetic description of the brain as "the physical implementation of the mind" already sets the stage for his desired scene of an interested agent implementing an intelligence when there is actually no reason to assume such a thing where the biologically incarnated mind is concerned. People in robot cults should possibly take care before assuming the air of adjudicating just which disputes are scientifically controversial or not, by the way. When he goes on to say "I simply go one [step] further" in turning to the claim that simulating a brain automatically gets you a mind I disagree that there is anything "simple" about that leap, or that it is in any sense a logical elaboration of a similar character to the preceding (as he implies by the word "further"). Not only does simulating a brain not obviously or necessarily "automatically" get you a mind, it quite obviously and necessarily does not get you the mind so simulated. To say otherwise is not materialist, but immaterialist -- but worse it is palpably insane. You are not a picture of you, and a picture of a brain is not a brain, and a moving picture of a mind's operation in some respects is not the mind's operation. You may be stupid and insensitive enough not to see the difference between a romantic partner and a fuck doll got up to look like that romantic partner, but you should not necessarily expect others to be so dull if you bring your doll to meet the family or hope to elude prosecution for murdering your partner when the police come calling.
PS: In Section Three of Futurological Discourses and Posthuman Terrains I connect such pathologically robocultically extreme, immaterialist ideology as I ridicule here to more prevailing, mainstream neoliberal futurology in which immaterialist ideology plays out in, for example, celebrations or at any rate justifications for fraudulent financialization in global, digital developmentalist corporate-military think-tank discourse.
Saturday, November 28, 2015
My Silly Skepticism About AI and Uploading
- The following is edited and adapted from the Moot to this post, an exchange with "Gareth Nelson" (his contributions are italicized, follow the link for the unedited version and context) very much like countless exchanges I have had with cocksure robocultic AI-deadenders over the past few decades, but what the hell, new readers may not be bored unto death at such re-hashes as I am. By all means follow the link and make your own contributions, if you like.
Dale -- you've said (and I love this quote of yours) "a picture of you is not you", which is entirely true and I do not claim it is at all possible to "transfer" conciousness from a brain into a computer. But if we stick with the picture analogy, I would argue that a copy of a picture is still as useful for the same purposes. When we look at a beautiful work of art, we derive pleasure from appreciating the skill of the artist... assuming the copy is of high quality we can use it for the same purposes -- appreciating the beauty.
Setting aside the obvious fact that collectors spend millions for originals while disdaining reproductions for reasons that are not entirely dismissable as snobbery: I have no objection to the fact that some people might want to believe they get the same value from a recent digitally animated Audrey Hepburn avatar selling a candy bar as they do from her actual performance in Sabrina; I have no objection to some pervbro who wants to believe his blow up fuck doll provides as rich a relationship as he is capable of enjoying with a human partner; I have no objection to somebody who wants to believe that they make some profound connection with the Great Emancipator via his stiff animatronic duplicate in Disney World's Hall of Presidents. Hey, there's no accounting for taste.
If we say that the purpose of a brain is to yield intelligent behaviour (and secondly to control a body -- but we generally value people for what's in their cerebral cortex, not their brain stem) then a copy of a brain that yields intelligent behaviour serves the purpose just fine, at least for other people.
I'm an atheist so I don't believe the brain exists for a "purpose" in the way you seem to mean. This is not a quibble, because the theology here already figures intelligence as purposively designed in a way that smuggles your erroneous conclusions into your framing of your position in that very dispute. Your second framing of the brain as "controlling" the body is also considerably more problematic and prejudicial than you seem to realize. The brain IS the body, not a separate or superior supervisor of it. There is a whiff here of the very dualism you falsely attribute to opponents of your faith-based formulation of the "info-soul." I also think this business of introducing "control" into the picture so early is rather symptomatic, but we needn't go into all that. I do hope you see a therapist on a regular basis.
- If you're a true materialist you accept that the brain is just a physical object with some complex chemical and electrical processes being responsible for its behaviour. It stands to reason that modelling those processes accurately should allow the same behaviour.
Not only does this assertion not "stand to reason," but it is a patent absurdity. I am a materialist in the matter of red wagons, but I hardly think a computer modeling a red wagon would be one, even if it might generate an image I would recognize as the representation of one. I certainly would not expect a modeled red wagon to be capable of all the things a red wagon is, nor (knowing what I do know of computer modeling) would I expect that those shared recognizably red wagonish effects would be achieved in the same way by the red wagon and the red wagon model. Not incidentally, I do not agree that we know at present that the material processes that give rise to the experience of thought (including the experience of witnessing its exhibition in others) are reducible to only those chemical and electrical processes in the brain -- and also possibly elsewhere in the body -- that we presently know and in the way we presently know them. They certainly might, but our present accounts are hilariously far from sufficient to pretend we know for sure. And there is no need in the least to invoke supernatural phenomena to recognize the highly provisional status of much of our present understanding of brain processes and to treat grandiloquent extrapolations from our present knowledge onto futurological imagineering predictions with extreme skepticism and their confident proponents as ridiculous.
You could get really silly and claim that a model of a human brain which "seems" intelligent is actually just simulating intelligence and the model is just accurately predicting the behaviour of a human brain and outputting that behavioural prediction, but then you're just arguing semantics.
How terrible it would be to elicit the judgment that I am being "silly" from you of all people! You are acting as though AI or simulated apparent persons are actual accomplishments, not futurological fancies, and that my skepticism about their realization given the poverty of our understanding is some kind of a denial of facts in evidence. You'll forgive me, but it is not the least bit silly nor merely semantic for me to point out that AI is not in evidence, that AI champions are always certain AI is around the corner when it serially fails to arrive, that our understanding of intelligence is incomplete in ways that seem likely to bedevil the construction of actually intelligent/agentic artifacts for some time, and that AI discourse and the subcultures of its enthusiasts have always been and remain indebted to pathological overconfidence, uninterrogated metaphors, troubling antipathies to materiality and biology, sociopathic aspirations of mastery, control, omniscience none of which bode well for the project to which they are devoted.
Friday, November 27, 2015
Path of the Critique of Commodity Fetishism
Monday, November 23, 2015
What They Fear
Ebola, terrorism, immigration, PC-police are all proxies. Republicans are scared to death and they are acting like it's the end of the world. They are right. White racist patriarchal extractive-industrialism cannot survive a diversifying, secularizing, planetizing society. They feel cornered. They think the final battle is upon them. They experience progressive change as an existential threat. It is because they have already lost that they can mobilize the imaginative and organizational resources to do their last bit of mischief. White racist patriarchal Republicanism isn't just, Godwin do forgive me, Hitler but Hitler in the bunker now. They cannot win but do not underestimate the damage they can do in losing: victory in a smoking planetary ruin traumatized beyond healing.
Friday, November 20, 2015
Crypto-Dildo
Monday, November 16, 2015
Sins of the Futurologists
Chapter One
Sins of the Futurologists: Life expectancy at retirement age -- esp. at the lower end of the income distribution -- is not increasing, and yet glib futurological declarations to the contrary, genuflecting vacuously to Boomer-daydreams of face-lifts, boner pills and sooper meds, are repeated endlessly and now provide an unchallenged basis for attacks on social security and calls to raise the retirement age -- a case with intuitive plausibility for pampered gerontocratic US Senators who live no longer than did many Senators of ancient Rome, who seem quite uninterested in the dilemmas of working-class majorities with hollowed-out finances and accumulated health and stress issues, and who see nothing but dollar signs (every year added to the retirement age steals 7% of benefits owed citizens) rationalized by cyborg daydreams.
Chapter Two
Sins of the futurologists: When ExxonMobil CEO Rex Tillerson declared climate change will have an "engineering solution" he indulged in the futurological conceit of "geo-engineering," in the futurological genre of the imaginary technofix of sociopolitical problems. To those who know the genre it will come as no surprise that Tillerson's glib recourse to daydream megatech solutionism was accompanied by condemnation of climate activism and regulation as "alarmism" and Big Government/Socialist opportunism.
Although many are scandalized to discover that petro-companies indulged in profitable climate-change denialism when they knew better, it is crucial to grasp that concession of facts of climate change coupled to geo-engineering solutionism and refusal of political action (note that this refusal often takes the form of oh so despondent libertopian orthodoxy resigned to the *inevitable* failure of politics) is just next-stage climate-change denialism, a continuation of profitable extractive-industry (now incl. remediation r&d) via futurology.
Chapter Three
Sins of the futurologists: promises of redemptive techno-abundance as against struggles for realizable abundance, equity-in-diversity constitute the smoking ruins we now sift to survive: from false redemption of the sin of Hiroshima in nuclear energy "too cheap to meter," to I've got one word for you -- "plastic" -- phony crap abundance on the cheap, now accumulating in toxic landfills, to Futurama dreams of car culture snarled in jams, poison clouds of lead and smog, white-racist flight to eco-catastrophic suburban lawns, to fantasies of sustainable design without sustainable politics, digital democracy with corporate-military surveillance and zero comments, saucer-eyed promises of ubicomp abundance nano-abundance 3D-printer abundance as disorganized labor power dies & social supports crumble.
Chapter Four
Sins of the futurologists: Serially failed, cocksure proponents of artificial intelligence, with their disembodied sociopathic models and their pretense that projections from our palpably incomplete understanding of actual biological intelligence are firm foundations keep describing as "smart" and as "intelligent" artifacts that obviously exhibit no autonomy or intelligence whatsoever with the consequences that first, we lose sight of the actual intelligence of human and nonhuman animals in ways that loosen our grip on supports for their dignity, second, we valorize inept/inapt designs (clumsy, sociopathic algorithms) to which we attribute intelligence or glimpse its "holy" coming, third, we stop tracking responsibilities, refusing to hold designers owners users accountable for military crime & software-abetted fraud, fourth, we celebrate plutocratic skim-and-scam operators who monetize crowdsourced problem-solving, labor and enthusiasm, cheerleading reactionary celebrity techbro-CEOs like Bill Gates, Peter Thiel and Elon Musk for their PR "accomplishments" pretending that upward failing is "innovation" and deregulatory privatization is "disruption" and social darwinism is "resilience" and that the cyberspace feeding on coal-smoke, accessed on toxic devices made by wage slaves is a digital "spirit realm," Home of Mind, while they rake in their billions promising history-shattering sooper-AI Robot Gods who will solve all our problems for us or, precisely as lucratively, warning us against robocalypse Robot Gods reducing the world to goo, so keep those r&d dollars coming folks! So Futurological AI-ideologues derange sensible consideration of network security or user-friendliness into heaven/hellscape vaporware.
Monday, November 09, 2015
The Profitable Vacuity of Futurological "Enhancement" and "Intelligence"
There is a crucial continuity in the errors and deceptions committed by transhumanist/singularitarian futurist sub(cult)ural projects. The "enhancement" of eugenic transhumanists and the "intelligence" of singularitarian AI are evacuated of indispensable normativity.
"Enhancement" is always enhancement… For whom? From where? In the service of what? At what cost? "Intelligence," too, is always intelligence… For whom? From where? In the service of what? At what cost? There is no such thing as "enhancement" or "intelligence" as such, out of nowhere, for all ends, without costs.
Eugenic and singularitarian "techno-transcendence" is in fact a disavowal of the worldly substance of struggle and sense. In this, techno-transcendental futurisms are a *reductio* of mainstream, authoritative developmentalist discourses and marketing forms.
Prevalent neoliberal futurist discourses refigure, the better to deny, indispensably political categories like freedom and history, rendering them instrumentalities and hence indifferent, accumulative, projective, amplifying -- and so mattering, struggle, responsibility, transformation are rendered illegible to public deliberation and testimony. Our capacity to grasp the stakes of lived technodevelopmental social struggle is deranged, but the more proximate result (probably the only one that matters to those who indulge it) is its naturalization of elite-incumbency.
Tech talk confuses our understanding of the shared/contested public world, then peddles status-quo amplification as emancipatory history. Eugenicism/Roboticism/Neoliberalism support one another as justificatory corporate-militarist rationales, Market Futures remind us that The Market and The Future are co-constructed and co-extensive imaginaries. The futurological denigration of SF, for instance, would reduce its speculation to legibility under the horizon of financial speculation, would reduce sf's world-disclosing testimony & world-making provocation to the futurological scenario, to ads for financial development, then pout and stamp that it lacks the cruel compulsory optimism of "positivity" or bigot-reassuring lies of "political incorrectness." There are no sadder puppies in all the world than the libertechbrotarian Robot Cultists and "Thought Leaders" of the VC silly con.
Friday, October 30, 2015
Reconciliation, Translation, Reflection
I am an atheist, and yet I find that I can sympathize with religious or spiritual practices to the extent that I can translate them into aesthetic terms. I am a socialist, and yet I find that I can sympathize with anarchist aspirations to the extent that I can translate them into democratizing terms. It is an open question to me whether these translations are more respectful than disrespectful.
Thursday, October 29, 2015
Possibly Something To Do With The Mirror Universe?
Monday, October 19, 2015
Smart Car, Kill Thyself
Sunday, October 18, 2015
Look, Ma, A Technologist!
Saturday, October 17, 2015
The Graeber and Thiel Non-Debate on Technological Progress
Watching the Graeber/Thiel "debate" I'm struck once again by the affinity of "tech-talk" with fantasies of spontaneous order left & right. "Tech" discourses encourage catastrophic re-framing of political categories as instrumental ones: especially freedom as capacitation. Both Thiel and Graeber break the techno-transcendental accelerationist orthodoxy and admit the dark secret of recent stagnation, but it's hard to see either Thiel's Randroidal Great Man stifled by mehums or Graeber's bureaucratization thesis as serious proposals. (Tho' I'll grant both theses are nice promotional tie-ins with their recent books, the flogging of which may be the purpose of the exercise.)
Thiel's stagnation concession is curious, in a way, given his own public reliance on transhumanoid/singularitarian articles of faith. Graeber's initial framing of his disappointment in terms of a failure of tech to deliver light-speed, teleporters, magicmeds and so on may suggest that in his youth he accepted no small amount of that old time techno-transcendental religion himself. For me it matters both that that very vision of "tech"/"progress" was always profoundly incoherent but also that MANY always knew that.
Prior to the arrival of our present stagnation, when advances were transforming hopes and expectations in nearly revolutionary ways, you wouldn't find Lewis, Ellul, Arendt or Mumford falling for techno-transcendental framings of technodevelopmental social struggle.
This may be blunt, but if you expected teleporters or galactic travel because we got the Pill and landed on the Moon (Graeber?) or expect a Robot God to end History or nanobots to give us treasure caves or uploading to Holodeck Heaven (Thiel?) it isn't bureaucracy or multiculturalism or anti-Business moochers that have stifled your futurological hopes but the fact that your "hopes" are at best stupid and crazy and are at worst deceptions and frauds (I'm lookin' at you, Thiel).
I've no doubt saying so will invite howls that I lack the visionary visionality, sticktoitiveness, etc, required to be a Thought Leader.
Guilty.
But I'm pretty sure that we live in the great stagnation because incumbent elites realized they could sell stasis as accelerating change. I believe we don't have technoscience progress for the same reason infrastructure is crumbling: because these are common/public goods.
I believe Reagan said government is the problem, Clinton said the era of Big Government is over, and Bush blew everything on war-crimes. I believe that in their own ways Thiel and Graeber agree too much with Reagan/Clinton (& futurist Gore) to solve the problem at hand.
I'll add they share too much techno-religiosity to see progress clearly, quite apart from their differently reactionary anti-statisms. I mean, if you want humans to go to Mars for discovery you need a real space program: neither a Muskian for-profit LEO amusement park, nor a scaled-up People's Mic. Space programs aren't Hackathons or drum circles.
But quite apart from that, if you want to go to Mars to have a neo-colonial wet-dream or escape human pollution or human irrationality, I'm sorry to be the one to tell you, but you want fundamentally incoherent, infantile and irresponsible things. That isn't visionary. We could have invested in renewable infrastructure and healthcare research under the auspices of a public healthcare program. We could have supported, educated, invested in ALL our citizens rather than retreat into costly criminal white supremacy and patriarchy.
Instead, in the neoliberal epoch, Reagan shut down public investment and doubled down on the Southern Strategy while Clinton/Gore were cheerleaders for irrationally exuberant fake digital/financial skim/scam "New Tech" coupled to mass incarceration/welfare reform.
Progress isn't an indifferent accumulation of gizmos but an equitable distribution of technoscientific costs/risks/benefits in diversity. Progress -- even technoscientific progress -- is a political not an instrumental phenomenon. Anti-political tech talk is blind to this.
Progress is also a worldly phenomenon: techno-transcendental re-framings of its stakes/aspirations confuse us all and facilitate fraud. Stasis will end when and as majorities struggle democratically for sustainable equity-in-diversity & a public address of shared problems.
Thursday, October 15, 2015
The Dreamtime of the Driverless Car
In the Bloomberg Business advertorial (as expected, every single "journalism" outlet coughed up some promotional hairball flogging the Tesla press release for free in the name of what passes for technology news) the car's new features were framed in customary futurological narrative mode. The first sentence of the first paragraph casts the software package as the materialization of a long-deferred dream: "Tesla Motors Inc. will begin rolling out the first version of its highly anticipated Autopilot features to some owners of its all-electric Model S sedan Thursday."
Since the infantile fetishists thronging gizmo-fandoms do indeed wait in long lines to purchase the latest landfill-destined models of this or that handheld gadget, I concede that this first sentence is factual enough as far as these things go, and do not doubt that visions of Tesla Autopilot sugarplums have been dancing in the heads of cheerleaders musky for the latest low-earth-orbit SpaceX amusement park ride or coffin train Hyperloop cartoon gifted to the world by our soopergenius savior celebrity CEO.
However, as a rhetorician I have to point out that the real argumentative heavy lifting performed by the framing of a product as the fulfillment of a collective dream, the arrival into The Future promised by futurists past, is that it offers up narrative collateral investing the futurological dream of the sentence following it with the plausibility and force to make the bigger sale: "Autopilot is a step toward the vision of autonomous or self-driving cars, and includes features like automatic lane changing, auto steering and the ability to parallel park itself."
Enraptured by this "vision" you may have overlooked that none of the features actually listed there is new -- some amount to the phony novelty of marketing neologisms repackaging features decades old, like slightly souped-up 70s-era cruise control, while others have been available for a few years now in other cars and offer a mixed record of welcome minor conveniences as well as troubling new occasions for accidents, like automatic parallel parking features.
Of course, there is nothing like futurology to distract you from the disappointment and even danger of present offerings by recasting them as stepping stones to future satisfactions in which you are somehow participating aspirationally now, even if in the form of disappointment and danger. Consumer capitalism does few things better than tricking us into paying for the dissatisfaction of deferred satisfactions as satisfaction (a deferral that ends in our deaths, by the way, and eventually, very possibly, in the death of our planet).
As with the futurological nightmare of truly autonomous weapons systems, or Killer Robots, exclamation point, the futurological daydream of truly autonomous automobiles, or Driverless Cars, exclamation point, is far from reality -- of a piece with the general denigration of intelligence, recognition of which is indispensable to the support of human dignity, in faith-based futurist discourses of "artificial intelligence," as well as with the general demoralization of intelligence distracted by small screens and harassed by targeted marketing and scoring -- but quite apart from that reality on the ground, the "vision" of autonomous artifice as a documentary and justificatory rhetoric is palpably ideological, functioning to distract our attention away from the risks and costs of parochially profitable technodevelopmental changes and especially away from any grasp of the culpability of the investors, owners, designers, coders, marketers, sellers of artifacts in the suffering and death that accompanies that parochial profitability by divesting actual actors of agency and imaginatively investing artifacts with agency.
Given the long-held American romance with cars as cyborg shells -- a romance adjacent and often entangled with fantasies of gun-ownership and open-carry prostheses -- at once "enhancing" and ruggedizing us as individuals ready to compete for positional advantage or more usually momentary survival in a Hobbesian-Darwinian marketplace (the never-needed four-wheel-drive wilderness vehicle or the unsafe-security-theater-massiveness of the mini-van enlisted for the work commute, the exurban shopping trek, the flight to heteronormative suburbia) as well as providing avenues for comfortably conformist pseudo-rebellions against the exactions of this relentless competition (the road trip of the youth not yet or the retiree no longer defined by wage-slavery, the vestigial frontier of the lonely highway, the alluring transcendental myth of traffic flow), it is initially hard to see how the relinquishment of agency promised by the "driverless car" exerts its ideological tug in the first place.
Rather like the plummeting sticker-value of a freshly purchased car the second it is driven off the lot, perhaps there is a likewise instantaneous plummeting of a car's dream-value the moment it is snarled in a traffic jam, resounds with the collision of an empty shopping cart, bleeps an engine-temperature warning, or the needle edges its way all too soon toward empty. But surely the deeper ideological work of "the driverless car" is that it provides a discursive space in which one can concede the conspicuous catastrophes of car culture -- the pollution and waste and unsustainable suburban sprawl, the white-racist demolition of thriving diverse neighborhoods to make way for highways and overpasses facilitating the fiscal and institutional abandonment of majorities living in our cities -- catastrophic outcomes inspired by an earlier generation of futurists, hell, some of history's most influential futurists actually called their dream of car culture Futurama -- all the while disavowing the need to address these catastrophes in a substantive way: The Driverless Car is the futurological promise that we will save ourselves from car-culture by saving... car culture!
Why change public policies or budgetary priorities to facilitate dense diverse walkable neighborhoods and bike lanes and public transportation and continental rapid rail when you can pretend instead that simply purchasing millions more cars year after year as we have done year after year -- sure, so soon to be hybrid and electric, so soon to be artificially intelligent, so soon to be driven entirely by precarious, disorganized, unregulated drivers or, I suppose, "AIs" summonable with a digital handheld app, or so the story goes, as if that means anything real or would mean anything good even were it to become real in any measure -- purchasing millions more demanding, costly, lethal, indistinguishable cars, inching day by day through jammed traffic and an amplifying status quo toward The Future of flaming wreckage, bleak cube-stack mountains, and toxic landfills -- that somehow, somehow, this treadmill will take us somehow somewhere new, will address somehow our existential car culture grievances, will solve somehow our planetary car culture problems.
I predict...
It won't.