Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, January 28, 2015

The Yearning Annex: Google Commits Millions for Robot Cult Indoctrination in Plutocratic Venture-Capitalist Dystopia

Also posted at the World Future Society.
Their prose was all purple, there were VCs running everywhere, tryin' to profit from destruction, you know we didn't even care.
via Singularity Hub (h/t David Golumbia):
Google, a long-time supporter of Singularity University (SU), has agreed to a two-year, $3 million contribution to SU's flagship Graduate Studies Program (GSP). Google will become the program's title sponsor and ensure all successful direct applicants get the chance to attend free of charge. Held every summer, the GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies. Participants spend a fast-paced ten weeks learning all they need to know for the final exam—a chance to develop and then pitch a world-changing business plan to a packed house.
"Exponential technologies" is a short hand for the false and facile narrative superlative futurologists spun from Moore's Law -- the observation in 1965 (the year I was born) that the number of transistors on an integrated circuit had been roughly doubling every two years, and the paraphrase of that observation into a law-like generalization that chip performance more or less doubles every two years -- into the faith-based proclamation that this processing power will inevitably eventuate in artificial intelligence, and soon thereafter a history shattering super-intelligence that will control self-replicating programmable nanoscale robots that will provide a magical superabundance on the cheap and deliver near immortality through prosthetic medical enhancement and the digital uploading of "informational soul-selves" into imperishable online paradises.

The arrival of superintelligent artificial intelligence is denominated "the Singularity" by these futurologists, a term drawn from the science fiction of Vernor Vinge, as are the general contours of this techno-transcendental narrative, taken up most famously by one-time inventor and now futurological "Thought Leader" Ray Kurzweil and a coterie of so-called tech multimillionaires like Peter Thiel, Elon Musk, and Jaan Tallinn, all looking to rationalize their good fortune in the irrational exuberance of the tech boom and secure their self-declared destinies as protagonists of post-human history by proselytizing and investing in transhumanist/singularitarian eugenic/digitopian ideology across the neoliberal institutional landscape at MIT, Stanford, Oxford, Google, and so on.

That most of these figures are skim-and-scam artists with little sense and too much money on their hands goes without saying, as does the obvious legibility of their "technoscientific" triumphalism as a conventional marketing strategy for commercial crap (get rich quick! anti-aging! sexy-sexy!) amplified into a scarcely stealthed fulminating faith re-enacting the theological terms of an omni-predicated godhead delivering True Believers eternal life in absolute bliss with perfect knowledge. Not to put too fine a point on it: the serially-failed program of AI doesn't become more plausible by slapping "super" in front of the AI, especially when the same sociopathic, body-loathing, digi-spiritualizing assumptions remain in force among its adherents; exponential processing power checked by comparably ballooning cruft is on a road to nowhere, let alone to transcendence; and since a picture of you isn't you, and cyberspace is buggy and noisy and brittle, hoping to live there forever as an information spirit is pretty damned stupid even if you call yourself a soopergenius.

Since the super-intelligent and nanotechnological magicks on which techno-transcendentalists pin their real hopes are not remotely in evidence, these futurologists tend to hype the media and computational devices of the day, celebrating algorithmic mediation and Big Data framing and kludgy gaming virtualities like Oculus Rift and surveillance media like the failed Google Glass and venture capitalist "disruption" like Airbnb and Uber. That this is the world of hyping toxic, wage-slave-manufactured, landfill-destined consumer crap and reactionary plutocratic wealth concentration via the looting and deregulation of public and common goods coupled with ever-amplifying targeted marketing harassment and corporate-military surveillance should give the reader some pause when contemplating the significance of declarations like "GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies."

The press release suavely reassures us that "Google is, of course, no stranger to moon shot thinking and the value of world-shaking projects." I think it is enormously important to pause and think a bit about what that "of course" is drawing on and standing for. It should be noted what "moon shot thinking" amounts to in a world that hasn't witnessed a moonshot in generations. There are questions to ask, after all, about Google's "world-shaking projects" advertorially curating all available knowledge in the service of parochial profit-taking, all the while handwaving about vaporware like immortality meds and driverless car-culture and geo-engineering greenwash. There are questions to ask about the techno-utopian future brought about by a "grad school" at a "university" for which "the final exam" is "a chance to develop and then pitch a world-changing business plan to a packed house." I will leave delineating the dreary dystopian details to the reader.

Thursday, January 22, 2015

Syllabus for my Digital Democracy, Digital Anti-Democracy Course (Starting Tomorrow)

Digital Democracy, Digital Anti-Democracy (CS-301G-01)

Spring 2015 01/23/2015-05/08/2015 Lecture Friday 09:00AM - 11:45AM, Main Campus Building, Room MCR

Instructor: Dale Carrico; Contact: dcarrico@sfai.edu, ndaleca@gmail.com
Blog: http://digitaldemocracydigitalantdemocracy.blogspot.com/

Grade Roughly Based On: Att/Part 15%, Reading Notebook 25%, Reading 10%, In-Class Report 10%, Final Keywords Map 40%

Course Description:

This course will try to make sense of the impacts of technological change on public life. We will focus our attention on the ongoing transformation of the public sphere from mass-mediated into peer-to-peer networked. Cyberspace isn't a spirit realm. It belches coal smoke. It is accessed on landfill-destined toxic devices made by wretched wage slaves. It has abetted financial fraud and theft around the world. All too often, its purported "openness" and "freedom" have turned out to be personalized marketing harassment, panoptic surveillance, zero comments, and heat signatures for drone targeting software. We will study the history of modern media formations and transformations, considering the role of media critique from the perspective of several different social struggles in the last era of broadcast media, before fixing our attention on the claims being made by media theorists, digital humanities scholars, and activists in our own technoscientific moment.

Provisional Schedule of Meetings

Week One, January 23: What Are We Talking About When We Talk About "Technology" and "Democracy"?

Week Two, January 30: Digital,

Laurie Anderson: The Language of the Future
Martin Heidegger, The Question Concerning Technology 
Evgeny Morozov, The Perils of Perfectionism
Paul D. Miller (DJ Spooky), Material Memories 
POST READING ONLINE BEFORE CLASS MEETING

Week Three, February 6: The Architecture of Cyberspatial Politics

Lawrence Lessig, The Future of Ideas, Chapter Three: Commons on the Wires
Yochai Benkler, Wealth of Networks, Chapter 12: Conclusion
Michel Bauwens, The Political Economy of Peer Production
Saskia Sassen, Interactions of the Technical and the Social: Digital Formations of the Powerful and the Powerless 
My own, p2p Is Either Pay-to-Peer or Peers-to-Precarity 
Jessica Goodman, The Digital Divide Is Still Leaving Americans Behind
American Civil Liberties Union, What Is Net Neutrality
Dan Bobkoff, Is Net Neutrality the Real Issue?

Week Four, February 13: Published Public

Dan Gillmor, We the Media, Chapter One: From Tom Paine to Blogs and Beyond
Digby (Heather Parton), The Netroots Revolution
Clay Shirky, Blogs and the Mass Amateurization of Publishing
Aaron Bady, Julian Assange and the Conspiracy to "Destroy the Invisible Government"
Geert Lovink, Blogging: The Nihilist Impulse

Week Five, February 20: Immaterialism

John Perry Barlow, A Declaration of the Independence of Cyberspace
Katherine Hayles, Liberal Subjectivity Imperiled: Norbert Wiener and Cybernetic Anxiety
Paulina Borsook, Cyberselfish
David Golumbia, Cyberlibertarians' Digital Deletion of the Left
Richard Barbrook and Andy Cameron, The Californian Ideology
Eric Hughes, A Cypherpunk's Manifesto
Tim May, The Crypto Anarchist Manifesto

Week Six, February 27: The Architecture of Cyberspatial Politics: Loose Data

Lawrence Lessig, Prefaces to the first and second editions of Code
Evgeny Morozov, Connecting the Dots, Missing the Story
Lawrence Joseph Interviews Frank Pasquale about The Black Box Society
My Own, The Inevitable Cruelty of Algorithmic Mediation
Frank Pasquale, Social Science in an Era of Corporate Big Data
danah boyd and Kate Crawford, Critical Questions for Big Data
Bruce Sterling, Maneki Neko

Week Seven, March 6: Techno Priesthood

Evgeny Morozov, The Meme Hustler
Jedediah Purdy, God of the Digerati
Jaron Lanier, First Church of Robotics
Jalees Rehman, Is Internet-Centrism A Religion?
Mike Bulajewski, The Cult of Sharing
George Scialabba, Review of David Noble's The Religion of Technology

Week Eight, March 13: Total Digital

Jaron Lanier, One Half of a Manifesto
Vernor Vinge, Technological Singularity
Nathan Pensky, Ray Kurzweil Is Wrong: The Singularity Is Not Near
Aaron Labaree, Our Science Fiction Future: Meet the Scientists Trying to Predict the End of the World
My Own, Very Serious Robocalyptics
Marc Stiegler, The Gentle Seduction

Week Nine, March 16-20: Spring Break

Week Ten, March 27: Meet Your Robot God
Screening the film, "Colossus: The Forbin Project"

Week Eleven, April 3: Publicizing Private Goods

Cory Doctorow, You Can't Own Knowledge
James Boyle, The Second Enclosure Movement and the Construction of the Public Domain
David Bollier, Reclaiming the Commons
Astra Taylor, Six Questions on the People's Platform

Week Twelve, April 10: Privatizing Public Goods

Nicholas Carr, Sharecropping the Long Tail
Nicholas Carr, The Economics of Digital Sharecropping
Clay Shirky, Why Small Payments Won't Save Publishing
Scott Timberg, It's Not Just David Byrne and Radiohead: Spotify, Pandora, and How Streaming Music Kills Jazz and Classical
Scott Timberg Interviews David Lowery, Here's How Pandora Is Destroying Musicians
Hamilton Nolan, Microlending Isn't All It's Cracked Up To Be

Week Thirteen, April 17: Securing Insecurity

Charles Mann, Homeland Insecurity
David Brin, Three Cheers for the Surveillance Society!
Lawrence Lessig, Insanely Destructive Devices
Glenn Greenwald, Ewen MacAskill, and Laura Poitras, Edward Snowden: The Whistleblower Behind the NSA Surveillance Revelations
Daniel Ellsberg, Edward Snowden: Saving Us from the United Stasi of America

Week Fourteen, April 24: "Hashtag Activism" I

Evgeny Morozov Texting Toward Utopia 
Hillary Crosley Coker, 2013 Was the Year of Black Twitter
Michael Arceneaux, Black Twitter's 2013 All Stars
Annalee Newitz, What Happens When Scientists Study Black Twitter
Alicia Garza, A Herstory of the #BlackLivesMatter Movement
Shaquille Brewster, After Ferguson: Is "Hashtag Activism" Spurring Policy Changes?
Jamilah King, When It Comes to Sports Protests, Are T-Shirts Enough?

Week Fifteen, May 1: "Hashtag Activism" II

Paulina Borsook, The Memoirs of a Token: An Aging Berkeley Feminist Examines Wired
Zeynep Tufekci, No, Nate, Brogrammers May Not Be Macho, But That's Not All There Is To It; How French High Theory and Dr. Seuss Can Help Explain Silicon Valley's Gender Blindspots
Sasha Weiss, The Power of #YesAllWomen
Lisa Nakamura, Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital 
Yoonj Kim, #NotYourAsianSidekick Is A Civil Rights Movement for Asian American Women
Jay Hathaway, What Is Gamergate

Week Sixteen, May 8: Digital Humanities, Participatory Aesthetics, and Design Culture

Claire Bishop, The Social Turn and Its Discontents
Adam Kirsch, Technology Is Taking Over English Departments: The False Promise of the Digital Humanities
David Golumbia, Digital Humanities: Two Definitions
Tara McPherson, Why Are Digital Humanities So White?
Roopika Risam, The Race for Digitality
Wendy Hui Kyong Chun, The Dark Side of the Digital Humanities
Bruce Sterling, The Spime
Hal Foster, Design and Crime
FINAL PROJECT DUE IN CLASS; HAND IN NOTEBOOKS WITH FINAL PROJECT

Thursday, January 15, 2015

AI Isn't A Thing

People who flutter their hands over the "existential risk" of the theoretically impoverished, serially failed project of good old-fashioned artificial intelligence (GOFAI) or its techno-transcendental amplification into a post-biological super-intelligent Robot God (GOD-AI) think they are worried about a thing. They think they are experts who know stuff about a thing that they are calling "AI." They can get in quite a lather arguing over the technical properties and sociopolitical entailments of this thing with just about anybody who will let them.

But their "AI" does not exist. Their "AI" does not have properties. Their "AI" is not on the way.

Their "AI" is a bunch of fancies bounded by stipulations. Their "AI" stands in the loosest relation to the substance of real code and real networks and their real problems and real people doing real work on them here and now.

"AI" is a discourse, and it serves a primarily ideological function: It creates a frame -- populated with typical conceits, mobilizing customary narratives -- through which real problems and complex phenomena are being miscomprehended by technoscientific illiterates, acquiescent consumers, and wish-fulfillment fantasists. Ultimately, the assumptions and aspirations investing this frame have to do with the promotion and advertizing of commodities, software packages, media devices and the resumes of tech-talkers. At their extremity, these assumptions and aspirations mobilize and substantiate the True Belief of techno-transcendentalists given over to symptomatic fears of mortality, vulnerability, contingency, error, lack of control, but it is worth noting that the appeal to these irrational fears and passions merely amplify (in a kind of living reductio ad absurdum) the drives consumer advertizing and venture-capitalist self-promotion always cater to anyway.

Actually-existing biologically-incarnated consciousness, intelligence, and personhood look little like the feedback mechanisms of early cyberneticists and less still like the computational conceits of later neurocomputationalists. Bruce Sterling said nothing but the obvious when he pointed out that the brain is more like a gland than a computer. Living people don't look any more like the Bayesian calculators of alienated robocultic sociopaths than they look like the monomaniacal maximizers of political economy's no less sociopathic homo economicus.

So, of course, "The Forbin Project" and "War Games" and "The Terminator" and "The Lawnmower Man" and "The Matrix" are movies -- everybody knows that! Of course, our computers are not going to reach critical mass and "wake up" one day, any more than our complex and dynamic biosphere will do. Moore's Law is not spontaneously going to spit out a Robot God any more than an accumulating pile of abacuses would -- not least due to Jeron Lanier's corollary to Moore's Law: "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources."

Again, everybody knows all that. But can everybody be expected to talk or act like people who know these things? Sometimes, the exposure of the motives and hyperbole and deception of AI ideology will lead its advocates and enthusiasts to concessions but not to the relinquishment of the ideology itself. Even if we do not need to worry about making Hal our pal, even if AI will not assume the guise of a history-shattering super-parental Robot God... what if, they wonder, somebody codes some mindless mechanism that is satanic by accident or in the aggregate, like a vast robo-runaway bulldozer scraping the earth of its biological infestation, a software glitch that releases an ubergoo waveform transforming the solar system into computronium for crunching out pi for all eternity?

The arrant silliness of such concerns is exposed the moment one grasps that security breaches, brittle code, unfriendly interfaces, mindless algorithms resulting in catastrophic (and probably criminal) public decisions are all happening already, right now. There are people working on these problems, right now. The pet figures and formulations, the personifications, moralisms, reductions and triumphalisms of AI discourse introduce nothing illuminating or new into these efforts. If anything, AI discourse encourages its adherents to assess these developments not in terms of their actual costs, risks, and benefits to the diversity of their actual stakeholders, but to misread them as stepping stones along the road to The Future AI, signs and portents in which is glimpsed the imminence of The Future AI, thus distracting attention from the present reality of these problems toward an imagined future into which symptomatic fears and fancies are projected.

So, too, sometimes the exposure of the irrational True Belief of adherents of AI-ideology and the crass self-promotion and parochial profit-taking of its prevalent application in consumer advertizing and pop-tech journalism will lead its advocates and enthusiasts to different concessions. Sure, it turns out that Peter Thiel and Elon Musk are hucksters who pulled insanely lucrative skim-and-scam operations over on technoscientific illiterates and now want to consolidate and justify their positions by promoting themselves as epochal protagonists of history. And, sure, Ray Kurzweil and Eliezer Yudkowsky are guru-wannabes spouting a lot of pseudo-scientific pseudo-philosophical pseudo-theological nonsense while looking for the next flock to fleece. But what if there are real scientists and entrepreneurs and experts somewhere doing real coding and risking real dangers in their corporate-military labs, quietly lost in their equations, unaware that they are coding the lightning that will convulse the internet corpse into avid Frankensteinian life?

Of course, the very robocultic nonsense disdained in such recognitions has found its way to the respectability and moneybags of Google, DARPA, Oxford, Stanford, MIT. And so, to imagine some deeper institutional strata where the really serious techno-transcendental engines are stoked actually takes us into conspiratorial territory rather quickly. Indeed, this fancy is a mirror image of the very pining one hears from frustrated Robot Cultists who know all too well in their heart of hearts that nobody is out there materializing their daydreams/nightmares for them, and so one hears time and time again the siren call for separatist enclaves, from taking over tropical islands or building offshore pirate utopias on oil rigs to huddling bubbled under the sea or taking a buckytube space elevator to their private L5 torus or high-tailing it out to their nanobotically furnished treasure cave -slash- mad scientist lab in the asteroid belt to do some serious cosmological engineering.

Again, it is utterly wrong-headed to think there are serious technical types working on "AI" -- because there is nothing for them to be working on. Again, "AI" is just a metaphorization and narrative device that enables some folks to organize all sorts of complex technical and political developments into something that feels like sense but is much more about wishes than working. The people solving real problems with code and technique and policy aren't doing "AI" and to read what they are doing through AI discourse is fatally to misread them. It is only a prior investment in the assumptions and aspirations, figures and frames of AI discourse that would lead anybody to think one should relinquish the scrum of real-world problem solving and ascend instead to some abstract ideality the better to formulate a "roadmap" with which to retroactively imbue technoscientific vicissitudes with Manifest Destiny or to treat as "the real problem" the non-problem of crafting humanist Asimovian injunctions to constrain imaginary robots from imaginary conflicts they cause in speculative fictions.

You don't have to worry about things nobody is working on. You shouldn't pin your hopes or your fears on pseudo-philosophical fancies or pseudo-scientific follies. You don't have to ban things that don't and won't exist anyway, at any rate not in the forms techno-transcendentalists are invested in. There are real things to worry about, among them real problems of security, resilience, user-friendliness, interoperability, surveillance. "AI" talk won't help you there. That should tell you right away it works instead to help you lose your way.

Monday, January 12, 2015

Nourishing Nothingness: Futurists Are Getting Virtually Serious About Food Politics

I'm a lacto-ovo vegetarian now, but obviously in The Future I will be a digi-nano vegetarian...
Salon has alerted me to the existence of a new SillyCon Valley startup, Project Nourished, which hopes to use synesthetic cues from a virtual reality helmet, a vibrating spork, and whiffs from a perfume atomizer to fool America's obese, malnourished gluttons into believing they are feasting on two-pound steaks and baskets of onion rings and death-by-chocolate sundaes when in fact they are eating gelatinous cubes of zero-calorie vitamin-fortified goo.

According to the breathless website, this proposal will "solve" the following problems: "anorexia, bulimia, cancer, diabetes, heart disease, obesity, allergies and co2 omissions."

The real problem solved by the project is that it definitively answers a question I have long pondered: Is futurology so utterly idiotic and smarmy that it is actually impossible to distinguish its most earnest expressions from even the most ridiculous parodies of them?

I mean, to literally name your project "nourish" while actually avowing you seek to peddle a product that nourishes no one is pretty breathtaking. It's like the scam of peddling sugary cereals as part of "this complete nutritious breakfast," when all the nourishment derives from the juice and eggs and toast accompanying the bowl in the glossy photo but almost never in the event of an actual breakfast involving the cereal in question. Except now, even the cereal isn't really there, but a bowl of packing cardboard over which is superimposed an image of Froot Loops with a spritz of grapefruit air-freshener shot in your nostril every time you take a bite.

Why ponder structural factors like the stress of neoliberal precarity or the siting of toxic industries near residences or the lack of grocery stores selling whole foods within walking distance or the punitive mass mediated racist/sexist body norms that yield unhealthy practices, eating disorders, the proliferation of allergies and respiratory diseases and so on? Why concern yourself with public investment in medical research, healthcare access, vegetarian awareness, zoning for walkability, sustainable energy and transportation infrastructure and so on?

The Very Serious futurologists have a much better technofix for all that -- it's kinda sorta like the food pills futurologists have been promising since Gernsback, but now you would eat large empty candy colored polyhedra (you know, like the multisided dice nerds used to use to play D&D in the early 80s) while sticking your head in a virtual reality helmet (you know, like the virching rigs techbros have been masturbating over since the late 80s). Also, too, the stuff would be 3D-printed, because if you are a futurologist you've gotta get 3D-printing in there somewhere. As I said, Very Serious!

Returning to the website, we are told, "the project was inspired by the film Hook, where Peter Pan learns to use his imagination to see food on a table that seemed completely empty at first." Setting aside the aptness of drawing inspiration from a crappy movie rather than the actual book on which it is based -- only Luddites think books have a future, shuh! -- I propose that Project Nourished has a different filmic inspiration:

Saturday, January 10, 2015

Uploading As Reactionary Anti-Body Politics

A reader in the Moot describes some typical transhumanoid versions of "doing radical social criticism... saying something along the lines of, say, gender won't matter anymore when we upload our minds to the noosphere." For transhumanoid radical race critique fill in the blank (and try not to think too much about the history of eugenics, or how transhumanists seem to be a whole lot of white guys), for transhumanoid radical class critique here comes NanoSanta Clause.

Of course, not only is this not "doing radical social criticism" but it seems to me pretty explicitly straightforwardly reactionary, even when accompanied by citations of actual feminist, queer, or anti-racist criticism. Complacent consumers who want to enjoy a little liberal guilt to spice their entertainments will always rationalize the violence and inequity of the present by declaring the debased now better than before or on the road to better still and then grabbing a beer from the fridge, or clicking the buy button, or getting out on the dancefloor.

Plutocrats always naturalize their hierarchies as meritocracies. In much the same way, the whole robocultic uploading schtick is obviously a denigration of the materiality of the body, and it is always of course the white body male body straight body cis body healthy body capacious body that can best disavow its materiality because its materiality isn't in question or under threat, right?

It can be a mark more of privilege than perceptiveness to call into question that which won't ever be in question for you in any case. The bodily is always constituted as such through technique (from language to body language to posture to wearability), and the social legibility of every body is of course performatively substantiated. To grasp that point is to trouble or question the prediscursivity of the body or to recognize that prediscursivity is always a discursive effect. But this recognition is at best a point of departure and never the end-point for the interrogation of prevailing normative bodies and their abjection of bodily lifeways otherwise.

The denial or disavowal of differences that make a difference is much more likely effectively to endorse than efface them. Imaginary digi-utopian and medi-utopian circumventions of raced, gendered, abled bodily differences function in the present to deny or disavow rather than critically or imaginatively interrogate their terms. These omissions are all the more egregious when we actually turn our minds even cursorily to the perniciously raced and sexed histories of the medical and the digital as actually-existing practical, normative, professional sites.

Setting aside questions of the utter implausibility and incoherence of the techno-transcendental wish-fulfillment fantasies playing out in all this, why even pretend that recourse to digital dematerialization or to medical enhancement would circumvent rather than express the fraught, inequitable legibility and livability of wanted lifeway diversity? It will surely be the more urgent task to attend closely to the ways in which these very differences, race, sex, ability, shape the distribution of costs, risks, and benefits, and of access to and information about actually-available prosthetic possibilities.

I must say it has always cracked me up that, since all information is instantiated on a material carrier, even on their own terms the spiritualization of digi-info souls is hard to square with the reductionist scientism these folks tend to congratulate themselves over -- not that it would be anything to be proud of even if they managed to be more consistently dumb in that particular way.

What can you really expect from techno-transcendentalists apparently so desperate not to grow old or die that they will pretend a scan of them would be them when no picture ever has been and that computer networks could reliably host their "info-souls" forever when most people long outlive their crufty, unreliable computer networks in reality, and all just so they can day dream they will be immortal cyberangels in Holodeck Heaven? Science!

The Political Problem With Transhumanisms

Upgraded and Expanded from a response of mine to some comments in the Moot: Well, I think probably the key conceptual problem with transhumanisms is that they have an utterly uninterrogated idea of "technology" that pervades their discourses and sub(cult)ures. They attend very little to the politics of naturalization/ de-naturalization, of habituation/ de-familiarization that invest some techniques/artifacts (but not others, indeed probably not most others) with the force of the "technological." Quite a lot of the status quo gets smuggled in through these evasions and disavowals, de-politicizing what could be done or made otherwise, and hence rationalizing incumbency. Whatever the avowed politics of a transhumanist, their depoliticization of so much of the field of the cultural-qua-prosthetic lends itself to a host of conservative/reactionary naturalizations in my view.

This is all the more difficult for the transhumanists to engage in any thoughtful way, since they are so invested in the self-image of being on the bleeding edge, embracing novelty, disruption, anti-nature, and so on. I daresay this might have been excusable in the irrationally exuberant early days of the home computer and the explosive appearance of the Web (I saw through it at the time, though, so it can't have been that hard, frankly), but what could be more plain these days at least than the realization how much "novelty" is merely profitably repackaged out of the stale, how much "disruption" is just an apologia for all too familiar plutocratic politics dismantling public goods?

Transhumanists turn out to fall for the oldest Madison Avenue trick in the book, mistaking consumer fandoms for avant-gardes. And then they fall for the same sort of phony radicalism as so many New Atheists do: mirroring rather than rejecting religious fundamentalism by recasting politics as moralizing around questions of theology; distorting the science they claim to champion by misapplying its norms and forms to moral, political, aesthetic, cultural domains beyond its proper precinct. (The false radicalism of scientism -- not science, scientism -- prevails more generally in technocratic policy-making practices in corporate-military think-tanks and in elite design discourses, many of which fancy themselves or at any rate peddle themselves as progressive; transhumanist formulations lean on these tendencies in their bids for legitimacy, while these already prevailing practices and discourses are in turn vulnerable to reframing in transhumanist terms; there are dangerous opportunities for reactionary politics going in both directions here.)

Transhumanists indulge what seems to me an utterly fetishistic discourse of technology -- in both Marxist and Freudian senses -- out of which a host of infantile conceits arrive in tow: Failing to grasp the technical/performative articulation of every socially legible body, cis as much as trans, "optimal" as much as "disabled," they fetishistically identify with cyborg bodies appealing to wish-fulfillment fantasies they seem to have consumed more or less wholesale from advertizing and Hollywood blockbusters. Failing to grasp the collective/interdependent conditions out of which agency emerges, they grasp at prosthetic fetishes to prosthetically barnacle or genetically enhance the delusive sociopathic liberal "rugged/possessive individual" in a cyborg shell, pretty much like any tragic ammosexual or mid-life crisis case does with his big gun or his sad sportscar.

I have found technoprogressives to be untrustworthy progressives (I say this as the one who popularized that very label), making common cause with reactionaries at the drop of a hat, too willing to rationalize inequity and uncritical positions through appeals to eventual or naturalized progress -- progress is always progress toward an end, and its politics are defined by the politics of that end, and the substance of progress is not the logical or teleological unfolding of entailments but an interminable social struggle among a changing diversity of stakeholders -- and whatever they call themselves, techno-fixated techno-determinisms are no more progressive than any other variation of Manifest Destiny offered up to congratulate and reassure incumbent elites.

Time and time again in my decades-long sparring with futurologists both extreme and mainstream I have confronted in my interlocutors curious attitudes of consumer complacency and uncritical techno-fixation, as well as more disturbing confessions of fear and loathing: fear of death and hostility to the mortal, aging, vulnerable body, fear of error or humiliation and hostility to the contingency, errancy, boundedness of the biological brains and material histories in which intelligence is incarnated. To say this -- which is to say the obvious, I fear -- usually provokes howls of denial and disavowal, charges of ad hominem and hate speech, and so I will conclude on a different note: Again, I don't think any of these transhumanist susceptibilities to reaction are accidental or incidental, but arise out of the under-interrogated naturalized technological assumptions and techno-transcendental aspirations on which all superlative futurologies/ists so far have definitively depended.

Thursday, January 08, 2015

Robot Gods Are Nowhere So Of Course They Must Be Everywhere

Advocates of Good Old Fashioned Artificial Intelligence (GOFAI) have been predicting that the arrival of intelligent computers is right around the corner more or less every year since the formation of computer science and information science as disciplines, from World War II to Deep Blue to Singularity U. These predictions have always been wrong, though their ritual reiteration remains as strong as ever.

The serial failure of intelligent computers to make their long-awaited appearance on the scene has led many computer scientists and coders to focus their efforts instead on practical questions of computer security, reliability, user-friendliness, and so on. But there remain many GOFAI dead-enders who keep the faith and still imagine the real significance that attaches to the solution of problems with/in computation is that each advance is a stepping stone along the royal road to AI, a kind of burning bush offering up premonitory retroactive encouragement from The Future AI to its present-day acolytes.

In the clarifying extremity of superlative futurology we find techno-transcendentalists who are not only stubborn adherents of GOFAI in the face of its relentless failure, but who double down on their faith and amplify the customary insistence on the inevitable imminence of AI (all appearances to the contrary notwithstanding) and now declare no less inevitable the arrival of SUPER-intelligent artificial intelligence, insisting on the imminence of a history-shattering, possibly apocalyptic, probably paradisical, hopefully parental Robot God.

Rather than pay attention to (let alone learn the lessons of) the pesky failure and probable bankruptcy of the driving assumptions and aspirations of the GOFAI research program-cum-ideology, these techno-transcendentalists want us to treat with utmost seriousness the "existential threat" of the amplification of AI into a superintelligent AI in the wrong hands or with the wrong attitudes. I must say that I for one do not agree with Very Serious Robot Cultists at Oxford University like Nick Bostrom or at Google like Ray Kurzweil or celebrity tech CEOs like Elon Musk that the dumb belief in GOFAI becomes a smart belief rather than an even dumber one when it is amplified into belief in a GOD-AI, or that the useless interest in GOFAI becomes urgently useful rather than even more useless when it is amplified into worry about the existential threat of GOD-AI because it would be so terrible if it did come true. It would be terrible if Godzilla or Voldemort were real, but that is no reason to treat them as real or to treat as Very Serious those who want to talk about what existential threats they would pose if they were real when they are not (especially when there are real things to worry about).

The latest variation of the GOFAI via GOD-AI gambit draws on another theme beloved by superlative futurologists, the so-called Fermi Paradox -- the fact that there are so very many stars in the sky and yet no signs that we can see so far of intelligent life out there. Years ago, I proposed
The answer to the Fermi Paradox may simply be that we aren't invited to the party because so many humans are boring assholes. As evidence, consider that so many humans appear to be so flabbergastingly immodest and immature as to think it a "paradoxical" result to discover the Universe is not an infinitely faceted mirror reflecting back at us on its every face our own incarnations and exhibitions of intelligence.
I for one don't find it particularly paradoxical to suppose life is comparatively rare in the universe, especially intelligent life, and more especially still the kind of intelligent life that would leave traces of a kind human beings here and now would discern as such, given how little we understand about the phenomena of our own lives and intelligence and given the astronomical distances involved. As the Futurological Brickbat quoted above implies, I actually think the use of the word "paradox" here probably indicates human idiocy and egotism more than anything else.

A recent article in Vice's Motherboard collects a handful of proponents of a "new view" on this question that proposes instead that the "dominant intelligence in the cosmos is probably artificial." The use of the word "probable" there may make you think that there is some kind of empirical inquiry afoot here, especially since all sorts of sciency paraphernalia surrounds the assertion, and its proponents are denominated "astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick." NASA and the Library of Congress are institutions that have some real heft, but let's just say that typing the word "transhumanist" into a search for any of those names may leave you wondering a bit about the robocultic company they keep.

But what I want to insist you notice is that the use of the term "probability" in these arguments is a logical and not an empirical one at all: What it depends on is the acceptance in advance of the truth of the premise of GOFAI via GOD-AI, which is in fact far from obvious and which no one would sensibly take for granted. Indeed, I propose that like many arguments offered up by Robot Cultists in more mainstream pop-tech journalism, the real point of the piece is to propagandize for the Robot Cult by indulging in what appear to be harmless blue-sky speculations of science fictional conceits but which entertain as true and so functionally bolster what are actually irrational and usually pernicious articles of futurological faith.

The philosopher Susan Schneider (search "Susan Schneider transhumanist," go ahead, try it) is paraphrased in the article saying "when it comes to alien intelligence... by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology." This formulation buries the lede in my view, and quite deliberately so. That is to say, what is really interesting here -- one might actually say it is flabbergasting -- is the revelation of a string of techno-transcendental assumptions: [one] that technodevelopmental vicissitudes are not contingently sociopolitical but logically or teleologically determined; [two] that biology could be fundamentally transformed while remaining legible to the transformed (that's the work done by the reassuring phrase "their own"); [three] that jettisoning biological bodies for robot bodies and "uploading" our biological brains into "cyberspace" is not only possible but desirable (make no mistake about it, that is what she is talking about when she talks about "upgrading biology" -- by the way, the reason I scare-quote words like "upload" and "cyberspace" is because those are metaphors, not engineering specs, and unpacking those metaphors exposes enough underlying confusion and fact-fudging that you may want to think twice about trusting your "biological upgrade" to folks who talk this way, even if they chirp colloquially at you that your immortal cyberangel soul-upload into Holodeck Heaven is just a "hop-skip away" from easy peasy radio technology); and [four] that terms like "upgrade," freighted as they are with a host of specific connotations derived from the deceptive hyperbolic parasitic culture of venture-capitalism and tech-talk, are the best way to characterize fraught fundamental changes in human lives to be brought about primarily by corporate-military incumbent-elites seeking parochial profits. Maybe you want to read that last bit again, eh?

Seth Shostak quotes from the same robocultic catechism a paragraph later: “As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI... At that point, soft, squishy brains become an outdated model.” Notice the same technological determinism. Notice that the invention of AI is then declared to be probable within a century -- and no actual reasons are offered up in support of this declaration, which is made in defiance of all evidence to the contrary. And then notice suddenly we find ourselves once again in the moral universe of techno-transcendence, where Schneider assumed robot bodies and cyberspatial uploads would be "upgrades" (hop-skipping over the irksome question whether such notions are even coherent or possible on her terms, whether a picture of you could be you, whether fetishized prosthetization would be enhancing to all possible ends or disabling to some we might come to want or immortalizing when no prostheses are eternal, etc etc etc etc), while Shostak leaps to the ugly obverse face of the robocultic coin: "soft, squishy brains" are "outdated model[s]." Do you think of your incarnated self as a "model" on the showroom floor, let alone an outdated one? I do not. And refusing such characterizations is indispensable to resisting being treated as one. Maybe you want to read that last bit again, eh?

“I believe the brain is inherently computational -- we already have computational theories that describe aspects of consciousness, including working memory and attention,” Schneider is quoted as saying in the article. “Given a computational brain, I don’t see any good argument that silicon, instead of carbon, can’t be an excellent medium for experience.” Now, I am quite happy to concede that phenomena enough like intelligence and consciousness for us to call them that might in principle take different forms from the ones exhibited by conscious and intelligent people (human animals and, I would argue, also some nonhuman animals) and be materialized differently than in the biological brains and bodies and historical struggles that presently incarnate them.

But conceding that logical possibility does not support in the least the assertion that non-biological intelligences are inevitable, that present human theories of intelligence tell us enough to guide us in assessing these possibilities, that human beings are on the road to coding such artificial intelligence, or that current work in computer theory or coding practice shows any sign at all of delivering anything remotely like artificial intelligence any time soon. Certainly there is no good reason to pretend the arrival of artificial intelligence (let alone godlike superintelligence) is so imminent that we should prioritize worrying about it over deliberation about actually real, actually urgent, actually ongoing problems like climate change, wealth concentration, exploited majorities, neglected diseases, abuse of women, arms proliferation, human trafficking, military and police violence.

What if the prior investment in false and facile "computational" metaphors of intelligence and consciousness is evidence of the poverty of the models employed by adherents of GOFAI and is among the problems yielding its serial failure? What if such "computational" frames are symptoms of a sociopathic hostility to actual animal intelligence or simply reveal ideological commitments to the predatory ideology of Silicon Valley's unsustainable skim-and-scam venture capitalism?

Although the proposal of "computational" consciousness is peddled here as a form of modesty, as a true taking-on of the alien otherness of alien intelligence in principle, what if these models of alien consciousness reflect most of all the alienation of their adherents -- the sociopathy of their view of their own superior computational intellects and their self-loathing of the frailties in that intellect's "atavistic" susceptibility to contingency, error, and failure -- rather than any embrace of the radical possibilities of difference?

It is no great surprise that the same desperate dead-enders who thought they could make the GOFAI lemon into GOD-AI lemonade would then go on to find evidence of the ubiquity of that GOD-AI in the complete lack of evidence of GOD-AI anywhere at all. What matters about the proposal of this "new view" on the Fermi Paradox is that it requires us to entertain as possible, so long as we are indulging the speculation at hand, the very notion of GOFAI that we otherwise have absolutely no reason to treat seriously at all.

Exposing the rhetorical shenanigans of faith-based futurologists is a service I am only too happy to render, of course, but I do want to point out that even if there are no good reasons to treat the superlative preoccupations of Robot Cultists seriously on their own terms (no, we don't have to worry about a mean Robot God eating the earth; no, we don't have to worry about clone armies or designer baby armies or human-animal hybrid armies taking over the earth; no, we don't have any reason to expect geo-engineers from Exxon-Mobil to profitably solve climate change for us or gengineers to profitably solve death and disease for us or nanogineers to profitably solve poverty for us) there may be very good reasons to take seriously the fact that futurological frames and figures are taken seriously indeed.

Quite apart from the fact that time spent on futurologists is time wasted in distractions from real problems, the greater danger may be that futurological formulations derange the terms of our deliberation on some of the real problems. Although the genetic and prosthetic interventions techno-triumphalists incessantly crow about have not enhanced or extended human lifespans in anything remotely like radical ways, the view that this enhancement and extension MUST be happening if it is being crowed about so incessantly has real world consequences, making consumers credulous about late-nite snake-oil salesmen in labcoats, making hospital administrators waste inordinate sums on costly gizmos and ghastly violations in end-of-life care, rationalizing extensions of the retirement age for working majorities broken down by exploitation and neglect. Although the geo-engineering interventions techno-triumphalists incessantly crow about cannot be coherently characterized and seem to depend on the very funding and regulatory apparatuses the necessary failure of which is usually their justification, the view that such geo-engineering MUST be our "plan B" or our "last chance" provides extractive-industrial eco-criminals fresh new justifications to deny any efforts at real world education, organization, and legislation to address environmental catastrophe. The very same techno-deterministic accounts of history that techno-triumphalists depend on for their faith-based initiatives provided the rationales that justified, in nations emerging from colonial occupation, indebtedness to their former occupiers -- in the name of vast costly techno-utopian boondoggles like superdams and superhighways and skyscraper skylines -- and then the imposition of austerity regimes that returned those nations to conditions of servitude.

Although I regard as nonsensical the prophetic utterances futurologists make about the arrival any time soon, or necessarily ever, of artificial intelligence in the world, I worry that there are many real world consequences of the ever more prevalent deployment of the ideology of artificial life and artificial intelligence by high-profile "technologists" in the popular press. I worry that the attribution of intelligence to smart cards and smart cars and smart phones, none of which exhibit anything like intelligence, confuses our sense of what intelligence actually is and risks denigrating the intelligence of the people with whom we share the world as peers. To fail to recognize the intelligence of humans risks the failure to recognize their humanity and the responsibilities demanded of us inhering in that humanity. Further, I worry that the faithful investment in the ideology of artificial intelligence rationalizes terrible decisions, justifies the outsourcing of human judgments to crappy software that corrects our spelling of words we know but it does not, recommends purchases and selects options for us in defiance of the complexities and dynamism of our taste, decides whether banks should find us credit-worthy whatever our human potential or states should find us target-worthy whatever our human rights.

Futurology rationalizes our practical treatment as robots through an indulgence in what appears to be abstract speculation about robots. The real question to ask of the Robot Cultists, and of the prevailing tech-culture that popularizes their fancies, is not how plausible their prophecies are but just what pathologies these prophecies symptomize and just what constituencies they benefit.

Monday, January 05, 2015

State of the Blog

In May 2014 Amor Mundi had its tenth birthday. I can't say that I had such a thing in mind when I started this blog, but Amor Mundi has turned out to be my most sustained and consistent intellectual effort at this point, apart from teaching. It's strange to contemplate the mountain of archive I've accumulated and scaled in all this time, to consider what it amounts to, what it is good for, what it took me from, where it is going.

2013 was the first time in the history of the blog that I had posted less in a year here than the year before, but in 2014 I posted even less than in 2013. I suspect that the greater energy I have devoted to microblogging via twitter accounts for some of this. I have often used twitter as a spring board, sounding board, being bored, and also a promotional space for Amor Mundi and so the relations between my twitter and blogger accounts have been more collaborative than I would have expected.

A few days ago I posted a list of the most widely read pieces here from last year. Although half of the energy I expend here goes to making sense of and venting frustration over politics, as usual almost nobody was the least bit interested in this material (after all there are a million people saying the same sorts of thing generally) and it is pretty much only when I ridicule Robot Cultists or techbro venture capitalist skim-and-scam artists that readers perk up. Not only were most of my most widely read posts skewering techno-utopian scams, but frankly such pieces were almost the only ones that attracted hits in the hundreds. Despite that, I was pleased that stuff I wrote in the midst of what was a personally fraught battle of adjuncts like me and my colleagues for union representation received a lot of attention, too. That was an important part of the year as I lived it, and it was nice to see it registered retrospectively in the life of the blog as well.

The single most widely read piece from last year was the forum page I created in connection with the Existenz volume on Posthumanism to which I contributed an essay myself. Since that remains the piece of anti-futurological critique I am proudest of writing, I was pleased other people have paid some attention to it as well. I was also very pleased that people read my extended elaboration of themes from the short essaylet I was asked to write for the New York Times "When Geeks Rule" forum -- mostly because I thought the piece that appeared in the Times itself ended up being pretty vapid and disappointing, but with more room to roam in I could get at what really mattered to me on the topic.

This year, as with last year, some of my posts were also published at the World Future Society, where somewhere between twice and twenty times as many people gave them a looksee. I should probably post there more, but I always feel a bit uncomfortable doing so, despite the encouragement I have received, because my whole unfuturist schtick feels a little like an asshole move directed at explicitly futurist-identified folks.

One general development I will note by way of conclusion is that while my readership has remained relatively stable over the years -- as an academic I am quite comfortable with the idea that reaching a modest number of minds can make a difference that makes the effort more than worthwhile -- I do get the feeling that my sympathetic readership has grown quite a lot. In the early days of the blog I think the majority of my readership were Robot Cultists who liked to read me to get mad and post endless variations of "I Know You Are But What Am I?" in the Moot -- they always reminded me of those weird liberals who get off on getting mad at Fox News -- whereas nowadays more people who read me seem to be kindred spirits, either because so many people have been burned by techno-utopian scams and plutocratic fever-dreams there are more folks around who just enjoy the release of a good rant on such topics or because they find it clarifying (as I do myself) to connect the ugly prevailing unsustainable and plutocratic tendencies in corporate-military consumer-complacent techno-fetishistic neoliberalism/neoconservatism and the weird pathological extremities of these tendencies playing out in the techno-transcendental sects of the Robot Cult's transhuman eugenicists, singularitarian Robot God warriors, techno-immortalist and nanosantalogical wish-fulfillment fantasists and inane greenwashing geo-engineers.

Friday, January 02, 2015

To Declare Oneself Beyond Left And Right Is Almost Always To Disavow One Is On The Right

Another reddit comment:
I think the important distinction for BI is between authoritarian and anti-authoritarian, not between left and right. The two dichotomies are orthogonal. Anti-authority/pro-liberty types typically require only a brief explanation of BI before enthusiastically signing up; this is true of principled libertarians as much as it is of the counter-culture. As for authoritarians, I don't think the UBI movement has yet had to stare into the howling abyss of left-wing authoritarian hatred of BI and all it stands for. Not everyone who is allergic to individual independence and self-rule is a right winger. Many of them are working class. How can we ask a member of the working class to support SLACK? So yes, I agree with you; but only because I think that right vs. left is not the dimension that counts for BI.
Where to begin?!
The "authoritarian axis" introduced here obscures much more than it reveals, and it comes from a very interested right-wing rhetorical position. Market libertarian ideological proselytizing via the "World's Shortest Political Quiz" and related "Political Compass" (a compass that makes you get lost, how droll!) but also via mainstream pundit commonplaces about "independent" majorities who are presumably "culturally or socially liberal but fiscally conservative" provide the key context here.

These frames seek to obscure the relevance of left-right analysis to certain right-wing politics in order to support the status quo and the incumbent-elite interests aligned with it. Of course, market libertarians like to pretend they are "beyond left and right" (or try to market themselves with distracting neologisms -- independent! upwinger! dynamist!) because they can no more prevail with majorities than conventional Republicans can if they are too explicit about their actual alignment with the interests of plutocratic minorities.

Market libertarianism is a right-wing ideology -- it claims to be anti-authoritarian while endorsing corporate-militarism, and to advocate non-violence while endorsing contractual outcomes as non-violent by fiat whatever the terms of misinformation and duress shaping them. Since "fiscal conservatism" always cashes out in de-regulatory and privatizing schemes dismantling the legal/welfare affordances of social equity and cultural pluralism this means that the "cultural/social liberalism" always proclaimed alongside the "fiscal conservatism" has no real substance.

It is no accident that the anti-authoritarianism of market libertarians always plays out as hostility to almost all government except for armies and police to keep the wage slaves from revolting against their plutocratic masters. It is also no accident that market libertarian arguments only impact actual politics when they provide selective justifications for GOP positions.

People manifestly mean different things by basic income advocacy depending on whether they are coming from left or right, but it isn't exactly surprising that someone who falsely imagines right-wing libertarianism to be beyond left or right would imagine basic income advocacy figured through a libertarian lens to be the same.

The commenter declares "the UBI movement has yet had to stare into the howling abyss of left-wing authoritarian hatred of BI and all it stands for" -- but the reason for this non-event is that the howling left-wing authoritarian abyss conjured here is a classic paranoid fantasy of the reactionary right. In this it is not unlike that slip-up about "working class" folks "allergic" to "independence" -- ooh, just smell the makers-v-takers race/class politics of "liberty"!

I'm sure Stalinist industrial-militarism and Maoist feudalism will leap to libertopian minds at my dismissal of these reactionary fever-dreams, but it really isn't difficult to grasp that the totalitarian impulse is a right-wing one, once you shed the re-mapping demanded by the World's Shortest Political Quiz. If you can grasp that Nazism was a movement of the right despite the word "socialist" in the logo, it shouldn't be that complicated after all to trouble too slick an identification of the left with the gulag either. Neither is it so much of a leap to grasp that the left impulse is essentially democratizing work toward equity-in-diversity once you set aside market fundamentalist pieties and the GOP's corpse-cold Cold War playbook.

Comparably fantastical is the commenter's confident assertion that "pro-liberty types typically require only a brief explanation of BI before enthusiastically signing up." Yeah, except when they don't, which is pretty much always. Sure, a few market fundamentalists have tossed out thought-experiments about basic income when they were looking for a chance to score rhetorical points (what they mean by "signing up") about how awesome it would be to demolish the New Deal once and for all, but they never want to actually do anything (what it should mean to "sign up") to end wage slavery, eliminate the precarity draft, or secure informed non-duressed contracts. When have they "signed up" to do anything so jack-booted socialist as all that otherwise? When have they made their actual cases on such terms anyway? Attributing such motives to the occasional right-wing pseudo-scholarly foray into basic income thought experiments seems pretty far-fetched.

No doubt I am being biased, tribalist, immoderate, unreasonable to ask anybody to face these awkward facts.