Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, January 28, 2015

The Yearning Annex: Google Commits Millions for Robot Cult Indoctrination in Plutocratic Venture-Capitalist Dystopia

Also posted at the World Future Society.
Their prose was all purple, there were VCs running everywhere, tryin' to profit from destruction, you know we didn't even care.
via Singularity Hub (h/t David Golumbia):
Google, a long-time supporter of Singularity University (SU), has agreed to a two-year, $3 million contribution to SU's flagship Graduate Studies Program (GSP). Google will become the program's title sponsor and ensure all successful direct applicants get the chance to attend free of charge. Held every summer, the GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies. Participants spend a fast-paced ten weeks learning all they need to know for the final exam—a chance to develop and then pitch a world-changing business plan to a packed house.
"Exponential technologies" is a short hand for the false and facile narrative superlative futurologists spun from Moore's Law -- the observation in 1965 (the year I was born) that the number of transistors on an integrated circuit had been roughly doubling every two years, and the paraphrase of that observation into a law-like generalization that chip performance more or less doubles every two years -- into the faith-based proclamation that this processing power will inevitably eventuate in artificial intelligence, and soon thereafter a history shattering super-intelligence that will control self-replicating programmable nanoscale robots that will provide a magical superabundance on the cheap and deliver near immortality through prosthetic medical enhancement and the digital uploading of "informational soul-selves" into imperishable online paradises.

The arrival of superintelligent artificial intelligence is denominated "the Singularity" by these futurologists, a term drawn from the science fiction of Vernor Vinge, as are the general contours of this techno-transcendental narrative, taken up most famously by one-time inventor and now futurological "Thought Leader" Ray Kurzweil and a coterie of so-called tech multimillionaires like Peter Thiel, Elon Musk, Jaan Tallinn all looking to rationalize their good fortune in the irrational exuberance of the tech boom and secure their self-declared destinies as protagonists of post-human history by proselytizing and investing in transhumanist/singularitarian eugenic/digitopian ideology across the neoliberal institutional landscape at MIT, Stanford, Oxford, Google, and so on.

That most of these figures are skim-and-scam artists with little sense and too much money on their hands goes without saying, as does the obvious legibility of their "technoscientific" triumphalism as a conventional marketing strategy for commercial crap (get rich quick! anti-aging! sexy-sexy!) but amplified into a scarcely stealthed fulminating faith re-enacting the theological terms of an omni-predicated godhead delivering True Believers eternal life in absolute bliss with perfect knowledge. Not to put too fine a point on it, the serially-failed program of AI doesn't become more plausible by slapping "super" in front of the AI, especially when the same sociopathic body-loathing digi-spiritualizing assumptions remain in force among its adherents; exponential processing power checked by comparably ballooning cruft is on a road to nowhere like transcendence; and since a picture of you isn't you, and cyberspace is buggy and noisy and brittle, hoping to live there forever as an information spirit is pretty damned stupid even if you call yourself a soopergenius.

Since the super-intelligent and nanotechnological magicks on which techno-transcendentalists pin their real hopes are not remotely in evidence, these futurologists tend to hype the media and computational devices of the day, celebrating algorithmic mediation and Big Data framing and kludgy gaming virtualities like Oculus Rift and surveillance media like the failed Google Glass and venture capitalist "disruption" like airbnb and uber. That this is the world of hyping toxic wage-slave manufactured landfill-destined consumer crap and reactionary plutocratic wealth concentration via the looting and deregulation of public and common goods coupled with ever-amplifying targeted marketing harassment and corporate-military surveillance should give the reader some pause when contemplating the significance of declarations like "GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies."

The press release suavely reassures us that "Google is, of course, no stranger to moon shot thinking and the value of world-shaking projects." I think it is enormously important to pause and think a bit about what that "of course" is drawing on and standing for. It should be noted what "moon shot thinking" amounts to in a world that hasn't witnessed a moonshot in generations. There are questions to ask, after all, about Google's "world-shaking projects" advertorially curating all available knowledge in the service of parochial profit-taking, all the while handwaving about vaporware like immortality meds and driverless car-culture and geo-engineering greenwash. There are questions to ask about the techno-utopian future brought about by a "grad school" at a "university" for which "the final exam" is "a chance to develop and then pitch a world-changing business plan to a packed house." I will leave delineating the dreary dystopian details to the reader.

Thursday, January 22, 2015

Syllabus for my Digital Democracy, Digital Anti-Democracy Course (Starting Tomorrow)

Digital Democracy, Digital Anti-Democracy (CS-301G-01)

Spring 2015 01/23/2015-05/08/2015 Lecture Friday 09:00AM - 11:45AM, Main Campus Building, Room MCR

Instructor: Dale Carrico; Contact: dcarrico@sfai.edu, ndaleca@gmail.com
Blog: http://digitaldemocracydigitalantdemocracy.blogspot.com/

Grade Roughly Based On: Att/Part 15%, Reading Notebook 25%, Reading 10%, In-Class Report 10%, Final Keywords Map 40%

Course Description:

This course will try to make sense of the impacts of technological change on public life. We will focus our attention on the ongoing transformation of the public sphere from mass-mediated into peer-to-peer networked. Cyberspace isn't a spirit realm. It belches coal smoke. It is accessed on landfill-destined toxic devices made by wretched wage slaves. It has abetted financial fraud and theft around the world. All too often, its purported "openness" and "freedom" have turned out to be personalized marketing harassment, panoptic surveillance, zero comments, and heat signatures for drone targeting software. We will study the history of modern media formations and transformations, considering the role of media critique from the perspective of several different social struggles in the last era of broadcast media, before fixing our attention on the claims being made by media theorists, digital humanities scholars, and activists in our own technoscientific moment.

Provisional Schedule of Meetings

Week One, January 23: What Are We Talking About When We Talk About "Technology" and "Democracy"?

Week Two, January 30: Digital,

Laurie Anderson: The Language of the Future
Martin Heidegger, The Question Concerning Technology 
Evgeny Morozov, The Perils of Perfectionism
Paul D. Miller (DJ Spooky), Material Memories 
POST READING ONLINE BEFORE CLASS MEETING

Week Three, February 6: The Architecture of Cyberspatial Politics

Lawrence Lessig, The Future of Ideas, Chapter Three: Commons on the Wires
Yochai Benkler, Wealth of Networks, Chapter 12: Conclusion
Michel Bauwens, The Political Economy of Peer Production
Saskia Sassen, Interactions of the Technical and the Social: Digital Formations of the Powerful and the Powerless 
My own, p2p Is Either Pay-to-Peer or Peers-to-Precarity 
Jessica Goodman, The Digital Divide Is Still Leaving Americans Behind 
American Civil Liberties Union, What Is Net Neutrality
Dan Bobkoff, Is Net Neutrality the Real Issue?

Week Four, February 13: Published Public

Dan Gillmor, We the Media, Chapter One: From Tom Paine to Blogs and Beyond
Digby (Heather Parton), The Netroots Revolution
Clay Shirky, Blogs and the Mass Amateurization of Publishing
Aaron Bady, Julian Assange and the Conspiracy to "Destroy the Invisible Government"
Geert Lovink, Blogging: The Nihilist Impulse

Week Five, February 20: Immaterialism

John Perry Barlow, A Declaration of the Independence of Cyberspace
Katherine Hayles, Liberal Subjectivity Imperiled: Norbert Wiener and Cybernetic Anxiety
Paulina Borsook, Cyberselfish
David Golumbia, Cyberlibertarians' Digital Deletion of the Left
Richard Barbrook and Andy Cameron, The Californian Ideology
Eric Hughes, A Cypherpunk's Manifesto
Tim May, The Crypto Anarchist Manifesto

Week Six, February 27: The Architecture of Cyberspatial Politics: Loose Data

Lawrence Lessig, Prefaces to the first and second editions of Code
Evgeny Morozov, Connecting the Dots, Missing the Story
Lawrence Joseph Interviews Frank Pasquale about The Black Box Society
My Own, The Inevitable Cruelty of Algorithmic Mediation
Frank Pasquale, Social Science in an Era of Corporate Big Data
danah boyd and Kate Crawford, Critical Questions for Big Data
Bruce Sterling, Maneki Neko

Week Seven, March 6: Techno Priesthood

Evgeny Morozov, The Meme Hustler
Jedediah Purdy, God of the Digirati
Jaron Lanier, First Church of Robotics
Jalees Rehman, Is Internet-Centrism A Religion?
Mike Bulajewski, The Cult of Sharing
George Scialabba, Review of David Noble's The Religion of Technology

Week Eight, March 13: Total Digital

Jaron Lanier, One Half of a Manifesto
Vernor Vinge, Technological Singularity
Nathan Pensky, Ray Kurzweil Is Wrong: The Singularity Is Not Near
Aaron Labaree, Our Science Fiction Future: Meet the Scientists Trying to Predict the End of the World
My Own, Very Serious Robocalyptics
Marc Stiegler, The Gentle Seduction

Week Nine, March 16-20: Spring Break

Week Ten, March 27: Meet Your Robot God
Screening the film, "Colossus: The Forbin Project"

Week Eleven, April 3: Publicizing Private Goods

Cory Doctorow, You Can't Own Knowledge
James Boyle, The Second Enclosure Movement and the Construction of the Public Domain
David Bollier, Reclaiming the Commons
Astra Taylor, Six Questions on the People's Platform

Week Twelve, April 10: Privatizing Public Goods

Nicholas Carr, Sharecropping the Long Tail
Nicholas Carr, The Economics of Digital Sharecropping
Clay Shirky, Why Small Payments Won't Save Publishing
Scott Timberg, It's Not Just David Byrne and Radiohead: Spotify, Pandora, and How Streaming Music Kills Jazz and Classical
Scott Timberg Interviews David Lowery, Here's How Pandora Is Destroying Musicians
Hamilton Nolan, Microlending Isn't All It's Cracked Up To Be

Week Thirteen, April 17: Securing Insecurity

Charles Mann, Homeland Insecurity
David Brin, Three Cheers for the Surveillance Society!
Lawrence Lessig, Insanely Destructive Devices
Glenn Greenwald, Ewan MacAskill, and Laura Poitras, Edward Snowden: The Whistleblower Behind the NSA Surveillance Revelations
Daniel Ellsberg, Edward Snowden: Saving Us from the United Stasi of America

Week Fourteen, April 24: "Hashtag Activism" I

Evgeny Morozov, Texting Toward Utopia
Hillary Crosley Coker, 2013 Was the Year of Black Twitter
Michael Arceneaux, Black Twitter's 2013 All Stars
Annalee Newitz, What Happens When Scientists Study Black Twitter
Alicia Garza, A Herstory of the #BlackLivesMatter Movement
Shaquille Brewster, After Ferguson: Is "Hashtag Activism" Spurring Policy Changes?
Jamilah King, When It Comes to Sports Protests, Are T-Shirts Enough?

Week Fifteen, May 1: "Hashtag Activism" II

Paulina Borsook, The Memoirs of a Token: An Aging Berkeley Feminist Examines Wired
Zeynep Tufekci, No, Nate, Brogrammers May Not Be Macho, But That's Not All There Is To It; How French High Theory and Dr. Seuss Can Help Explain Silicon Valley's Gender Blindspots
Sasha Weiss, The Power of #YesAllWomen
Lisa Nakamura, Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital 
Yoonj Kim, #NotYourAsianSidekick Is A Civil Rights Movement for Asian American Women
Jay Hathaway, What Is Gamergate

Week Sixteen, May 8: Digital Humanities, Participatory Aesthetics, and Design Culture

Claire Bishop, The Social Turn and Its Discontents
Adam Kirsch, Technology Is Taking Over English Departments: The False Promise of the Digital Humanities
David Golumbia, Digital Humanities: Two Definitions
Tara McPherson, Why Are Digital Humanities So White?
Roopika Risam, The Race for Digitality
Wendy Hui Kyong Chun, The Dark Side of the Digital Humanities
Bruce Sterling, The Spime
Hal Foster, Design and Crime
FINAL PROJECT DUE IN CLASS; HAND IN NOTEBOOKS WITH FINAL PROJECT

Thursday, January 15, 2015

AI Isn't A Thing

People who flutter their hands over the "existential risk" of the theoretically impoverished, serially failed project of good old-fashioned artificial intelligence (GOFAI) or its techno-transcendental amplification into a post-biological super-intelligent Robot God (GOD-AI) think they are worried about a thing. They think they are experts who know stuff about a thing that they are calling "AI." They can get in quite a lather arguing over the technical properties and sociopolitical entailments of this thing with just about anybody who will let them.

But their "AI" does not exist. Their "AI" does not have properties. Their "AI" is not on the way.

Their "AI" is a bunch of fancies bounded by stipulations. Their "AI" stands in the loosest relation to the substance of real code and real networks and their real problems and real people doing real work on them here and now.

"AI" is a discourse, and it serves a primarily ideological function: It creates a frame -- populated with typical conceits, mobilizing customary narratives -- through which real problems and complex phenomena are being miscomprehended by technoscientific illiterates, acquiescent consumers, and wish-fulfillment fantasists. Ultimately, the assumptions and aspirations investing this frame have to do with the promotion and advertizing of commodities, software packages, media devices and the resumes of tech-talkers. At their extremity, these assumptions and aspirations mobilize and substantiate the True Belief of techno-transcendentalists given over to symptomatic fears of mortality, vulnerability, contingency, error, lack of control, but it is worth noting that the appeal to these irrational fears and passions merely amplify (in a kind of living reductio ad absurdum) the drives consumer advertizing and venture-capitalist self-promotion always cater to anyway.

Actually-existing biologically-incarnated consciousness, intelligence, and personhood look little like the feedback mechanisms of early cyberneticists and less still like the computational conceits of later neurocomputationalists. Bruce Sterling said nothing but the obvious when he pointed out that the brain is more like a gland than a computer. Living people don't look any more like the Bayesian calculators of alienated robocultic sociopaths than they look like the monomaniacal maximizers of political economy's no less sociopathic homo economicus.

So, of course, "The Forbin Project" and "War Games" and "The Terminator" and "The Lawnmower Man" and "The Matrix" are movies -- everybody knows that! Of course, our computers are not going to reach critical mass and "wake up" one day, any more than our complex and dynamic biosphere will do. Moore's Law is not spontaneously going to spit out a Robot God any more than an accumulating pile of abacuses would -- not least due to Jeron Lanier's corollary to Moore's Law: "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources."

Again, everybody knows all that. But can everybody be expected to talk or act like people who know these things? Sometimes, the exposure of the motives and hyperbole and deception of AI ideology will lead its advocates and enthusiasts to concessions but not to the relinquishment of the ideology itself. Even if we do not need to worry about making Hal our pal, even if AI will not assume the guise of a history-shattering super-parental Robot God... what if, they wonder, somebody codes some mindless mechanism that is satanic by accident or in the aggregate, like a vast robo-runaway bulldozer scraping the earth of its biological infestation, a software glitch that releases an ubergoo waveform transforming the solar system into computronium for crunching out pi for all eternity?

The arrant silliness of such concerns is exposed the moment one grasps that security breaches, brittle code, unfriendly interfaces, mindless algorithms resulting in catastrophic (and probably criminal) public decisions are all happening already, right now. There are people working on these problems, right now. The pet figures and formulations, the personifications, moralisms, reductions and triumphalisms of AI discourse introduce nothing illuminating or new into these efforts. If anything, AI discourse encourages its adherents to assess these developments not in terms of their actual costs, risks, and benefits to the diversity of their actual stakeholders, but to misread them as stepping stones along the road to The Future AI, signs and portents in which is glimpsed the imminence of The Future AI, thus distracting from the present reality of problems to the imagined future into which symptomatic fears and fancies are projected.

So, too, sometimes the exposure of the irrational True Belief of adherents of AI-ideology and the crass self-promotion and parochial profit-taking of its prevalent application in consumer advertizing and pop-tech journalism will lead its advocates and enthusiasts to different concessions. Sure, it turns out that Peter Thiel and Elon Musk are hucksters who pulled insanely lucrative skim-and-scam operations over on technoscientific illiterates and now want to consolidate and justify their positions by promoting themselves as epochal protagonists of history. And, sure, Ray Kurzweil and Eliezer Yudkowsky are guru-wannabes spouting a lot of pseudo-scientific pseudo-philosophical pseudo-theological nonsense while looking for the next flock to fleece. But what if there are real scientists and entrepreneurs and experts somewhere doing real coding and risking real dangers in their corporate-military labs, quietly lost in their equations, unaware that they are coding the lightning that will convulse the internet corpse into avid Frankensteinian life?

Of course, the very robocultic nonsense disdained in such recognitions has found its way to the respectability and moneybags of Google, DARPA, Oxford, Stanford, MIT. And so, to imagine some deeper institutional strata where the really serious techno-transcendental engines are stoked actually takes us into conspiratorial territory rather quickly. Indeed, this fancy is a mirror image of the very pining one hears from frustrated Robot Cultists who know all too well in their heart of hearts that nobody is out there materializing their daydreams/nightmares for them, and so one hears time and time again the siren call for separatist enclaves, from taking over tropical islands or building offshore pirate utopias on oil rigs to huddling bubbled under the sea or taking a buckytube space elevator to their private L5 torus or high-tailing it out to their nanobotically furnished treasure cave -slash- mad scientist lab in the asteroid belt to do some serious cosmological engineering.

Again, it is utterly wrong-headed to think there are serious technical types working on "AI" -- because there is nothing for them to be working on. Again, "AI" is just a metaphorization and narrative device that enables some folks to organize all sorts of complex technical and political developments into something that feels like sense but is much more about wishes than working. The people solving real problems with code and technique and policy aren't doing "AI" and to read what they are doing through AI discourse is fatally to misread them. It is only a prior investment in the assumptions and aspirations, figures and frames of AI discourse that would lead anybody to think one should relinquish the scrum of real-world problem solving and ascend instead to some abstract ideality the better to formulate a "roadmap" with which to retroactively imbue technoscientific vicissitudes with Manifest Destiny or to treat as "the real problem" the non-problem of crafting humanist Asimovian injunctions to constrain imaginary robots from imaginary conflicts they cause in speculative fictions.

You don't have to worry about things nobody is working on. You shouldn't pin your hopes or your fears on pseudo-philosophical fancies or pseudo-scientific follies. You don't have to ban things that don't and won't exist anyway, at any rate not in the forms techno-transcendentalists are invested in. There are real things to worry about, among them real problems of security, resilience, user-friendliness, interoperability, surveillance. "AI" talk won't help you there. That should tell you right away it works instead to help you lose your way.

Monday, January 12, 2015

Nourishing Nothingness: Futurists Are Getting Virtually Serious About Food Politics

I'm a lacto-ovo vegetarian now, but obviously in The Future will be a digi-nano vegetarian...
Salon has alerted me to the existence of a new SillyCon Valley startup, Project Nourished, which hopes to use synesthetic cues from a virtual reality helmet, a vibrating spork, and whiffs from a perfume atomizer to fool America's obese malnourished gluttons into believing they are feasting on two-pound steaks and baskets of onion rings and death-by-chocolate sundaes when in fact they are eating gelatinous cubes of zero-calorie vitamin-fortified goo.

According to the breathless website, this proposal will "solve" the following problems: "anorexia, bulimia, cancer, diabetes, heart disease, obesity, allergies and co2 omissions."

The real problem solved by the project is that it definitively answers a question I have long pondered: Is futurology so utterly idiotic and smarmy that it is actually impossible to distinguish its most earnest expressions from even the most ridiculous parodies of them?

I mean, to literally name your project "nourish" while actually avowing you seek to peddle a product that nourishes no one is pretty breathtaking. It's like the scam of peddling sugary cereals as part of "this complete nutritious breakfast," when all the nourishment derives from the juice and eggs and toast accompanying the bowl in the glossy photo but almost never in the event of an actual breakfast involving the cereal in question. Except now, even the cereal isn't really there, but a bowl of packing cardboard over which is superimposed an image of Froot Loops with a spritz of grapefruit air-freshener shot in your nostril every time you take a bite.

Why ponder structural factors like the stress of neoliberal precarity or the siting of toxic industries near residences or the lack of grocery stores selling whole foods within walking distances or the punitive mass-mediated racist/sexist body norms that yield unhealthy practices, eating disorders, the proliferation of allergies and respiratory diseases and so on? Why concern yourself with public investment in medical research, healthcare access, vegetarian awareness, zoning for walkability, sustainable energy and transportation infrastructure and so on?

The Very Serious futurologists have a much better technofix for all that -- it's kinda sorta like the food pills futurologists have been promising since Gernsback, but now you would eat large empty candy colored polyhedra (you know, like the multisided dice nerds used to use to play D&D in the early 80s) while sticking your head in a virtual reality helmet (you know, like the virching rigs techbros have been masturbating over since the late 80s). Also, too, the stuff would be 3D-printed, because if you are a futurologist you've gotta get 3D-printing in there somewhere. As I said, Very Serious!

Returning to the website, we are told, "the project was inspired by the film Hook, where Peter Pan learns to use his imagination to see food on a table that seemed completely empty at first." Setting aside the aptness of drawing inspiration from a crappy movie rather than the actual book on which it is based -- only Luddites think books have a future, shuh! -- I propose that Project Nourished has a different filmic inspiration:

Saturday, January 10, 2015

Uploading As Reactionary Anti-Body Politics

A reader in the Moot describes some typical transhumanoid versions of "doing radical social criticism... saying something along the lines of, say, gender won't matter anymore when we upload our minds to the noosphere." For transhumanoid radical race critique fill in the blank (and try not to think too much about the history of eugenics, or how transhumanists seem to be a whole lot of white guys), for transhumanoid radical class critique here comes NanoSanta Clause.

Of course, not only is this not "doing radical social criticism" but it seems to me pretty explicitly straightforwardly reactionary, even when accompanied by citations of actual feminist, queer, or anti-racist criticism. Complacent consumers who want to enjoy a little liberal guilt to spice their entertainments will always rationalize the violence and inequity of the present by declaring the debased now better than before or on the road to better still and then grabbing a beer from the fridge, or clicking the buy button, or getting out on the dancefloor.

Plutocrats always naturalize their hierarchies as meritocracies. In much the same way, the whole robocultic uploading schtick is obviously a denigration of the materiality of the body, and it is always of course the white body male body straight body cis body healthy body capacious body that can best disavow its materiality because its materiality isn't in question or under threat, right?

It can be a mark more of privilege than perceptiveness to call into question that which won't ever be in question for you, whatever happens. The bodily is always constituted as such through technique (from language to body language to posture to wearability), and the social legibility of every body is of course performatively substantiated. To grasp that point is to trouble or question the prediscursivity of the body or to recognize that prediscursivity is always a discursive effect. But this recognition is at best a point of departure and never the end-point for the interrogation of prevailing normative bodies and their abjection of bodily lifeways otherwise.

The denial or disavowal of differences that make a difference is much more likely effectively to endorse than efface them. Imaginary digi-utopian and medi-utopian circumventions of raced, gendered, abled bodily differences function in the present to deny or disavow rather than critically or imaginatively interrogate their terms. These omissions are all the more egregious when we actually turn our minds even cursorily to the perniciously raced and sexed histories of the medical and the digital as actually-existing practical, normative, professional sites.

Setting aside questions of the utter implausibility and incoherence of the techno-transcendental wish-fulfillment fantasies playing out in all this, why even pretend that recourse to digital dematerialization or to medical enhancement would circumvent rather than express the fraught, inequitable legibility and livability of wanted lifeway diversity? It will surely be the more urgent task to attend closely to the ways in which these very differences, race, sex, ability, shape the distribution of costs, risks, benefits, access and information to actually-available prosthetic possibilities. 

I must say it has always cracked me up that since all information is instantiated on a material carrier, then even on their own terms the spiritualization of digi-info souls is hard to square with the reductionist scientism these folks tend to congratulate themselves over -- not that it would be anything to be proud of even if they managed to be more consistently dumb in that particular way.

What can you really expect from techno-transcendentalists apparently so desperate not to grow old or die that they will pretend a scan of them would be them when no picture ever has been, and that computer networks could reliably host their "info-souls" forever when most people long outlive their crufty, unreliable computer networks in reality, and all just so they can daydream they will be immortal cyberangels in Holodeck Heaven? Science!

The Political Problem With Transhumanisms

Upgraded and Expanded from a response of mine to some comments in the Moot: Well, I think probably the key conceptual problem with transhumanisms is that they have an utterly uninterrogated idea of "technology" that pervades their discourses and sub(cult)ures. They attend very little to the politics of naturalization/ de-naturalization, of habituation/ de-familiarization that invest some techniques/artifacts (but not others, indeed probably not most others) with the force of the "technological." Quite a lot of the status quo gets smuggled in through these evasions and disavowals, de-politicizing what could be done or made otherwise, and hence rationalizing incumbency. Whatever the avowed politics of a transhumanist, their depoliticization of so much of the field of the cultural-qua-prosthetic lends itself to a host of conservative/reactionary naturalizations in my view.

This is all the more difficult for the transhumanists to engage in any thoughtful way, since they are so invested in the self-image of being on the bleeding edge, embracing novelty, disruption, anti-nature, and so on. I daresay this might have been excusable in the irrationally exuberant early days of the home computer and the explosive appearance of the Web (I saw through it at the time, though, so it can't have been that hard, frankly), but what could be more plain these days at least than the realization how much "novelty" is merely profitably repackaged out of the stale, how much "disruption" is just an apologia for all too familiar plutocratic politics dismantling public goods?

Transhumanists turn out to fall for the oldest Madison Avenue trick in the book, mistaking consumer fandoms as avant-gardes. And then they fall for the same sort of phony radicalism as so many New Atheists do: mirroring rather than rejecting religious fundamentalism by recasting politics as moralizing around questions of theology; distorting the science they claim to champion by misapplying its norms and forms to moral, political, aesthetic, cultural domains beyond its proper precinct. (The false radicalism of scientism -- not science, scientism -- prevails more generally in technocratic policy-making practices in corporate-military think-tanks and in elite design discourses, many of which fancy themselves or at any rate peddle themselves as progressive, and transhumanist formulations lean on these tendencies in their bids for legitimacy but also these already prevailing practices and discourses are vulnerable to reframing in transhumanist terms; there are dangerous opportunities for reactionary politics going in both directions here.)

Transhumanists indulge what seems to me an utterly fetishistic discourse of technology -- in both Marxist and Freudian senses -- out of which a host of infantile conceits arrive in tow: Failing to grasp the technical/performative articulation of every socially legible body, cis as much as trans, "optimal" as much as "disabled," they fetishistically identify with cyborg bodies appealing to wish-fulfillment fantasies they seem to have consumed more or less wholesale from advertizing and Hollywood blockbusters. Failing to grasp the collective/interdependent conditions out of which agency emerges, they grasp at prosthetic fetishes to prosthetically barnacle or genetically enhance the delusive sociopathic liberal "rugged/possessive individual" in a cyborg shell, pretty much like any tragic ammosexual or mid-life crisis case does with his big gun or his sad sportscar.

I have found technoprogressives to be untrustworthy progressives (I say this as the one who popularized that very label), making common cause with reactionaries at the drop of a hat, too willing to rationalize inequity and uncritical positions through appeals to eventual or naturalized progress -- progress is always progress toward an end, its politics are defined by the politics of that end, and the substance of progress is not the logical or teleological unfolding of entailments but an interminable social struggle among a changing diversity of stakeholders -- and whatever they call themselves, techno-fixated techno-determinisms are no more progressive than any other variation of Manifest Destiny offered up to congratulate and reassure incumbent elites.

Time and time again in my decades-long sparring with futurologists both extreme and mainstream I have confronted in my interlocutors curious attitudes of consumer complacency and uncritical techno-fixation, as well as more disturbing confessions of fear and loathing: fear of death and hostility to the mortal, aging, vulnerable body, fear of error or humiliation and hostility to the contingency, errancy, boundedness of the biological brains and material histories in which intelligence is incarnated. To say this -- which is to say the obvious, I fear -- usually provokes howls of denial and disavowal, charges of ad hominem and hate speech, and so I will conclude on a different note: Again, I don't think any of these transhumanist susceptibilities to reaction are accidental or incidental; they arise out of the under-interrogated naturalized technological assumptions and techno-transcendental aspirations on which all superlative futurologies/ists so far have definitively depended.

Thursday, January 08, 2015

Robot Gods Are Nowhere So Of Course They Must Be Everywhere

Advocates of Good Old Fashioned Artificial Intelligence (GOFAI) have been predicting that the arrival of intelligent computers is right around the corner more or less every year since the formation of computer science and information science as disciplines, from World War II to Deep Blue to Singularity U. These predictions have always been wrong, though their ritual reiteration remains as strong as ever.

The serial failure of intelligent computers to make their long awaited appearance on the scene has led many computer scientists and coders to focus their efforts instead on practical questions of computer security, reliability, user-friendliness, and so on. But there remain many GOFAI dead-enders who keep the faith and still imagine that the real significance of solving problems with/in computation is that each advance is a stepping stone along the royal road to AI, a kind of burning bush offering up premonitory retroactive encouragement from The Future AI to its present-day acolytes.

In the clarifying extremity of superlative futurology we find techno-transcendentalists who are not only stubborn adherents of GOFAI in the face of its relentless failure, but who double down on their faith and amplify the customary insistence on the inevitable imminence of AI (all appearances to the contrary notwithstanding) and now declare no less inevitable the arrival of SUPER-intelligent artificial intelligence, insisting on the imminence of a history-shattering, possibly apocalyptic, probably paradisiacal, hopefully parental Robot God.

Rather than pay attention to (let alone learn the lessons of) the pesky failure and probable bankruptcy of the driving assumptions and aspirations of the GOFAI research program-cum-ideology, these techno-transcendentalists want us to treat with utmost seriousness the "existential threat" of the amplification of AI into a superintelligent AI in the wrong hands or with the wrong attitudes. I must say that I for one do not agree with Very Serious Robot Cultists at Oxford University like Nick Bostrom or at Google like Ray Kurzweil or celebrity tech CEOs like Elon Musk that the dumb belief in GOFAI becomes a smart belief rather than an even dumber one when it is amplified into belief in a GOD-AI, or that the useless interest in GOFAI becomes urgently useful rather than even more useless when it is amplified into worry about the existential threat of GOD-AI because it would be so terrible if it did come true. It would be terrible if Godzilla or Voldemort were real, but that is no reason to treat them as real or to treat as Very Serious those who want to talk about what existential threats they would pose if they were real when they are not (especially when there are real things to worry about).

The latest variation of the GOFAI via GOD-AI gambit draws on another theme beloved by superlative futurologists, the so-called Fermi Paradox -- the fact that there are so very many stars in the sky and yet no signs that we can see so far of intelligent life out there. Years ago, I proposed
The answer to the Fermi Paradox may simply be that we aren't invited to the party because so many humans are boring assholes. As evidence, consider that so many humans appear to be so flabbergastingly immodest and immature as to think it a "paradoxical" result to discover the Universe is not an infinitely faceted mirror reflecting back at us on its every face our own incarnations and exhibitions of intelligence.
I for one don't find it particularly paradoxical to suppose life is comparatively rare in the universe, especially intelligent life, and more especially still the kind of intelligent life that would leave traces of a kind that human beings here and now would discern as such, given how little we understand about the phenomena of our own lives and intelligence and given the astronomical distances involved. As the Futurological Brickbat quoted above implies, I actually think the use of the word "paradox" here probably indicates human idiocy and egotism more than anything else.

A recent article in Vice's Motherboard collects a handful of proponents of a "new view" on this question that proposes instead that the "dominant intelligence in the cosmos is probably artificial." The use of the word "probable" there may make you think that there is some kind of empirical inquiry afoot here, especially since all sorts of sciency paraphernalia surrounds the assertion, and its proponents are denominated "astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick." NASA and the Library of Congress are institutions that have some real heft, but let's just say that typing the word "transhumanist" into a search for any of those names may leave you wondering a bit about the robocultic company they keep.

But what I want to insist you notice is that the use of the term "probability" in these arguments is a logical and not an empirical one at all: What it depends on is the acceptance in advance of the truth of the premise of GOFAI via GOD-AI, which is in fact far from obvious and which no one would sensibly take for granted. Indeed, I propose that like many arguments offered up by Robot Cultists in more mainstream pop-tech journalism, the real point of the piece is to propagandize for the Robot Cult by indulging in what appear to be harmless blue-sky speculations on science fictional conceits but which entertain as true, and so functionally bolster, what are actually irrational and usually pernicious articles of futurological faith.
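To see how little work observation is doing here, consider a toy calculation (every number below is invented purely for illustration): however the conditional likelihoods get dressed up in sciency paraphernalia, the headline "probability" that the dominant cosmic intelligence is artificial simply tracks the prior one has already assigned to the premise that GOFAI-style machine superintelligence is achievable at all.

```python
# Toy illustration with invented numbers: the "probability" that the
# dominant cosmic intelligence is artificial is conditional on the prior
# probability assigned to the GOFAI premise itself. No telescope data
# enters the calculation; the conclusion is inherited from the premise.

def p_dominant_artificial(p_gofai_possible, p_artificial_given_possible=0.9):
    """Probability the dominant intelligence is artificial, given an
    assumed prior that machine superintelligence is possible at all."""
    return p_gofai_possible * p_artificial_given_possible

# Grant the premise and the conclusion looks near-certain; doubt the
# premise and the same argument yields almost nothing.
for prior in (1.0, 0.5, 0.01):
    print(prior, p_dominant_artificial(prior))
```

The argument's apparent empirical force dissolves once the prior is made explicit: the only input doing any work is the article of faith itself.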

The philosopher Susan Schneider (search "Susan Schneider transhumanist," go ahead, try it) is paraphrased in the article saying "when it comes to alien intelligence... by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology." This formulation buries the lede in my view, and quite deliberately so. That is to say, what is really interesting here -- one might actually say it is flabbergasting -- is the revelation of a string of techno-transcendental assumptions: [one] that technodevelopmental vicissitudes are not contingently sociopolitical but logically or teleologically determined; [two] that biology could be fundamentally transformed while remaining legible to the transformed (that's the work done by the reassuring phrase "their own"); [three] that jettisoning biological bodies for robot bodies and "uploading" our biological brains into "cyberspace" is not only possible but desirable (make no mistake about it, that is what she is talking about when she talks about "upgrading biology" -- by the way, the reason I scare-quote words like "upload" and "cyberspace" is because those are metaphors, not engineering specs, and unpacking those metaphors exposes enough underlying confusion and fact-fudging that you may want to think twice about trusting your "biological upgrade" to folks who talk this way, even if they chirp colloquially at you that your immortal cyberangel soul-upload into Holodeck Heaven is just a "hop-skip away" from easy peasy radio technology); and [four] that terms like "upgrade," freighted as they are with a host of specific connotations derived from the deceptive hyperbolic parasitic culture of venture-capitalism and tech-talk, are the best way to characterize fraught fundamental changes in human lives to be brought about primarily by corporate-military incumbent-elites seeking parochial profits. Maybe you want to read that last bit again, eh?

Seth Shostak quotes from the same robocultic catechism a paragraph later: “As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI... At that point, soft, squishy brains become an outdated model.” Notice the same technological determinism. Notice that the invention of AI is then declared to be probable within a century -- no actual reasons are offered up in support of this declaration, and it is made in defiance of all evidence to the contrary. And then notice that suddenly we find ourselves once again in the moral universe of techno-transcendence: where Schneider assumed robot bodies and cyberspatial uploads would be "upgrades" (hop-skipping over the irksome questions whether such notions are even coherent or possible on her terms, whether a picture of you could be you, whether fetishized prosthetization would be enhancing to all possible ends or disabling to some we might come to want, or immortalizing when no prostheses are eternal, etc.), Shostak leaps to the ugly obverse face of the robocultic coin: "soft, squishy brains" are "outdated model[s]." Do you think of your incarnated self as a "model" on the showroom floor, let alone an outdated one? I do not. And refusing such characterizations is indispensable to resisting being treated as one. Maybe you want to read that last bit again, eh?

“I believe the brain is inherently computational -- we already have computational theories that describe aspects of consciousness, including working memory and attention,” Schneider is quoted as saying in the article. "Given a computational brain, I don’t see any good argument that silicon, instead of carbon, can’t be an excellent medium for experience.” Now, I am quite happy to concede that phenomena enough like intelligence and consciousness for us to call them that might in principle take different forms from the ones exhibited by conscious and intelligent people (human animals, and I would argue also some nonhuman animals) and be materialized differently than in the biological brains and bodies and historical struggles that presently incarnate them.

But conceding that logical possibility does not support in the least the assertion that non-biological intelligences are inevitable, that present human theories of intelligence tell us enough to guide us in assessing these possibilities, that human beings are on the road to coding such artificial intelligence, or that current work in computer theory or coding practice shows any sign at all of delivering anything remotely like artificial intelligence any time soon. Certainly there is no good reason to pretend the arrival of artificial intelligence (let alone godlike superintelligence) is so imminent that we should prioritize worrying about it over deliberation about actually real, actually urgent, actually ongoing problems like climate change, wealth concentration, exploited majorities, neglected diseases, abuse of women, arms proliferation, human trafficking, military and police violence.

What if the prior investment in false and facile "computational" metaphors of intelligence and consciousness is evidence of the poverty of the models employed by adherents of GOFAI, and among the problems yielding its serial failure? What if such "computational" frames are symptoms of a sociopathic hostility to actual animal intelligence, or simply reveal ideological commitments to the predatory ideology of Silicon Valley's unsustainable skim-and-scam venture capitalism?

Although the proposal of "computational" consciousness is peddled here as a form of modesty, as a true taking-on of the alien otherness of alien intelligence in principle, what if these models of alien consciousness reflect most of all the alienation of their adherents -- the sociopathy of their view of their own superior computational intellects and their self-loathing of the frailties in that intellect's "atavistic" susceptibility to contingency, error, and failure -- rather than any embrace of the radical possibilities of difference?

It is no great surprise that the same desperate dead-enders who thought they could make the GOFAI lemon into GOD-AI lemonade would then go on to find evidence of the ubiquity of that GOD-AI in the complete lack of evidence of GOD-AI anywhere at all. What matters about the proposal of this "new view" on the Fermi Paradox is that it requires us to entertain as possible, so long as we are indulging the speculation at hand, the very notion of GOFAI that we otherwise have absolutely no reason to treat seriously at all.

Exposing the rhetorical shenanigans of faith-based futurologists is a service I am only too happy to render, of course, but I do want to point out that even if there are no good reasons to treat the superlative preoccupations of Robot Cultists seriously on their own terms (no, we don't have to worry about a mean Robot God eating the earth; no, we don't have to worry about clone armies or designer baby armies or human-animal hybrid armies taking over the earth; no, we don't have any reason to expect geo-engineers from Exxon-Mobil to profitably solve climate change for us or gengineers to profitably solve death and disease for us or nanogineers to profitably solve poverty for us) there may be very good reasons to take seriously the fact that futurological frames and figures are taken seriously indeed.

Quite apart from the fact that time spent on futurologists is time wasted in distractions from real problems, the greater danger may be that futurological formulations derange the terms of our deliberation on some of the real problems. Although the genetic and prosthetic interventions techno-triumphalists incessantly crow about have not enhanced or extended human lifespans in anything remotely like radical ways, the view that this enhancement and extension MUST be happening if it is being crowed about so incessantly has real world consequences, making consumers credulous about late-nite snake-oil salesmen in labcoats, making hospital administrators waste inordinate amounts on costly gizmos and ghastly violations in end-of-life care, rationalizing extensions of the retirement age for working majorities broken down by exploitation and neglect. Although the geo-engineering interventions techno-triumphalists incessantly crow about cannot be coherently characterized and seem to depend on the very funding and regulatory apparatuses the necessary failure of which is usually their justification, the view that such geo-engineering MUST be our "plan B" or our "last chance" provides extractive-industrial eco-criminals fresh new justifications to deny any efforts at real world education, organization, legislation to address environmental catastrophe. The very same techno-deterministic accounts of history that techno-triumphalists depend on for their faith-based initiatives provided the rationales that justified, in nations emerging from colonial occupation, indebtedness to their former occupiers -- in the name of vast costly techno-utopian boondoggles like superdams and superhighways and skyscraper skylines -- and then the imposition of austerity regimes that returned those nations to conditions of servitude.

Although I regard as nonsensical the prophetic utterances futurologists make about the arrival any time soon, or necessarily ever, of artificial intelligence in the world, I worry that there are many real world consequences of the ever more prevalent deployment of the ideology of artificial life and artificial intelligence by high-profile "technologists" in the popular press. I worry that the attribution of intelligence to smart cards and smart cars and smart phones, none of which exhibit anything like intelligence, confuses our sense of what intelligence actually is and risks denigrating the intelligence of the people with whom we share the world as peers. To fail to recognize the intelligence of humans risks the failure to recognize their humanity and the responsibilities demanded of us inhering in that humanity. Further, I worry that the faithful investment in the ideology of artificial intelligence rationalizes terrible decisions, justifies the outsourcing of human judgments to crappy software that corrects our spelling of words we know but that it does not, that recommends purchases and selects options for us in defiance of the complexities and dynamism of our taste, that decides whether banks should find us credit-worthy whatever our human potential or states should find us target-worthy whatever our human rights.

Futurology rationalizes our practical treatment as robots through an indulgence in what appears to be abstract speculation about robots. The real question to ask of the Robot Cultists, and of the prevailing tech-culture that popularizes their fancies, is not how plausible their prophecies are but what pathologies these prophecies symptomize and what constituencies they benefit.

Monday, January 05, 2015

State of the Blog

In May 2014 Amor Mundi had its tenth birthday. I can't say that I had such a thing in mind when I started this blog, but Amor Mundi has turned out to be my most sustained and consistent intellectual effort at this point, apart from teaching. It's strange to contemplate the mountain of archive I've accumulated and scaled in all this time, to consider what it amounts to, what it is good for, what it took me from, where it is going.

2013 was the first time in the history of the blog that I had posted less in a year here than the year before, but in 2014 I posted even less than in 2013. I suspect that the greater energy I have devoted to microblogging via twitter accounts for some of this. I have often used twitter as a springboard, sounding board, being bored, and also as a promotional space for Amor Mundi, and so the relations between my twitter and blogger accounts have been more collaborative than I would have expected.

A few days ago I posted a list of the most widely read pieces here from last year. Although half of the energy I expend here goes to making sense of and venting frustration over politics, as usual almost nobody was the least bit interested in this material (after all there are a million people saying the same sorts of thing generally) and it is pretty much only when I ridicule Robot Cultists or techbro venture capitalist skim-and-scam artists that readers perk up. Not only were most of my most widely read posts skewering techno-utopian scams, but frankly such pieces were almost the only ones that attracted hits in the hundreds. Despite that, I was pleased that stuff I wrote in the midst of what was a personally fraught battle of adjuncts like me and my colleagues for union representation received a lot of attention, too. That was an important part of the year as I lived it, and it was nice to see it registered retrospectively in the life of the blog as well.

The single most widely read piece from last year was the forum page I created in connection with the Existenz volume on Posthumanism to which I contributed an essay myself. Since that remains the piece of anti-futurological critique I am proudest of writing, I was pleased other people have paid some attention to it as well. I was also very pleased that people read my extended elaboration of themes from the short essaylet I was asked to write for the New York Times "When Geeks Rule" forum -- mostly because I thought the piece that appeared in the Times itself ended up being pretty vapid and disappointing, but with more room to roam in I could get at what really mattered to me on the topic.

This year, as with last year, some of my posts were also published at the World Future Society, where somewhere between twice and twenty times as many people gave them a looksee. I should probably post there more, but I always feel a bit uncomfortable doing so, despite the encouragement I have received, because my whole unfuturist schtick feels a little like an asshole move directed at explicitly futurist-identified folks.

One general development I will note by way of conclusion is that while my readership has remained relatively stable over the years -- as an academic I am quite comfortable with the idea that reaching a modest number of minds can make a difference that makes the effort more than worthwhile -- I do get the feeling that my sympathetic readership has grown quite a lot. In the early days of the blog I think the majority of my readership were Robot Cultists who liked to read me to get mad and post endless variations of "I Know You Are But What Am I?" in the Moot -- they always reminded me of those weird liberals who get off on getting mad at Fox News -- whereas nowadays more people who read me seem to be kindred spirits, either because so many people have been burned by techno-utopian scams and plutocratic fever-dreams there are more folks around who just enjoy the release of a good rant on such topics or because they find it clarifying (as I do myself) to connect the ugly prevailing unsustainable and plutocratic tendencies in corporate-military consumer-complacent techno-fetishistic neoliberalism/neoconservatism and the weird pathological extremities of these tendencies playing out in the techno-transcendental sects of the Robot Cult's transhuman eugenicists, singularitarian Robot God warriors, techno-immortalist and nanosantalogical wish-fulfillment fantasists and inane greenwashing geo-engineers.

Friday, January 02, 2015

To Declare Oneself Beyond Left And Right Is Almost Always To Disavow That One Is On The Right

Another reddit comment:
I think the important distinction for BI is between authoritarian and anti-authoritarian, not between left and right. The two dichotomies are orthogonal. Anti-authority/pro-liberty types typically require only a brief explanation of BI before enthusiastically signing up; this is true of principled libertarians as much as it is of the counter-culture. As for authoritarians, I don't think the UBI movement has yet had to stare into the howling abyss of left-wing authoritarian hatred of BI and all it stands for. Not everyone who is allergic to individual independence and self-rule is a right winger. Many of them are working class. How can we ask a member of the working class to support SLACK? So yes, I agree with you; but only because I think that right vs. left is not the dimension that counts for BI.
Where to begin?!
The "authoritarian axis" introduced here obscures much more than it reveals, and it comes from a very interested right-wing rhetorical position. Market libertarian ideological proselytizing via the "World's Shortest Political Quiz" and the related "Political Compass" (a compass that makes you get lost, how droll!) provides the key context here, as do mainstream pundit commonplaces about "independent" majorities who are presumably "culturally or socially liberal but fiscally conservative."

These frames seek to obscure the relevance of left-right analysis to certain right-wing politics in order to support the status quo and the incumbent-elite interests aligned with it. Of course, market libertarians like to pretend they are "beyond left and right" (or try to market themselves with distracting neologisms -- independent! upwinger! dynamist!) because they can no more prevail with majorities than conventional Republicans can if they are too explicit about their actual alignment with the interests of plutocratic minorities.

Market libertarianism is a right-wing ideology -- it claims to be anti-authoritarian while endorsing corporate-militarism, and to advocate non-violence while endorsing contractual outcomes as non-violent by fiat whatever the terms of misinformation and duress shaping them. Since "fiscal conservatism" always cashes out in de-regulatory and privatizing schemes dismantling the legal/welfare affordances of social equity and cultural pluralism this means that the "cultural/social liberalism" always proclaimed alongside the "fiscal conservatism" has no real substance.

It is no accident that the anti-authoritarianism of market libertarians always plays out as hostility to almost all government except for armies and police to keep the wage slaves from revolting against their plutocratic masters. It is also no accident that market libertarian arguments only impact actual politics when they provide selective justifications for GOP positions.

People manifestly mean different things by basic income advocacy depending on whether they are coming from left or right, but it isn't exactly surprising that someone who falsely imagines right-wing libertarianism to be beyond left or right would imagine basic income advocacy figured through a libertarian lens to be the same.

The commenter declares that the UBI movement has yet to "stare into the howling abyss of left-wing authoritarian hatred of BI and all it stands for" -- but the reason for this non-event is that the howling left-wing authoritarian abyss conjured here is a classic paranoid fantasy of the reactionary right. In this it is not unlike that slip-up about "working class" folks "allergic" to "independence" -- ooh, just smell the makers-v-takers race/class politics of "liberty"!

I'm sure Stalinist industrial-militarism and Maoist feudalism will leap to libertopian minds at my dismissal of these reactionary fever-dreams, but it really isn't difficult to grasp that the totalitarian impulse is a right-wing one, once you shed the re-mapping demanded by the World's Shortest Political Quiz. If you can grasp that Nazism was a movement of the right despite the word "socialist" in the logo, it shouldn't be that complicated after all to trouble too slick an identification of the left with the gulag either. Neither is it so much of a leap to grasp that the left impulse is essentially democratizing work toward equity-in-diversity, once you set aside market fundamentalist pieties and the GOP's corpse-cold Cold War playbook.

Comparably fantastical is the commenter's confident assertion that "pro-liberty types typically require only a brief explanation of BI before enthusiastically signing up." Yeah, except when they don't, which is pretty much always. Sure, a few market fundamentalists have tossed out thought-experiments about basic income when they were looking for a chance to score rhetorical points (what they mean by "signing up") about how awesome it would be to demolish the New Deal once and for all, but they never want to actually do anything (what it should mean to "sign up") to end wage slavery, eliminate the precarity draft, or secure informed non-duressed contracts. When have they "signed up" to do anything so jack-booted socialist as all that otherwise? When have they made their actual cases on such terms anyway? Attributing such pro-liberty enthusiasm to the occasional right-wing pseudo-scholarly foray into basic income thought experiments seems pretty far-fetched.

No doubt I am being biased, tribalist, immoderate, unreasonable to ask anybody to face these awkward facts.

Tuesday, December 30, 2014

Top Posts for 2014

14. "Summoning the Demon": Robot Cultist Elon Musk Reads from Robo-Revelations at MIT October 27
13. Gizmuddle: Or, Why the Futuristic Is Always Perverse January 25
12. The Future Is A Hell of a Drug April 7
11. Car Culture Is A Futurological Catastrophe January 14
10. Very Serious Robocalyptics October 5
9. Em Butterfly: Robot Cultists George Dvorsky and Robin Hanson Go Overboard For Robo-Overlords February 24
8. Robot Cultist Martine Rothblatt Is In the News September 9
7. Geek Rule Is Weak Gruel: Why It Matters That Luddites Are Geeks September 19
6. R.U. Sirius on Transhumanism October 19
5. Rachel Haywire: Look At Me! Look At Me! Even If There's Nothing To See! August 18
4. It's Now Or Never: An Adjunct Responds to SFAI's Latest Talking Points May 5
3. Techbro Mythopoetics December 22
2. San Francisco Art Institute Touts Diego Rivera Fresco Celebrating Labor Politics While Engaging in Union Busting May Day
...and number 1. Forum on the Existenz Journal Issue, "The Future of Humanity and the Question of Post-Humanity" March 9

To round the list out to a nice full fifteen, I append not a hit but a miss, a post fewer people got a kick out of the first time around than I expected, given what most people come here to read: Tragic Techbrofashionistas of The Future Put. A. Phone. On. It! from January 6.

Apart from that last addition, these are essentially the most widely read of this year's posts, excluding a few popular but comparatively insubstantial one-liners. I'll share a few observations about these in the annual State of the Blog post to be written hungover from my bunker come the new year. You can compare these to the listicles from the last couple of years if you like: Top Posts for 2012 and Top Posts for 2013.

Friday, December 26, 2014

The Inevitable Cruelty of Algorithmic Mediation

Also posted at the World Future Society.

On Christmas Eve, Eric Meyer posted a devastating personal account reminding us of the extraordinary cruelty of the lived experience of ever more prevailing algorithmic mediation.

Meyer's Facebook feed had confronted him that day with a chirpy headline that trilled, "Your Year in Review. Eric, here's what your year looked like!" Beneath it, there was the image that an algorithm had number-crunched to the retrospective forefront, surrounded by clip-art cartoons of dancing figures with silly flailing arms amidst balloons and swirls of confetti in festive pastels. The image was the face of Eric Meyer's six-year-old daughter. It was the image that had graced the memorial announcement he had posted upon her death earlier in the year. Describing the moment when his eye alighted on that adored unexpected gaze, now giving voice to that brutally banal headline, Meyer writes: "Yes, my year looked like that. True enough. My year looked like the now-absent face of my little girl. It was still unkind to remind me so forcefully."

Meyer's efforts to come to terms with the impact of this algorithmic unkindness are incomparably more kind than they easily and justifiably might have been. "I know, of course, that this is not a deliberate assault. This inadvertent algorithmic cruelty is the result of code that works in the overwhelming majority of cases." To emphasize the force of this point, "Inadvertent Algorithmic Cruelty" is also the title of Meyer's meditation. "To show me Rebecca’s face and say 'Here’s what your year looked like!' is jarring," writes Meyer. "It feels wrong, and coming from an actual person, it would be wrong. Coming from code, it’s just unfortunate." But just what imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is "coming from code" as opposed to coming from actual persons? Aren't coders actual persons, for example?

Needless to say, Meyer has every right to grieve and to forgive and to make sense of these events in the way that works best for him. And of course I know what he means when he seizes on the idea that none of this was "a deliberate assault." But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of Meyer's confrontation with their algorithm was the inevitable, at least occasional, result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations. I am not so sure the word "inadvertent" quite captures the culpability of those humans who wanted and coded and implemented and promoted this algorithmic cruelty.

And I must say I question the premise of the further declaration that this code "works in the overwhelming majority of cases." While the result may have been less unpleasant for other people, what does it mean to send someone an image of a grimly-grinning, mildly intoxicated prom-date or a child squinting at a llama in a petting zoo surrounded by cartoon characters insisting on our enjoyment and declaring "here's what your year looked like"? Is that what any year looks like or lives like? Why are these results not also "jarring"? Why are these results not also "unfortunate"? Is any of this really a matter of code "working" for most everybody?

What if the conspicuousness of Meyer's experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? Meyer ultimately concludes that his experience is the result of design flaws which demand design fixes. Basically, he proposes that users be provided the ability to opt out of algorithmic applications that may harm them. Given the extent to which social software forms ever more of the indispensable architecture of the world we navigate, this proposal places an extraordinary burden on those who are harmed by carelessly implemented environments they come to take for granted while absolving those who build, maintain, own, and profit from these environments from the harms resulting from their carelessness. And in its emphasis on designing for egregious experienced harms, this proposal disregards costs, risks, harms that are accepted as inevitable when they are merely habitual, or vanish in their diffusion, over the long-term, as lost opportunities hidden behind given actualities.

But what worries me most of all about this sort of "opt out" design-fix is that with each passing day algorithmic mediation is more extensive, more intensive, more constitutive of the world. We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don't, when people treat a word cloud as an analysis of a speech or an essay. We don't joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. Meyer's experience of algorithmic cruelty is extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still "opt out" from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

Thursday, December 25, 2014

Contextualizing My Anti-Futurological Critique for Theoryheads

This rather densely allusive sketch contextualizing my anti-futurological critique won't be everybody's cup of tea, but I've upgraded and adapted it from my response to a comment in the Moot for those readers who find this sort of thing useful but who would likely miss it otherwise. I still think probably the best, most concise and yet complete(-ish) formulation of my critique is the contribution I published in the recent Existenz volume on posthumanism.
You accuse me of indulging in futurism while critiquing it. All the big boys make such moves, I don't think less of you for trying it. I have heard what you have to say so far, and I must say it seems to me you are baldly wrong to say this, and that believing it sends you off-track...

What I mean by futurism has its origins in specific institutional histories and discursive practices: namely, the emergence of fraudulent methodologies/ rationales of speculation in market futures and the extrapolative genre of the scenario in military think-tanks -- all taking place in the wider context of the suffusion of public deliberation and culture with the hyperbolic and deceptive techno-progressive norms and forms of consumer advertising...
To give you a sense of where I am coming from and to give you a sense of what I am hearing when you say "modernity" and how I might try to take us elsewhere with futurity-against-futurology, I provide this handy sketch:

To the extent that post-modernity (late modernity, a-modernity, neoliberalism, whatever) is the post-WW1/2 inflation of the petrochemical bubble in which other postwar financial bubbles are blown, my anti-futurology is of a piece with Lyotard's (whatever my differences with him, of which I have many, he makes some of the same warnings).
To the extent that futurism markets elite-incumbency as progress, my anti-futurology is also of a piece with some of Debord's critique of the Spectacle, so-called (the parts about "enhanced survival" in particular), specifically to the extent that Debord's tale of "being degraded into having degraded into appearing" derives from Adorno's culture industri(alization) as formula-filling-mistaken-for-judgment and Benjamin's War Machine as the displacement of a revolutionary equity-in-diversity from the epilogue of Art in the Age.
Your emphasis seems more attuned to aesthetic modernities, so the larger context for me is the proposal that between the bookends of Thirty-Years' Wars from Westphalia to Bretton Woods European modernity indulged in a host of querelles des anciens et des modernes, culture wars presiding over and rationalizing the ongoing organization of social militarization/ administration of nation-states and their competitive internationalism.

"The Future" of futurisms in my sense arises out of those discourses. Design discourses are especially provocative for my critical position, for example, since they are patently futurological -- at once doing and disavowing politics; peddling plutocracy qua meritocracy through their exemplary anti-democratizing "Most Advanced Yet Acceptable" MAYA principle -- but still quite modern in what I think is your sense of the term. This matters because futurological global/digital rationality is for me an importantly different phenomenon than the modern that constitutes itself in the repudiation of the ancient: the futurist for me is in between, at once a vestige of modern internationalism and a harbinger of post-nationalist planetarity.

Planetarity is a term I am taking from Spivak, and my sense of where we are headed -- if anywhere -- is informed by queer/critical race/post-colonial/environmental justice theories like hers. In my various theory courses I usually advocate in my final lecture (the one with the final warnings and visions in it) for a polycultural planetarity -- where the "polyculture" term resonates with Paul Gilroy's post-Fanonian convivial multiculturalism as well as with the repudiation of industrial monoculture for companion planting practices in the service of sustainability (but also synecdochic for sustainable political ecology), and then the "planetarity" term marks the failure/ eclipse of nation-state internationalism (say, UN-IMF-World Bank globalization) in digital financialization, fraud, marketing harassment, and surveillance and ecological catastrophe. Polycultural planetarity would build ethics and mobilize democratizations via contingent universalization (that's from my training with Judith Butler no doubt) in the future anterior (a Spivakian understanding of culture as interpretation practices toward practical conviviality). For me, that future anterior is the futurity inhering in the present in the diversity of stakeholders/peers to presence, very much opposed to the closures, reductions, extrapolations, instrumentalizations of "The Future."
Lots of name-dropping there, I know, but almost every phrase here can easily turn into a three-hour lecture, I'm afraid, in one of my contemporary critical theory survey courses. I suspect you might be tempted to assimilate all that feminist/queer/posthumanist/critical race theoretical complex to the categories you already know -- forgive me if I have jumped to conclusions in so saying -- but I think that would be an error, more an effort to dismiss and hence not have to read the work than think what we are doing as Hannah Arendt enjoined, the call I hear every day that keeps me going.

Monday, December 22, 2014

Techbro Mythopoetics

In an enjoyable rant over at io9 today, Charlie Jane Anders declares herself Tired of "The Smartest Man in the Room" science fiction trope. Her delineation of the stereotype is immediately legible:
The "smartest man in the room" is a kind of wish-fulfillment for reasonably smart people, because he's not just clever but incredibly glib. As popularized by people like Doctor Who/Sherlock writer Steven Moffat and the creators of American shows like House and Scorpion, the "smartest guy in the room" thinks quicker than everybody else but also talks rings around them, too. He's kind of an unholy blend of super-genius and con artist. Thanks to the popularity of Sherlock, House and a slew of other "poorly socialized, supergenius nerd" shows, the "smartest man in the room" has become part of the wallpaper. His contempt for less intelligent people, mixed with adorable social awkwardness, and his magic ability to have the right answer at every turn, have become rote.
Later, she offers up a preliminary hypothesis that the intelligibility and force of the archetype derives from the widespread experience of consumers who feel themselves to be at the mercy of incomprehensible devices and therefore of the helpful nerds in their lives who better understand these things. I actually don't think the world is particularly more technologically incomprehensible now than it has always somewhat been in network-mediated extractive-industrial societies -- tech-talkers like to say otherwise because it consoles them that progress is happening rather than the immiserating unsustainable stasis that actually prevails -- but that is a separate discussion. I do think Anders strikes very much the right note when she declares The Smartest Guy in the Room archetype a "wish-fulfillment fantasy," but I am not sure that I agree with her proposal about how the fantasy is operating here.

What is perplexing about the Smartest Guy in the Room archetype, as well as for the more ubiquitous savvy but awkward nerd archetype, is the combination in it of superior knowledge and social ineptitude. Anders proposes that this fantasy space is doubly reassuring -- securing our faith that helpful people will always be around to navigate the incomprehensible technical demands of the world, but that we need not feel inferior in our dependency because these helpful people gained their superior knowledge at the cost of a lack of basic social skills nobody in their right mind would actually choose to pay. The gawky awkward nerd is as obviously inferior as superior, we get to keep our toys with our egos intact, and everybody wins (even the losers).

All this sounds just idiotically American enough to be plausible, but seems to assume that few of her readers -- or anybody, for that matter -- actually identify with the nerds. Anders seems to have forgotten that she begins her piece with the assertion that The Smartest Guy in the Room is "wish-fulfillment for reasonably smart people," that is to say, the self-image of her entire readership. And of course the truth is that nearly every one of her readers does identify with the archetype, indeed the archetype is a space of aspirational identification in culture more generally, an identification which fuels much of the lucrative popularity and currency of spectacular science fiction and fantasy and geekdom more generally in this moment. That is the real problem that makes the phenomenon Anders has observed worthy of criticism in the first place.

Anders describes the Smartest Guy in the Room as someone who has "contempt for less intelligent people, mixed with adorable social awkwardness, and [a] magic ability to have the right answer at every turn." It is crucial to grasp that what appears as a kind of laundry list here is in fact a set of structurally inter-dependent co-ordinates of the moral universe of The Smartest Guy in the Room. He doesn't merely happen to be right all the time and socially awkward and contemptuous of almost everybody else: his sociopathic contempt is the essence of his social awkwardness, rationalized by his belief that he is superior to them because he is always right about everything, at least as he sees it.

Before I am chastised for amplifying harmless social awkwardness into sociopathy, let me point out that the adorable nerds of Anders' initial formulations are later conjoined to a discussion of Tony Stark, the cyborgically-ruggedized hyper-individualist bazillionaire tech-CEO hero of the Iron Man blockbusters. Although Anders describes this archetype in terms of its popular currency in pop sf narrative and fandom today, I think it is immediately illuminating to grasp the extent to which Randroidal archetypes Howard Roark, Francisco d'Anconia, Henry Rearden, and John Galt provide the archive from which these sooper-sociopath entrepreneurial mad-scientist cyborg-soldiers are drawn (if you want more connective tissue, recall that Randroidal archetypes are the slightest hop, skip, and jump away from Heinleinian archetypes and now we're off to the races).

The truth is that there is no such thing as the guy who knows all the answers, or who solves all the problems. Problem-solving is a collective process. There is more going on that matters than anybody knows, even the people who know the most. Even the best experts and the luminous prodigies stand on the shoulders of giants, depend on the support of lovers and friends and collaborators and reliable norms and laws and infrastructural affordances, benefit from the perspectives of critics and creative appropriations. Nobody deserves to own it all or run it all, least of all the white guys who happen to own and run most of it at the moment, and this is just as true when elite-incumbency hides its rationalizations for privilege behind a smokescreen of technobabble. 

The sociopathy of the techno-fixated Smartest Guy in the Room is, in a word, ideological. Anders hits upon an enormously resonant phrasing when she declares him "an unholy blend of super-genius and con artist." In fact, his declared super-genius is an effect of con-artistry -- the fraudulent cost- and risk-externalization of digital networked financialization, the venture-capitalist con of upward-failing risk-fakers uselessly duplicating already available services and stale commodities as novelties, the privatization of the "disruptors" and precarization of "crowdsource"-sharecropping -- the "unholy" faith on the part of libertechbrotarian white dudes that they deserve their elite incumbent privileges.

Perhaps this is a good time to notice that when Anders says the Smartest Guy in the Room provides "wish-fulfillment for reasonably smart people" her examples go on to demonstrate that by people she happens always to mean only guys and even only white guys. She does notice that the Smartest Guy does seem to be, you know, a guy and provides the beginnings of a gendered accounting of the archetype: "the 'smartest guy' thing confirms all our silliest gender stereotypes, in a way that's like a snuggly dryer-fresh blanket to people who feel threatened by shifting gender roles. In the world of these stories, the smartest person is always a man, and if he meets a smart woman she will wind up acknowledging his superiority."

That seems to me a rather genial take on the threatened bearings of patriarchal masculinity compensated by cyborg fantasizing, but at least it's there. The fact that the Smartest Guy keeps on turning out to be white receives no attention at all. This omission matters not only because it is so glaring, but because the sociopathic denial of the collectivity of intelligence, creativity, progress, and flourishing at the heart of the Smartest Guy in the Room techno-archetype is quite at home in the racist narrative of modern technological civilization embodied in inherently superior European whiteness against which are arrayed not different but primitive and atavistic cultures and societies that must pay in bloody exploitation and expropriation the price of their inherent inferiority. That is to say, the Smartest Guy in the Room is also the Smartest Guy in History, naturally enough, with a filthy treasure pile to stand on and shout his superiority from.

From the White Man's Burden to Yuppie Scum to Techbro Rulz, the Smartest Guy in the Room is one of the oldest stories in the book. And, yeah, plenty of us are getting "kind of tired" of it.

Sunday, December 21, 2014

Why Our Militant Atheists Are Not Secular Thinkers

Secularity -- from the Latin saecularis, worldly, timely, contingent -- properly so called, is very much a pluralist and not an eliminationist impulse. In naming the distinction of worldly affairs from spiritual devotions, it differentiated the good life of the vita contemplativa of philosophy from that of the vita activa of the statesman or aesthete, but later went on to carve out the distinctions of clerical from government, legal, professional authorities. The Separation of Church and State as pillar of secular thinking and practice is the furthest imaginable thing from sectarian or ethnic strife amplified by the eliminationist imagination into genocidal violence -- and yet the identification of today's militant atheists with a "secular worldview" risks precisely such a collapse.

Secularism has never demanded an anti-religiosity but recognized the legitimacy of non-religiosities. Indeed, in diverse multicultures such as our own secularism becomes indispensable to the continuing life of religious minorities against majority or authoritarian formations of belief, and hence is not only not anti-religious but explicitly facilitative of variously religious lifeways as it is of variously non-religious lifeways.

I have been an atheist since 1983 -- over thirty years by now! -- after a Roman Catholic upbringing. I am quite happy to live a life a-theist -- "without god(s)" -- myself, but the primary value of secularism to me has always been its entailment of and insistence on a pluralist practice of reason, in which we recognize that there are many domains of belief distinguished in their concerns, in their cares, and in the manner of their convictions. Our scientific, moral, aesthetic, ethical, professional, political beliefs, and so on, occupy different conceptual and practical domains, incarnate different registers of our lives, are warranted by different criteria. For the pluralist, reason is not properly construed as the monomaniacal reduction of all belief to a single mode, but a matter of recognizing what manner of concern, care, and conviction belief is rightly occasioned for and then applying the right criteria of warrant appropriate to that mode.

Pluralism is not a relativism or nihilism, as threatened bearings of fundamentalist belief would have it, but a rigorous reasonableness equal to the complexity, dynamism, and multifaceted character of existence and of the personalities beset by its demands and possibilities. For one thing, pluralism allows us to grasp and reconcile the aspirational force of the contingent universalism of ethics without which we could not conceive let alone work toward progress or the Beloved Community of the we in which all are reconciled, while at once doing justice to the fierce demands and rewards in dignity and belonging deriving from our (inevitably plural, usually partial) inhabitation of moral communities that build the "we" from exclusions of various construals of the "they." Pluralism allows us to reconcile as well our pursuit of the private perfections of morality and sublimity (my appreciation of the aesthetical forms of which requires my admission of the validity for others, whatever my atheism, of its faithly forms) with the public works of scientific, political, legal, professional progress.

It is crucial to grasp that the refusal of pluralism is reductionism, and that reductionism is an irrationalism. It is a form of insensitivity, a form of unintelligence -- and usually a testimony to and inept compensation for insecurity. In Nietzsche's critique of the fetish (Marx's commodity fetishism and Freud's sexual fetishes are surface scratches in comparison) this reductionism is the ressentimental attitude of the life of fear over the lives of love, the philosophical imposture of deception and self-deception peddled as truth-telling. To impose the criteria of warrant proper to scientific belief on moral belief, say, or on aesthetic judgement, or on legal adjudication is to be irrational not rational. Also, crucially, it is to violate and not celebrate science.

To call the celebrated (or at any rate noisy) militant atheistic boy warriors of today "secular thinkers" is a profound error. To misconstrue the moralizing misapplication of faithly norms to political practices as the sins of religious faith as such is to misunderstand the problem at hand -- and usually in a way that multiplies errors: Hence, our militant atheists become bigots tarring innocent majorities with the crimes of violent minorities, and they lose the capacity to recognize differences that make a difference in cultures, societies, individuals all the while crowing about their superior discernment.

Those who commit crimes and administer tyrannies in the name of faith irrationally and catastrophically misapply the substantiation of aesthetic sublimities and parochial mores connected to some among indefinitely many forms of religiosity to domains of ethical aspiration and political progress to which they are utterly unsuited. Fascism and moralizing are already-available terms for these too familiar irrational misapplications. Meanwhile those who attribute these crimes and tyrannies to the aesthetic and the moral as such, as practiced in variously faithful forms, are inevitably indulging in reductionism. This reductionism in its everyday stupidity is usually a form of ethnocentric subcultural parochialism, but the militant atheists prefer their stupidity in the form of scientism, usually assuming the imaginary vantage of a superior scientificity the terms of which presumably adjudicate the unethical in moralizing and the tyrannical in progressivity because it subsumes ethical and political domains within its own scientific terms. In this, scientism first distorts science into a morality which it then, flabbergastingly, distorts into a moralism itself, thus mirroring the very fundamentalism it seeks to critique.

Secularism is a theoretical and practical responsiveness to the plurality of a world in which there is always more going on that matters in the present than any of us can know and in which the diversity of stakeholders to the shared present interminably reopens history to struggle. It is bad enough that today's militant atheists get so much of the substance and value of science, taste, and faith wrong in their disordering rage for order, but in calling their reductionist irrationality "secular thinking" we risk losing the sense and significance of the secular altogether, that accomplishment of reason without which we can never be equal to the demands and promises of reality and history in the plurality of their actual presence.