Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Tuesday, April 28, 2015
Calls For Nonviolence Can Be Calls for Violence
Not every reaction and testimony to the suffering of violence is a form of organized nonviolence, but neither is every call to "nonviolence" a form of organized nonviolence. As an advocate and, to this day, a teacher of nonviolent civil resistance and nonviolent revolutionary organizing, and as an activist trained in nonviolence in the 1990s as part of Queer Nation Atlanta with the King Center, I am disgusted by the superficial appropriation of nonviolent terminology to police false and facile respectability politics and to enable privileged beneficiaries of systematic violence to indulge in self-congratulatory castigations of the sufferers of that violence. Nonviolent critique and resistance expose normalized violence and create and transform crises to elicit collaboration from the sufferers and beneficiaries of violence alike to overcome that violence. Collusion in the violence of the status quo is the furthest thing from nonviolence: it is violence. White supremacy itself is a riot that has come to be mistaken, in the long centuries of its relentless and catastrophic life, for law and order.
Sunday, April 26, 2015
Smart Or Spy?
Whenever a gadget is peddled to you as "smart," substitute the word "spy" and ask yourself if you still want the dumb thing.
Tuesday, April 07, 2015
Richard Jones on My Critique of Transhumanist/Singularitarian Futurisms
Richard Jones is a Professor of Physics and Pro-Vice-Chancellor for Research and Innovation at the University of Sheffield in the UK. He writes quite a lot about public science policy, and these days he has been elaborating a forceful perspective on public investment in innovation, but he is surely best known to readers of my blog as the author of Soft Machines: Nanotechnology and Life, a book about both the reality of nanoscale technoscience and some of the speculative projections that attach to it.
Jones has written several contrarian pieces over the years about the hyperbolic expectations that freight the popular imagination of nanotechnology resulting from what I would call superlative futurological handwaving by the likes of Ray Kurzweil and Eric Drexler, and lately he has taken on another superlative futurological proposal (one to which I have devoted no small amount of attention myself), so-called "mind uploading."
Because Jones criticizes these imaginary techniques from a technical perspective attuned to the actual scientific consensus in the relevant fields, his writings are different from my own. But like Athena Andreadis -- who is also a working scientist ferociously critical of techno-immortalist hyperbole and evo-psycho reductionism -- Jones is also aware of the cultural and rhetorical dimensions that play out in transhumanist, singularitarian, and nano-cornucopiast discourses, and he takes seriously that much of the seeming force and plausibility of futurological belief does not ultimately derive from its technoscientific claims and hence cannot be effectively engaged simply by exposing the deficiencies in those claims.
I am happy to say that just as I have learned quite a bit by reading Jones' technical criticisms of futurological fancies, he has often seemed to appreciate my own rhetorical criticism (which is not to imply that he agrees with me on particulars), and in his most recent piece, Does Transhumanism Matter?, Jones has done me the extraordinary compliment of summarizing, scrupulously and sympathetically, some of the key themes of a piece of mine, Futurological Discourses and Posthuman Terrains, in a way that reveals the complementarity of our critical vantages. I strongly recommend Jones' piece to those who find my critique congenial but who may find my way of writing -- emerging out of a lifetime love of paradoxical literature exacerbated by my training in dense critical theory -- a chore: Richard Jones, again like Athena Andreadis, may be the more graceful, concise, and clear writer you are pining for.
I cannot say that I found much, if anything, to disagree with in Jones' reading. And so I will simply mention a few things I was especially pleased to see in Jones' piece. The first of these was that Jones takes seriously the political thrust of my critique of futurology, which I would not necessarily have expected and was enormously gratified to see revealed:
To Carrico, there is a continuity between the mainstream futurologists – “the quintessential intellectuals propping up the neoliberal order” – and the “superlative” futurology of the transhumanists, with its promises of material abundance through nanotechnology, perfect wisdom through artificial intelligence, and eternal life through radical life extension. The respect with which these transhumanist claims are treated by the super-rich elite of Silicon Valley provides the link. One can make a good living telling rich and powerful people what they want to hear, which is generally that it’s right that they’re rich and powerful, and that in the future they will become more so (and perhaps will live for ever into the bargain)... One could argue that transhumanism/singularitarianism constitutes the state religion of Californian techno-neoliberalism, and like all state religions its purpose is to justify the power of the incumbents.
I was also pleased that Jones emphasized my proposal that transhumanist futurisms are not so much opposed to their most conspicuous critics, the various "bioconservative naturalists," as complementary to and co-dependent on them:
Another prominent critique of transhumanism comes from the conservative, often religious, strand of thought sometimes labelled “bioconservatives”. Carrico strongly dissociates himself from this point of view, and indeed regards these two apparently contending points of view, not as polar opposites, but as “a longstanding clash of reactionary eugenic parochialisms”. Bioconservatives regard the “natural” as a moral category, and look back to an ideal past which never existed, just as the ideal future that the transhumanists look forward to will never exist either. Carrico sees a eugenic streak in both mindsets, as well as an intolerance of diversity and an unwillingness to allow people to choose what they actually want. It’s this diversity that Carrico wants to keep hold of, as we talk, not of The Future, but of the many possible futures that could emerge from the proper way democracy should balance the different desires and wishes of many different people.
If I have the least quibble with Jones' understanding of my critique, it comes when he distinguishes his own optimism from my skepticism:
One can certainly construct... lists of regrets for previous technologies didn’t live up to their promises, and one should certainly try and learn from them. I would want to sound more optimistic, and point out that what this list illustrates is not that we shouldn’t have set out to develop those technologies, but that we should have steered them down more congenial roads, and perhaps that we could have done so had we created better political and economic circumstances. Ultimately, I think I do believe that there has been progress.
Of course, I quite agree that wonderful scientific discoveries and clever useful inventions have been made that are worthy of celebration, even in the midst of a generation of tech bubbles and irrationally exuberant libertechbrotarian con-artistry. I am, after all, as big a NASA and renewable energy/agriculture/transportation and universal healthcare geek as anybody I have ever met.
What I specifically insist on is that progress is always [1] progress toward a specified end, and that [2] politically speaking democratic progress is progress in the direction of equity-in-diversity, and that [3] technoscientific vicissitudes, to be progressive in my sense, must equitably distribute the costs, risks, and benefits of change to the diversity of their stakeholders by their lights. Historically speaking, the chief beneficiaries of technoscientific developments have only rarely been the same as the ones who have borne the brunt of their costs and risks, and I refuse to describe such outcomes as progressive -- even if, generations later, I must count myself among the beneficiaries of the compulsory and unnecessary sacrifice of multitudes. What should be clear about such a perspective is that it is scarcely a comment on "technology" at all, but on the reactionary plutocratic politics that governs these injustices.
That I address my concerns and pin my hopes for progressive change to the hearing of an audience that shows every sign of reluctance, in the main, to be distracted in their pleasures by awareness of the real costs of those pleasures in the long term and to majorities of fellow earthlings seems to me, if anything, the surest evidence of my optimism. I actually don't think Jones fails to recognize this in my work or particularly disagrees with the conviction -- I just think he likes to strike a balance of cheer with his denunciations and has more patience than I do for coddling readily alienated potential allies, prone as they are to defensiveness about their complicity in any too sweeping a critique of the status quo, the amplification of which is so much of what passes for "The Future" of the futurologists. As I said, I lack the patient temperament to sustain such an approach for long, but I happily concede its force and consider myself lucky to have such a reader and ally as Jones, who does.
Thursday, March 19, 2015
(Trickle-)Down and (Middle-)Out in US Bourgeois Political Economy
For anyone who really cares about either justice or prosperity, trickle-down is a lie, middle-out is a fudge, and bottom-up is an imperative. While it is undeniably true that neoliberal policies anchored by trickle-down pieties have presided over two generations of wealth concentration, plutocratic consolidation, burgeoning precarization, and unsustainable exploitation, it is never right to lionize the two generations of mid-century post-war American economic expansion in framing an alternative to neoliberalism, given that epoch's structural exclusion and exploitation of Black Americans and immigrant labor and also given the undeniable unsustainability of its wasteful, polluting, demoralizing motor of mass consumption driven by the suffusion of public life with deceptive, hyperbolic, denialist marketing norms and forms.
"Bottom Up" political economy, to the contrary, must be grounded in the public investment for the provision of basic income, healthcare, education, and equal recourse to law and government which secure a legible scene of informed, nonduressed consent to the terms of everyday commerce as well as for the accountable administration of the commonwealth of public goods and common resources. When equity-in-diversity (of which sustainabillity is an indispensable part, since the costs and risks of unsustainable formations are always disproportionately borne by the marginalized and the poor) are secured via steeply progressive taxation and public investment -- via tax revenue, bond issues, countercyclical deficit spending, and so on -- a democratic bottom-up political economy of ramifying creative expressivity, civic participation, shared problem-solving, personal volunteerism, social services, organized labor, local entrepreneurship without fetishized mass consumption and plutocratic celebrities has a chance to emerge.
Only a bottom-up political economy is compatible with nonviolence (for those on the right who would howl about the "violence" of taxation, recall that all fortune is a collective accomplishment, that the progressive re-distribution of wealth by the state via taxation compensates for a regressive pre-distribution of wealth by the state via legal/infrastructural affordances, and that from those to whom much is given much is rightly expected), and only a system committed to nonviolence is compatible with democracy and universal law, even as interminable aspirational projects.
"Bottom Up" political economy, to the contrary, must be grounded in the public investment for the provision of basic income, healthcare, education, and equal recourse to law and government which secure a legible scene of informed, nonduressed consent to the terms of everyday commerce as well as for the accountable administration of the commonwealth of public goods and common resources. When equity-in-diversity (of which sustainabillity is an indispensable part, since the costs and risks of unsustainable formations are always disproportionately borne by the marginalized and the poor) are secured via steeply progressive taxation and public investment -- via tax revenue, bond issues, countercyclical deficit spending, and so on -- a democratic bottom-up political economy of ramifying creative expressivity, civic participation, shared problem-solving, personal volunteerism, social services, organized labor, local entrepreneurship without fetishized mass consumption and plutocratic celebrities has a chance to emerge.
Only a bottom-up political economy is compatible with nonviolence (for those on the right who would howl about the "violence" of taxation, recall that all fortune is a collective accomplishment, that the progressive re-distribution of wealth by the state via taxation compensates a regressive pre-distribution of wealth by the state via legal/infrastructural affordances, and that from those to whom much is given much is rightly expected), and that only a system committed to nonviolence is compatible with democracy and universal law, even as interminable aspirational projects.
Wednesday, March 11, 2015
The Future Is A Fraud
Sometimes it seems that professional futurologists engage in two essential activities: making predictions and scolding people for expecting their predictions to come true.
It has gotten so bad that at least one "professional futurist" -- Jamais Cascio -- is now declaring that the value of futurism is in what it gets "usefully wrong." At this point Cascio has poked so many holes exposing the fraud of conventional futurism (many of which I quite agree with) that he really risks exposing the fraud of his own ongoing demand for attention and paychecks as a professional futurist himself.
Of course it is true that we do learn from mistakes -- think how earnestly Popper took Wilde's quip that "Experience is the name we give our mistakes" -- but can you imagine any other legitimate empirical discipline demanding to be taken seriously by concerned citizens and policy makers while claiming its models are all wrong in "interesting" ways? Setting aside the fact that few futurists would admit that they are wrong about everything as Cascio does (or at any rate would be consistent about such an admission), why should we care more about the ways futurists, of all people, get things wrong than about the ways actual scientists and scholars, say, get things wrong -- especially when the latter at least aspire to and occasionally manage to get things right?
That is to say, Cascio does not seem to be making the useful pragmatic point that even true propositions are never more than the best, but still falsifiable, propositions on offer for warranted reasons. I would sympathize with such a point, but it would simply change our expectations about the force and security of models and methods that get things right by our lights. Such a recognition would hardly provide grounds to distinguish futurism as a legitimate discipline from other legitimate disciplines. Like Cascio, I do also make such a distinction, of course, but for me it is the distinction of con-artistry from policy-making (I leave to the side futurology's occasional inept forays into cultural criticism or -- Angels and ministers of grace defend us! -- philosophy).
To elaborate my point a bit more: No doubt all disciplines along the road to getting things as right as they can for now do also get things wrong in ways the study of which is interesting and useful, but it is the effort to get things right that earns their keep and provides the context in which usefully to assess the ways they err. Every legitimate discipline has a foresight dimension: one solicits agreements from potential collaborators, one insists on accounting for certain expectations, one makes provisional plans in light of one's understanding of the relevant forces and stakeholders at hand on the basis of the warranted descriptions provided by disciplines devoted to understanding them.
The problem is that futurism, futurology, future studies, or what have you, seeks legitimacy as a professional and scholarly discipline while every method, model, and analytic mode it deploys in the service of this goal originates in other disciplines and is deployed by social science and humanities scholars in an incomparably more rigorous and accountable way. Few futurists have degrees in these legitimate disciplines or could pass muster within their ranks. Futurists proceed instead by pretending their superficial appropriations are an interdisciplinarity when they amount in fact to an anti-disciplinarity.
As for the "methods" that are more characteristic of futurists in particular, few stand up to sustained scrutiny. Not to put too fine a point on it: "The Future" futurists pretend to study does not exist, the openness inhering in diversity of stakeholders to the present is -- if anything -- foreclosed by the parochial projections futurists denominate "The Future." (Futurology's characteristic extrapolations from the necessarily partially imperfectly understood present onto radically contingent developmental dynamisms are just an obvious instance.) The "trends" futurists pretend to discern do not exist -- if anything these are narrative constructions imposed retroactively on contingent vicissitudes to conjure an apparent momentum that can be opportunistically exploited by incumbents for profits. The futurological trend-spotter and the fashion trend-spotter are revealed to be perfectly continuous, then: deceptive hype profitably peddled as objective discovery. The "technology" futurists pretend to be their focus does not exist, the constellation of historical, existing, imagined techniques and artifacts only some of which are corralled together under the heading of "technology" do not in fact share any one characteristic or capacity or developmental trajectory, and their costs, risks, and benefits will also be different to the diversity of their stakeholders -- if anything the futurological pretense that the technological names a dimension of historical change different or separate from social, cultural, or political struggles is a focus that performs an insistent obfuscation of the reality at hand.
The conspicuous embrace of brainstorming and free association by some futurists takes up exercises from acting improvisation workshops which do indeed seem to me to be useful for inculcating habits of creative and flexible thinking for students -- but this is hardly a critical or testable method on its own, and its connection in futurism to corporate workshop cultures of compulsory managerial optimism and self-esteem promotion for bored plutocratic functionaries is hard to miss. So too the frankly ludicrous penchant among futurists for the endless promotion of neologisms might indeed seem to connect to occasionally useful rhetorical and philosophical proposals of novel and useful distinctions to relieve intractable conceptual impasses -- but this practice is hardly the end in itself it seems in futurological circles forever buzzing with buzzwords, and its connection in futurism to corporate advertizing practices of repackaging stale goods as breathless novelties is, again, hard to miss.
In this, the professional patina of futurologists tracks closely the antics of so much contemporary pop-tech journalism, which indulges in technoscientifically illiterate hyperbole about technology That! Will! Change! Everything! and advertorial promotion of the latest crappy consumer goods and schlocky hagiography for clueless bazillionaire celebrity tech CEOs eager to be told they are the Protagonists of History. The common denominator here is the production of facile and falsifying discourse about technoscientific change paid for by plutocrats who are either flattered or profit by it. That many so-called "tech writers" indulge in this reactionary pseudo-science while congratulating themselves as champions of democracy (as vacuous "openness," predatory "sharing," indifferent "participation," and so on) and science (as unspecified "innovation," anti-democratic "technocracy," and unaccountable "design," and so on) just adds insult to injury. More of the same... but as "The Future"!
As I have said many times, futurology is the quintessential discourse of neoliberalism: a set of essentially promotional promises and rationalizations for plutocracy offered up in the form of science-like predictions. These forms suffuse global corporate-military developmental discourse, across think-tanks and corporatized academic departments and official media outlets, but also the promises of scientistic and techno-fetishistic advertizing imagery, and also the norms and forms of competitive individualism and self-help and relentless "positivity." As I wrote in Futurological Discourses and Posthuman Terrains:
Futurology is caught up in and constitutive of the logic of techno-fixated market futures, while futurisms are technoscience fandoms and sub(cult)ures materializing imagined futures in the fervency of shared belief. Successful mainstream futurology amplifies irrational consumption through marketing hyperbole and makes profitable short term predictions for the benefit of investors, the only finally reliable source for which is insider information. Successful superlative futurism [exemplary versions of which include transhumanism, singularitarianism, techno-immortalism, digital-utopianism, nano-cornucopianism which I often lampoon here and elsewhere] amplifies irrational terror of finitude and mortality through the conjuration of a techno-transcendent vision of The Future peddled as long-term predictions the faithful in which provide unearned attention and money for the benefit of gurus and pseudo-experts, the source for which is science fiction mistaken for science practice and science policy. Something suspiciously akin to fraud would appear to be the common denominator of futurology in both its mainstream and superlative modes. [Emphasis added --d] As against the dreary dream-engineering ad-men of mainstream futurology the adherents of superlative futurism are indulging in outright, and often organized, faith-based initiatives. More than consumers eating up the usual pastry-puff progress, they are infantile wish-fulfillment fantasists who fancy that they will quite literally arrive at a personally techno-transcendentalizing destination denominated The Future.
Although I am stressing the difference between extreme techno-transcendental subcultures of futurism and the more prevalent corporate-militarism of everyday advertizing and elite think-tank discourse, I think it is also right to discern a deranging transcendentalizing denialist aspiration suffusing neoliberal marketing imagery and neoliberal rationalizations for forced global development. One finds in both the same disdain for the aging vulnerable error-prone body of the privileged target of consumer advertizing and the precarious target of violent exploitation alike, certainly.
Of course, yet another way to look at futurism is to regard it as a rather inept genre of science fiction literature, in which plots, themes, and characterizations are all sacrificed for endless scene-setting descriptions (yes, scenery, and hence the definitive futurological scenario, which, even when -- especially when? -- it is offered up as "multiple menu options," is inevitably reductive, mostly distortive, and usually amounts to special pleading on behalf of sponsors) in which hackneyed conceits from the Gernsbackian Golden Age play out (AI, genetic supermen, immortality medicine, cheap gizmo-abundance, reality as a simulation, I'm sorry to say) and are then peddled as if they were Very Serious philosophical thought-experiments or even scientific hypotheses. Speculative fiction has stunningly rich antecedents and ramifying branches, of course, but there is something to be said for the suggestion that futurology and "hard" science fiction as these are currently construed are co-constitutive imaginaries originating in the work of H.G. Wells. I daresay the rampant mistreatment of literary science fiction by the corporate-military mindset as an exploitable prophetic glimpse of the future market/battlefield rather than a critical/figurative engagement with the present (as all literature actually is, very much including sf) was as much a factor in the emergence of popular futurology, the saddest, most impoverished literary genre of all time, as a result of it.
"The Future" is tech bubbles all the way down.
That is not just meant to be a bit of snark, by the way: I regard unsustainable extractive-industrial-consumer petrochemical Modernity as the tech meta-bubble within which all subsequent tech bubbles froth their serial variations of "The Future."
Sunday, March 08, 2015
Very Serious Future! A Modest Recommendation
When a futurist predicts as imminent some incoherent or non-proximate outcome (superintelligent-AI, profitable geo-engineering techno-fixes, medical breakthroughs promising eternal youth, uploading info-souls into holodeck heaven, outer-space diaspora as an escape hatch, nano-abundance on the cheap, faster-than-light travel, and so on), the serious response is not to consider its consequences as if the outcome were plausible and proximate after all (what would the hidden costs be? who would benefit most?), but instead to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.
Friday, March 06, 2015
My Sondheim Top 5:
1 Sweeney Todd
2 Assassins
3 Follies
4 A Little Night Music
5 Pacific Overtures
Anybody else?
Monday, March 02, 2015
Choices
I really get it when Paul Krugman says he became an economist because he wanted to be a Foundation psychohistorian, since I'm pretty sure I became a rhetorician because I always wanted to be a Bene Gesserit witch.
Wednesday, February 25, 2015
Artificial Intelligence As Alien Intelligence
Also posted at the World Future Society.
Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.
Of course, both the making of artificial intelligence and the making of contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: while computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology; each attracts its consumer fandoms and public Cons; each has its True Believers and even its UFO cults and Robot cults at the extremities.
Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit), whatever the confidence with which generation after generation of these champions has insisted that the desired end is near: Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI's expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.
Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit -- construction of artificial intelligence -- by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, Outing A.I.: Beyond the Turing Test, Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that "Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers." Of course these figures made their headlines by making the arguments about super-intelligence I have already disdained, and mentioning them seems to indicate Bratton's sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton's argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.
In the piece, Bratton claims that "Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy." The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick's HAL to Jonze's Her and to document public deliberation about the significance of computation articulated through such imagery as the "rise of the machines" in the Terminator franchise or the need for Asimov's famous fictional "Three Laws." It is easy -- and may nonetheless be quite important -- to agree with Bratton's observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence that thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton's proposal is in fact a somewhat different one:
[A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.
Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It's not that Siri is some sickly premonition of the AI-daydream still endlessly deferred; rather, it represents the real rise of what robot cultist Hans Moravec once promised would be our "mind children," here and now, as elfin aliens with an intelligence unto themselves. It's not that calling a dumb car a "smart" car is simply a hilarious bit of obvious marketing hyperbole; rather, it represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be "amplifying its risks and retarding its benefits" by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it, for fear of making its recognition "too difficult."
The kernel of legitimacy in Bratton's inquiry is its recognition that "intelligence is notoriously difficult to define and human intelligence simply can't exhaust the possibilities." To deny these modest reminders is to indulge in what he calls "the pretentious folklore" of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but it has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of "intelligent" masters. "Some philosophers write about the possible ethical 'rights' of A.I. as sentient entities, but," Bratton is quick to insist, "that’s not my point here." Given his insistence that the "advent of robust inhuman A.I." will force a "reality-based" "disenchantment" to "abolish the false centrality and absolute specialness of human thought and species-being" -- a specialness he blames in his concluding paragraph for providing "theological and legislative comfort to chattel slavery" -- it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument, whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, and nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.
Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. "Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for." And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: "Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all."
But surely the inevitable question posed by Bratton's disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare it would represent a considerable threat, but however tempting it might be, in the fraught moment or reflective aftermath, poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by doing so. It is no more helpful now, in an epoch of Greenhouse storms, than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting, however much he insists on the alienness of "their" intelligence.
Bratton warns us about the "infrastructural A.I." of high-speed financial trading algorithms, Google and Amazon search algorithms, "smart" vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In "The Work of Art in the Age of Mechanical Reproducibility," Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much as or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that Code Is Law.
It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.
I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digirati the "new economy." I wrote: "It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies."
I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote: "[W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is 'coming from code' as opposed to coming from actual persons? Aren't coders actual persons, for example? ... [O]f course I know what [is] mean[t by the insistence...] that none of this was 'a deliberate assault.' But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user's] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations... What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? ... We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don't, when people treat a word cloud as an analysis of a speech or an essay. We don't joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still 'opt out' from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one."
I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning "them" in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote: "Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for 'smarter' software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley's warnings seriously about our 'complacency' in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous 'autonomous weapons' proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every 'autonomous' weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of 'killer robots' is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools... There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war."
"Arguably," argues Bratton, "the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity... This is the sentiment -- this philosophy of technology exactly -- that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I." The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another "algorithm" here.
I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift besetting us to sustain us, indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to "technology run amok." The problem of so saying is not that to do so disrespects "technology" -- as presumably in his view no longer treating machines as properly "subservient to the needs and wishes of humanity" would more wholesomely respect "technology," whatever that is supposed to mean -- since of course technology does not exist in this general or abstract way to be respected or disrespected.
The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.
I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts -- especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces -- with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of "alien intelligences," even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.
Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God's eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:
In his Introductory Lectures on Psychoanalysis, Sigmund Freud notoriously proposed that
In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus... The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin... though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.
However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud's works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton's wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton's modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud's Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton's Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
Thursday, February 19, 2015
The Libertechbrotarianization of Basic Income Discourse
It has always been true that basic income advocates across the political right -- Milton Friedman, famously -- have coupled the payment of basic incomes with the "simplification" and "streamlining" and "targeted elimination" of welfare programs. It can't be that surprising to find that those who pine for the dismantlement of social support to unleash the austere, obliterative liberties of spontaneous orders (in which plutocratic orders are obeyed spontaneously) would propose basic income in the service of that same aim, as they do everything else.
But I must say I am intrigued to find how often recent pieces on basic income are framing it first and most of all as the solution to the problem of unemployment "caused" by automation. Such discussions seem to regard unemployment as a logical effect of technological development rather than resulting from plutocratic attacks on and dismantlement of organized labor -- which would ensure a more equitable distribution of productivity gains from automation. And so we have basic income proposed as a pretext for welfare dismantlement and as a panacea for unemployment as unions are busted? I am starting to think that there is a neoliberalization of basic income advocacy taking place that qualifies my initial thrill discovering this pet topic's recent and unexpected new prominence.
From Thomas Paine to Martin Luther King, Jr., to Erik Olin Wright, basic income has been proposed as a direct solution to the scourge of poverty. There is no question that a basic income guarantee together with single payer healthcare, nutritional assistance, free public education, public housing programs, equal recourse to law and franchise and office-holding, freedom of expression and public assembly, and accountable administration of commons for the public good would provide the abiding substance and occasion for radical democratization.
But it would seem that basic income can be proposed either in the service of emancipatory equity-in-diversity or as a plutocratic ploy. If so, it is obviously important to pay attention to the assumptions and aspirations driving its various advocates. To hear that someone supports basic income is not yet enough to know they support what you mean by basic income or to accept them as an ally.
It is too easy for glib celebrations of basic income in the abstract to function as distractions from urgent, ongoing, and ever-more-successful struggles organizing workers in fast food, health care, education, and service sectors and in raising the minimum wage to approach a living wage. Just as some mouthpieces for Republican politics would evade association with the ugly racism of the contemporary GOP by declaring themselves civil libertarians (a masquerade enabled by those who know better and yet do not call states-rights "minarchists" out on the history of racist dog whistling in such positions), I wonder if basic income advocacy on the right will likewise work to conceal a host of plutocratic commitments.
Ask right-wing advocates of basic income whether a person who has already spent their basic income but who suddenly confronts the prohibitive costs of a medical emergency or the need for legal representation has a right to that healthcare or that lawyer even if they cannot afford the expense. If the answer is yes, then we're back to the mainstream legible social democratic discourse in which basic income supplements rather than replaces general welfare; and if the answer is no, we're inevitably back to the war of all against all in which the unworthy poor pay for their misfortunes with their lives or their freedom. Free To Lose, er, Choose, amirite?
Right-wing forms of "basic" income advocacy reduce all too readily to visions of bare life without the rights, standards, and supports to ensure an actually legible scene of consent to the terms of everyday relations for the majority of the people. Game the minimum "sufficient" basic income into a state of near-precarity without recourse to any other pillars of equity-in-diversity and you've peddled feudalism as a universally emancipatory scheme -- in the drearily predictable right-wing manner.
It is necessary to emphasize how obvious are the fingerprints of the right in such basic income state-dismantlement assumptions, aspirations, rhetorics, schemes. Because it is also becoming more and more conspicuous how many recent converts to basic income advocacy seem to want to advocate it as a technocratic technofix "beyond the politics of left and right." It is important to grasp first of all that no technique is politically neutral, that every artifact mediates social relations, that the funding, testing, publication, regulation, application of technique is ineradicably political and that the costs, risks, and benefits of technoscientific change are as diverse as the diversity of their stakeholders. This means that it is always only in the political distribution of these costs, risks, and benefits that we determine the progressive or emancipatory force of technoscientific change, not by reading technical specifications or, worse, advertorial corporate-military press releases and pop-tech gossip column journalism. The denial or pretended overcoming of these political realities does not eliminate them but merely renders them opaque to scrutiny and criticism. This is a gesture that inevitably conduces to the benefit of elite incumbents already empowered by and in the status quo. That is to say, the stance of a-politicism or anti-politicism is profoundly political in fact, and the politics it supports are right-wing politics most of all.
It is no wonder, then, that right-wing politics from mid-century fascism to late-century market fundamentalism often actively promoted itself with slogans promising to be "beyond left and right" or "a new beginning overcoming left/right categories" or "a third way." Every single person who declares themselves to be "beyond left and right" is either a secret shill for the right or a perfect dupe for the right. It is no surprise that the tech-talkers of predatory venture capitalism and tech-hype marketeers of stale crap as worldshattering novelties accept so many of the assumptions and aspirations of market fundamentalist corporate-militarism including the slogan of offering "design solutions" and "technofixes" beyond politics -- and that these reactionaries throng the chat rooms and conferences of recent basic income advocacy.
This post originally referred to a "neoliberalization" of basic income discourse, but that term is at best verging on vacuity from overuse and at worst coming to be associated with fauxvolutionary preening about the choice of purity cabaret over pragmatic progressivism, which is worse than vacuous but manages to be actually reactionary in consequence.
See also p2p Is EITHER Pay-to-Peer or Peers-to-Precarity.
Sunday, February 15, 2015
Returning to the Arendtian "Turn" on Judgment
We do, therefore I am; I think, therefore we've done.
In the essay Ronald Beiner appended to his edited volume of Arendt's Lectures on Kant's Political Philosophy, he proposes that "[h]er writings on… judgment fall into two… phases: early and late… Arendt offers two distinct conceptions of judgment… the first relating to the world of praxis, the second to that of contemplation." When he connects the first conception to the vita activa, the subject and even an alternate title of her Human Condition, and then locates the second in a concern with "the life of the mind" (the title of her final, if unfinished, published work), one cannot help but wonder whether the distinction may amount to little more than the fact that Arendt did not repeat herself in writing her two most philosophically substantial works, separated by two decades of original, provocative, critical writings. Although Beiner is careful to resist the implication that these two conceptions represent an absolute break, I think it is actually important to emphasize the contrary point, that a concern with judgment spans Arendt's writing and, further, that it would be wrong to assume the differences in her formulations indicate a turn away from the earlier for the latter one, rather than revealing two dimensions of a phenomenon that she emphasized in different accounts but which may be indispensably connected in Arendt's full understanding of the task of "thinking what we are doing."
I do not accept the implication of Beiner's narrative, then, when he writes: "The more she reflected on the faculty of judgment, the more inclined she was to regard it as the prerogative of the solitary (though public-spirited) contemplator as opposed to the actor (whose activity is necessarily non-solitary). One acts with others; one judges by oneself (even though one does so by making present in one's imagination those who are absent)." I do not deny that Arendt's formulations changed with time, but these explorations need not indicate that she jettisoned preceding formulations rather than supplementing them, and I suspect the parenthetic qualifications Beiner appends to his thesis already reflect awareness of the trouble in trying to force the turn he is considering too intently. For me the force of both of the different accounts of judgment in The Human Condition and The Life of the Mind finally depends on their relation to one another.
And so, for example, when Beiner rightly points out that "[i]n judging, as understood by Arendt, one weighs the possible judgments of an imagined Other, not the actual judgments of real interlocutors," I do not accept at all his implication that this is more relevant to the vita contemplativa than to the vita activa. There is in my view a crucial continuity in the accounts of power offered up by Hannah Arendt and Michel Foucault -- and I would add, Frantz Fanon -- not only in their separate insistence on power as productive rather than repressive (probably most conspicuous in Arendt's "On Violence," which, given that piece's discussion of Fanon, introduces a host of provocative questions into the account I make of a shared Arendtian-Fanonian-Foucauldian bio-political critical theory, some of which I begin to respond to here), but also in the proposal of an essentially rhetorical characterization of the politics which is power's domain.
Power in Foucault arises when one assumes a calculative disposition toward the other from whom one would solicit agreement or collaboration in one's ends, all the while understanding the risk of reversibility arising in any situation with another who knows and wants differently from oneself. It is ultimately from this situation that arise the famous Foucauldian slogans "no power without resistance," "wherever power emerges, resistance arises" and so on. Although Arendt would not likely be thrilled with Foucault's choice of the word "calculation" to capture it, I would say that it is also ultimately from this situation that arises the famous Arendtian proposition that every act re-enacts natality, the beginning in birth into the world of a new generation with who knows what problems and promises, the release in action into the world of forces that will inevitably have unintended consequences and unexpected impacts.
What is crucial to the point I am making here, however, is to insist not only that every act offers up a judgment to the hearing of the world that will have its way with it and render its own judgments unto it, but also that each act begins in a translation of subjective experience and aspiration into terms that one imagines will be most legible and conducive to the audience in the occasion into whose hearing it is offered, an act of imagination that is also a matter of judgment. One is sometimes forced (or able) to adapt one's imagined anticipation of the other on the fly in face-to-face political encounters, while the give-and-take in the publication of considered judgments is a more slow-moving affair, even in the age of public intellectuals on social media; the experience of these differences is not to be denied, but neither does it seem to align with a philosophical distinction of worldly deliberation from the free play of reflection that need not ever find its way to voice to enrich the life of the thinker devoted to its pleasures and provocations. Every testament abides only in the collaborative writing of its readership, every deed endures only in being appropriated by the wider world: I think, therefore we've done. One might wish Arendt's mastery of the colloquialism "when the chips are down" were matched by that of "thinking on your feet."
When Beiner raises the possibility later in his essay that the actor exhibits judgment as much as the thinker, this has become a problem mostly because he is committed to the thesis that the account of the thinker's judgment in The Life of the Mind has replaced the more fledgling account of the actor's judgment in The Human Condition. Like Beiner's puzzled reaction to Arendt's neglect of Aristotle's treatment of political judgment as phronesis/ prudentia in the later works on judgment, this seems to me little more than a matter of shifted emphasis. Far from neglecting Aristotle in her full accounting of judgment, it seems to me she split the difference with a more Aristotelian account in The Human Condition and a more Kantian one to come in The Life of the Mind.
Thus, while it is true that Arendt conjoins judgment to understanding in her later work, it is no less true that the same judgment is conjoined to the action which preoccupies her earlier work. Recall that in The Human Condition Arendt proposed that the self is unavailable to reflection but is disclosed in and through public appearance, an absence made present in the legible responsiveness to the self's proffered acts/ judgments toward others in the politics of the vita activa. We depend for our existence not only on the sociality of practical collaboration but also on that of inter-personal recognition: We do, therefore I am. This seems to me a complement to the making-present of absent others on which understandings/ judgments depend in the solitude of the vita contemplativa, which Beiner mentions to such effect. But to me, again, this gives us reasons instead to think the "early" and "late" characterizations of judgment actually make a coherent case together, the force of which is completely undermined by treating them as the supplanting of one by another.
Judgment substantiates the effort to understand the world and materializes the performance of the act in the world. This is not to deny the differences in the indispensable work of judgment in the registers of thought and action, but to insist that their relation matters more than their distinction. While the distinction suggests itself to analysis, the relation impresses itself upon us as it is lived. There is no doubt that there was a difference between the Arendtian judgment of totalitarian criminality that impelled her early on into responsible activism and the isolating firestorm of judgment occasioned by her effort to understand an exemplary totalitarian criminal, Eichmann, later on, but it was the lived continuity of judgment's indispensability to the reconciliations of plural stakeholders in the world she shared as well as her reconciliation to the world so made that matters in Arendt's full accounting of the political and her place in it.
In section thirty-three of The Human Condition, "Irreversibility and the Power to Forgive," Arendt provides a rather stunning and never-repeated map of the conceptual terrain of the political, in which she proposes that the products of worldly work redeem the impasse of meaningless metabolic cyclicality in labor, and then that the interminable, unpredictable release of actions into the made world redeems the impasse of meaningless instrumental/ causal cyclicality in work, and then that miraculous acts of forgiveness may redeem the impasse of meaningless revengeful cyclicality arising from the risks and costs of action's unpredictability. The apparent shift in emphasis accorded judgment in Arendt's later thinking included an elaboration of the idea that in the extreme impasse of totalitarian tyranny the reflection of the solitary thinker of the vita contemplativa might come to assume in its non-conformism the character of an action of resistance in the vita activa, that a present public might be re-opened to futurity in the making-present of retrospective reflection itself.
Beiner does remark on Arendt's later thesis that the thinker may redeem the actor undone by the deeds of tyranny, but his writing in these passages is strangely ambivalent. He suggests that Arendt never quite "faces up" to the radical contingency implied in her account of redemptions that Hans Jonas exposed in a public exchange Beiner recounts, and much the same point recurs when he brings up Habermas' criticism a few pages later that Arendt defends opinion to the cost of reason. For me, all this is simply confirmation that the rhetorical account of judgment in the early Arendt is not jettisoned for the later formulations of contemplative judgment in the first place. Although Arendt's writing is full of portentous pronouncements about the rupture of tradition, the dying of the light of the past to illuminate the present, the breaking of Ariadne's thread, and so on, it honestly seems to me that Arendt assumes an almost Rortyan insouciance at the End of Philosophy, altogether untroubled (or at any rate refusing all ressentiment) by the resumption of rhetoric in the eclipse of philosophical pretensions, happy to take up instead a Nietzschean Gay Science as its successor. It is well known that Arendt insistently refused the label "philosopher" and preferred to be known as a political theorist or political thinker instead, after all.
My reference to Nietzsche here is far from idle. Although I am not sure that Beiner (or for that matter Arendt herself) read Nietzsche quite the same way I do, I agree that the work of Nietzsche resonates in Arendt's eventual accounting of the political quite as much as or even more than Aristotle or Kant do. What Beiner seems to me to treat as the almost incidental politically redemptive work of thinking-judgment in emergencies, I would describe instead as synecdochic of the work of judgment in the abiding emergency of history.
In his early, conspicuously sophistical On Truth and the Lie in an Extra-Moral Sense, Nietzsche distinguishes the rational one of prudential affairs and the intuitive one of speculative artistry, and declares that the rational one who disdains the intuitive risks inelegance to the point of stupidity while the intuitive one who disdains the rational risks insanity. Both are deceived. He recommends a re-enchantment of the world, a polytheistic investment of the literal furniture of the world with their ineradicable susceptibility to re-figuration as the terrain on which the co-construction and re-negotiations of the rational and the intuitive are facilitated. And of course such a polytheism demands the death of the monotheistic judeochrislamic God of the Book -- the God that Jonas and the Book that Habermas would trouble Arendt with -- indeed the putrefying corpse of such a dead god could be the most fertile field in which polytheistic poiesis might flourish.
The general contours of this Nietzschean proposal recur throughout his work right up to Ecce Homo, culminating in the formulation of the eternal return as the abiding sublimity of the slippage of world and word demanding a truth-telling as tragic affirmation and in stylish self-creation. The ineradicable ontology of refiguration imposes the inescapable responsibility of resignification for human beings. Arendt's latter formulations on judgment complete (or, even in their incomplete form, enormously enrich) the account of the vita activa elaborated in The Human Condition and re-affirm even in a work entitled The Life of the Mind a life-long emphasis on the active life of worldly affairs and judgments over the philosophical contemptus mundi. If we recall that original title of the early Vita Activa and recall the proper translation of the later Vita Contemplativa, then we might think the true work for the title The Human Condition subsumes both these early and later volumes. The redemptions of labor in work, work in action, action in forgiveness (itself an action), like the redemptions of thought in agency, will in judgment, judgment in historical struggle all materialize dimensions of freedom as pleasures necessary to the life proper to humanity. These pleasures vouchsafe Arendt's own Nietzschean project of post-philosophical truth-telling as affirmations of meaning in the tragic face of finitude. For Arendt, all judgment is beset by emergency.
But Arendt's amor mundi is not quite Nietzsche's amor fati: hers is not his perverse declaration of love for the condition of contingency itself but for the world in which we would make a home in the scrum of history. Politics is the domain of both freedom and responsibility, and the redemptive pleasures of freedom delineated in Arendt's early accounts of doing and later accounts of thinking are incomplete until we recall the injunction with which the Prologue to The Human Condition ends (which remains apt even when we treat this as the title encompassing the projects of both the early and later volumes): that we also "think what we are doing." The pleasurably emancipatory responsivenesses to our peers and to the world we are making and have made in doing and thinking open onto the responsibilities to our peers and to the world we are making and have made in thinking what we are doing.
To understand the uniquely isolating, de-politicizing, world-destroying character of totalitarian tyranny was the point of departure and abiding touchstone for the thinking of both Hannah Arendt and Michel Foucault, as the organized criminality of colonial occupation and administration understood on much the same terms was the point of departure and abiding touchstone for the thinking of Frantz Fanon. The original published title of The Origins of Totalitarianism was The Burden of Our Time, and that "our" included her in a way that it no longer can for us. Our burdens are different ones, the emergence of the planet from the ruins of the postwar globe is neither the "earth" as Arendt understood it, exactly, nor the world from which she distinguished it, but a different world. What Arendt understood as "The Crisis in Culture" seems to us instead the occasion for a necessary critique of patriarchal and plutocratic violence as we assume the new worldly responsiveness and responsibilities of polyculture. Indeed, the putrefying corpse of such a dead culture could be the most fertile field in which the sustainable democratizing worldmaking of planetary polyculture might flourish.
Tuesday, February 03, 2015
Of Natal Politics
So often we are called to mourn the loss of a life to which we have been indifferent, the loss of differing lives to war or exploitation in the world, to bigotry, neglect, or violence in our midst, called to the belated recognition that their loss is our loss, too. And while I am moved by these gestures, and feel their desperate urgency and hope, it seems crucial to grasp that our indifference to the differing loss is born in the prior indifference to the differing appearance.
That (some) we do not celebrate the appearance in the world of a child in Gaza or Lagos or Ferguson sets the stage for (some) our indifference to their leaving it: Not to grasp how our world is enriched by the promise of the arrival of a child into our shared present who can love and think and create and collaborate in the making of our next-present all but ensures that we cannot grasp how her expulsion from our present damages and diminishes our world.
Before we can be lost or missed from the world we must first make an appearance in the world. Like a birth bringing life into the world, there is nothing more fragile than an act offered up to the reception of the world. Will the act be apprehended, will it come to fruition, will it make a difference, will it be exposed as an error or derided as a folly? To act or to re-act, to offer up a judgment (right! beautiful! true!) in which another's act is invigorated in its life, is to release novel forces into the world in which the world is re-made or which the world will re-make: And so to act is always to re-enact that first appearance, that primordial novelty, that birth in which we were first released into the world to who knows what ends.
Presence is passage and past and future. History changes with the changes we make in the present, futurity inheres in the openness of the diversity of sharers in the present: Who are the "we" with a share in the making of our now, and who and what are the "they" we consign to the past, to whose present differences we are indifferent, who are denied their measure of futurity?
To mourn is never simply to mourn the loss of another but to mourn the loss of self occasioned by the loss of another on whom the self has depended. To truly mourn is always to mourn the end of the world: it is to mourn the loss of the world that was shared by and made with the one who has gone from it. To mourn is to die as the self that was shared and to be born as a new self that will be differently shared, in a different world. To take up the world-making of politics is to court the loss of selves in which selves are made free, it is to embrace the world-making world-ending of losses mourned and lives re-made.
The freedom substantiated by politics -- so different from, so much more promising and more threatening than, the brute freedom we settle for from instrumental amplifications of our given capacities -- demands we risk openness to the difference of others. That we are mortal means this is a risk of our lives; it is the risk, sure, of violence or humiliation, but more crucially it is the risk that in the open encounter with difference we will die in the lives we have lived, that we will be interrogated out of our assumptions, persuaded to new beliefs, convinced to alien conduct, reconciled to loving otherwise. To live free is to risk the death in life in which we are changed by difference at the very least into being otherwise, becoming strangers from the selves we are or even want to be now.
It is because we are afraid to risk that death in life that re-makes free selves in the mourning occasioned by futurity's openness to difference that we collaborate instead in the deadly indifference to the appearance of differing lives that makes their loss unmournable and leads us looking for surrogate freedom in the futurology of our tools.
Sunday, February 01, 2015
We Are the Killer Robots
The harms and crimes of automation are done by humans to humans. And framing these harms and crimes in terms of killer robots or out of control automation inevitably distorts the issues and the stakes at hand.
While clear deliberation about and regulation of military artifice does need to account for specificities, I simply do not agree that the quandaries introduced by contemporary military drones or by what passes for autonomous weapons systems today are sufficiently different from those posed by balloons, carrier pigeons, time-bombs, land-mines, guided munitions, and remotely operated weapons systems of years past to justify dramatic, deranging talk of unprecedented transformations and revolutionary robocalypse. Let me be clear: It is because I take the threat of programmed drones and weapons so seriously that I worry about the inflated science-fictional narratives increasingly framing their stakes. The futurological repudiation of available analogies, all-too-familiar issues, and perennial quandaries of war functions very readily as a pretext for distractions and deceptions to the cost of hopes for accountability and sanity in this time of world war without end.
There is, after all, nothing more commonplace nowadays than the application of the terms "smart" and "intelligent" to palpably unintelligent devices and inept software. Hyperbole is the argot of digital culture, and the phony investment of dumb tech commodities with agency and intelligence may encourage users to forgive the dysfunction of their computational "companions" while at once this false investment answers to what appears to be a widely shared ideology or even faith among many of the designers and peddlers of these devices that they are taking humanity step by step, handheld by handheld, landfill by landfill, along the road to techno-transcendental salvation via the serially failed, fatally-flawed program of AI.
Recently, many of the super-rich salesmen (Bill Gates, Elon Musk, Peter Thiel) and so-called "Thought Leaders" (Ray Kurzweil, Stephen Hawking, Nick Bostrom) of our celebrated VC tech culture have been raising alarms about the urgent existential threat of satanic super-intelligent AI. This talk represents the extreme form of the now long-standing and utterly prevalent robo-fixated public imagery and discourse of popular science fiction, commercial advertising, and corporate-military think tanks full of pronouncements about the wonders of Big Data and smart cards, and the horrors of robot armies and smart drones.
Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for "smarter" software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley's warnings seriously about our "complacency" in the face of truly autonomous weapons and artificial super-intelligence that do not exist.
It is crucial that necessary regulation and even banning of dangerous "autonomous weapons" proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every "autonomous" weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of "killer robots" is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools.
Let us take up the point for a different but related issue. Automation has not created and sustained our ongoing unemployment crisis or lowered the earning power of generations of workers -- the decline of collective bargaining to demand an equitable share in profits from productivity gains as well as social support amidst dislocations in labor markets has made automation deployed by plutocrats the occasion for a general crisis of unemployment and wealth concentration. Just so, killer robots don't kill people, people kill people with killer robots -- and they regularly do so in our names, and often as war crimes.
I am always baffled by gun zealots who like to crow that guns don't kill people. I have never understood why the recognition that people kill people with guns provides less reason to ban especially dangerous guns, or restrict their purchase and use, or demand rigorous licensing standards, or require safety measures to protect citizens from accidents and criminal misuse of guns, or impose liabilities on their manufacturers and retailers. Thermonuclear weapons don't kill people, either, people kill people with thermonuclear weapons. That recognition hardly recommends their private sale or use.
There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed.
The only killer robots that actually exist are human beings waging and profiting from war.
Wednesday, January 28, 2015
The Yearning Annex: Google Commits Millions for Robot Cult Indoctrination in Plutocratic Venture-Capitalist Dystopia
Also posted at the World Future Society.
The arrival of superintelligent artificial intelligence is denominated "the Singularity" by these futurologists, a term drawn from the science fiction of Vernor Vinge, as are the general contours of this techno-transcendental narrative, taken up most famously by one-time inventor and now futurological "Thought Leader" Ray Kurzweil and a coterie of so-called tech multimillionaires like Peter Thiel, Elon Musk, Jaan Tallinn, all looking to rationalize their good fortune in the irrational exuberance of the tech boom and secure their self-declared destinies as protagonists of post-human history by proselytizing and investing in transhumanist/singularitarian eugenic/digitopian ideology across the neoliberal institutional landscape at MIT, Stanford, Oxford, Google, and so on.
That most of these figures are skim-and-scam artists with little sense and too much money on their hands goes without saying, as does the obvious legibility of their "technoscientific" triumphalism as a conventional marketing strategy for commercial crap (get rich quick! anti-aging! sexy-sexy!) but amplified into a scarcely stealthed fulminating faith re-enacting the theological terms of an omni-predicated godhead delivering True Believers eternal life in absolute bliss with perfect knowledge. Not to put too fine a point on it, the serially-failed program of AI doesn't become more plausible by slapping "super" in front of the AI, especially when the same sociopathic body-loathing digi-spiritualizing assumptions remain in force among its adherents; exponential processing power checked by comparable ballooning cruft is on a road to nowhere like transcendence; and since a picture of you isn't you and cyberspace is buggy and noisy and brittle, hoping to live there forever as an information spirit is pretty damned stupid even if you call yourself a soopergenius.
Since the super-intelligent and nanotechnological magicks on which techno-transcendentalists pin their real hopes are not remotely in evidence, these futurologists tend to hype the media and computational devices of the day, celebrating algorithmic mediation and Big Data framing and kludgy gaming virtualities like Oculus Rift and surveillance media like the failed Google Glass and venture capitalist "disruption" like airbnb and uber. That this is the world of hyping toxic wage-slave manufactured landfill-destined consumer crap and reactionary plutocratic wealth concentration via the looting and deregulation of public and common goods coupled with ever-amplifying targeted marketing harassment and corporate-military surveillance should give the reader some pause when contemplating the significance of declarations like "GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies."
The press release suavely reassures us that "Google is, of course, no stranger to moon shot thinking and the value of world-shaking projects." I think it is enormously important to pause and think a bit about what that "of course" is drawing on and standing for. It should be noted what "moon shot thinking" amounts to in a world that hasn't witnessed a moonshot in generations. There are questions to ask, after all, about Google's "world-shaking projects" advertorially curating all available knowledge in the service of parochial profit-taking, all the while handwaving about vaporware like immortality meds and driverless car-culture and geo-engineering greenwash. There are questions to ask about the techno-utopian future brought about by a "grad school" at a "university" for which "the final exam" is "a chance to develop and then pitch a world-changing business plan to a packed house." I will leave delineating the dreary dystopian details to the reader.
Their prose was all purple, there were VCs running everywhere, tryin' to profit from destruction, you know we didn't even care.
via Singularity Hub (h/t David Golumbia):
Google, a long-time supporter of Singularity University (SU), has agreed to a two-year, $3 million contribution to SU's flagship Graduate Studies Program (GSP). Google will become the program's title sponsor and ensure all successful direct applicants get the chance to attend free of charge. Held every summer, the GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies. Participants spend a fast-paced ten weeks learning all they need to know for the final exam -- a chance to develop and then pitch a world-changing business plan to a packed house.
"Exponential technologies" is a shorthand for the false and facile narrative superlative futurologists spun from Moore's Law -- the observation in 1965 (the year I was born) that the number of transistors on an integrated circuit had been roughly doubling every two years, and the paraphrase of that observation into a law-like generalization that chip performance more or less doubles every two years -- into the faith-based proclamation that this processing power will inevitably eventuate in artificial intelligence, and soon thereafter a history-shattering super-intelligence that will control self-replicating programmable nanoscale robots that will provide a magical superabundance on the cheap and deliver near immortality through prosthetic medical enhancement and the digital uploading of "informational soul-selves" into imperishable online paradises.
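To see how bare the arithmetic at the bottom of all this really is, here is a minimal sketch in Python of the naive extrapolation the "exponential technologies" framing projects forward -- an illustration only, with a baseline figure and function name of my own choosing, not anything drawn from the press release. A quantity doubling every two years grows prodigiously, but the doubling itself licenses no conclusions whatsoever about intelligence, superabundance, or immortality.

# A minimal sketch of the doubling arithmetic behind "exponential technologies"
# talk, assuming a clean two-year doubling with no physical, economic, or
# architectural limits. The baseline figure is illustrative, not historical data.
def projected_transistor_count(years_elapsed, baseline=2_300):
    """Naively project a count forward, doubling every two years."""
    return baseline * 2 ** (years_elapsed / 2)

for years in (0, 10, 20, 30, 40):
    print(f"after {years:2d} years: roughly {projected_transistor_count(years):,.0f}")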
The arrival of superintelligent artificial intelligence is denominated "the Singularity" by these futurologists, a term drawn from the science fiction of Vernor Vinge, as are the general contours of this techno-transcendental narrative, taken up most famously by one-time inventor and now futurological "Thought Leader" Ray Kurzweil and a coterie of so-called tech multimillionaires like Peter Thiel, Elon Musk, and Jaan Tallinn, all looking to rationalize their good fortune in the irrational exuberance of the tech boom and secure their self-declared destinies as protagonists of post-human history by proselytizing and investing in transhumanist/singularitarian eugenic/digitopian ideology across the neoliberal institutional landscape at MIT, Stanford, Oxford, Google, and so on.
That most of these figures are skim-and-scam artists with little sense and too much money on their hands goes without saying, as does the obvious legibility of their "technoscientific" triumphalism as a conventional marketing strategy for commercial crap (get rich quick! anti-aging! sexy-sexy!) but amplified into a scarcely stealthed fulminating faith re-enacting the theological terms of an omni-predicated godhead delivering True Believers eternal life in absolute bliss with perfect knowledge. Not to put too fine a point on it, the serially-failed program of AI doesn't become more plausible by slapping "super" in front of the AI, especially when the same sociopathic body-loathing digi-spiritualizing assumptions remain in force among its adherents; exponential processing power checked by comparably ballooning cruft is on a road to nowhere like transcendence; and since a picture of you isn't you and cyberspace is buggy and noisy and brittle, hoping to live there forever as an information spirit is pretty damned stupid, even if you call yourself a soopergenius.
Since the super-intelligent and nanotechnological magicks on which techno-transcendentalists pin their real hopes are not remotely in evidence, these futurologists tend to hype the media and computational devices of the day, celebrating algorithmic mediation and Big Data framing and kludgy gaming virtualities like Oculus Rift and surveillance media like the failed Google Glass and venture capitalist "disruption" like airbnb and uber. That this is the world of hyping toxic wage-slave manufactured landfill-destined consumer crap and reactionary plutocratic wealth concentration via the looting and deregulation of public and common goods coupled with ever-amplifying targeted marketing harassment and corporate-military surveillance should give the reader some pause when contemplating the significance of declarations like "GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies."
The press release suavely reassures us that "Google is, of course, no stranger to moon shot thinking and the value of world-shaking projects." I think it is enormously important to pause and think a bit about what that "of course" is drawing on and standing for. It is worth asking what "moon shot thinking" amounts to in a world that hasn't witnessed a moonshot in generations. There are questions to ask, after all, about Google's "world-shaking projects" advertorially curating all available knowledge in the service of parochial profit-taking, all the while handwaving about vaporware like immortality meds and driverless car-culture and geo-engineering greenwash. There are questions to ask about the techno-utopian future brought about by a "grad school" at a "university" for which "the final exam" is "a chance to develop and then pitch a world-changing business plan to a packed house." I will leave delineating the dreary dystopian details to the reader.
Thursday, January 22, 2015
Syllabus for my Digital Democracy, Digital Anti-Democracy Course (Starting Tomorrow)
Digital Democracy, Digital Anti-Democracy (CS-301G-01)
Spring 2015 01/23/2015-05/08/2015 Lecture Friday 09:00AM - 11:45AM, Main Campus Building, Room MCR
Instructor: Dale Carrico; Contact: dcarrico@sfai.edu, ndaleca@gmail.com
Blog: http://digitaldemocracydigitalantdemocracy.blogspot.com/
Grade Roughly Based On: Att/Part 15%, Reading Notebook 25%, Reading 10%, In-Class Report 10%, Final Keywords Map 40%
Course Description:
This course will try to make sense of the impacts of technological change on public life. We will focus our attention on the ongoing transformation of the public sphere from mass-mediated into peer-to-peer networked. Cyberspace isn't a spirit realm. It belches coal smoke. It is accessed on landfill-destined toxic devices made by wretched wage slaves. It has abetted financial fraud and theft around the world. All too often, its purported "openness" and "freedom" have turned out to be personalized marketing harassment, panoptic surveillance, zero comments, and heat signatures for drone targeting software. We will study the history of modern media formations and transformations, considering the role of media critique from the perspective of several different social struggles in the last era of broadcast media, before fixing our attention on the claims being made by media theorists, digital humanities scholars, and activists in our own technoscientific moment.
Provisional Schedule of Meetings
Week One, January 23: What Are We Talking About When We Talk About "Technology" and "Democracy"?
Week Two, January 30: Digital,
Laurie Anderson: The Language of the Future
Martin Heidegger, The Question Concerning Technology
Evgeny Morozov, The Perils of Perfectionism
Paul D. Miller (DJ Spooky), Material Memories
POST READING ONLINE BEFORE CLASS MEETING
Week Three, February 6: The Architecture of Cyberspatial Politics
Lawrence Lessig, The Future of Ideas, Chapter Three: Commons on the Wires
Yochai Benkler, Wealth of Networks, Chapter 12: Conclusion
Michel Bauwens, The Political Economy of Peer Production
Saskia Sassen, Interactions of the Technical and the Social: Digital Formations of the Powerful and the Powerless
My own, p2p Is Either Pay-to-Peer or Peers-to-Precarity
Jessica Goodman, The Digital Divide Is Still Leaving Americans Behind
American Civil Liberties Union, What Is Net Neutrality
Dan Bobkoff, Is Net Neutrality the Real Issue?
Week Four, February 13: Published Public
Dan Gillmor, We the Media, Chapter One: From Tom Paine to Blogs and Beyond
Digby (Heather Parton), The Netroots Revolution
Clay Shirky, Blogs and the Mass Amateurization of Publishing
Aaron Bady, Julian Assange and the Conspiracy to "Destroy the Invisible Government"
Geert Lovink, Blogging: The Nihilist Impulse
Week Five, February 20: Immaterialism
John Perry Barlow, A Declaration of the Independence of Cyberspace
Katherine Hayles, Liberal Subjectivity Imperiled: Norbert Wiener and Cybernetic Anxiety
Paulina Borsook, Cyberselfish
David Golumbia, Cyberlibertarians' Digital Deletion of the Left
Richard Barbrook and Andy Cameron, The Californian Ideology
Eric Hughes, A Cypherpunk's Manifesto
Tim May, The Crypto Anarchist Manifesto
Week Six, February 27: The Architecture of Cyberspatial Politics: Loose Data
Lawrence Lessig, Prefaces to the first and second editions of Code
Evgeny Morozov, Connecting the Dots, Missing the Story
Lawrence Joseph Interviews Frank Pasquale about The Black Box Society
My Own, The Inevitable Cruelty of Algorithmic Mediation
Frank Pasquale, Social Science in an Era of Corporate Big Data
danah boyd and Kate Crawford, Critical Questions for Big Data
Bruce Sterling, Maneki Neko
Week Seven, March 6: Techno Priesthood
Evgeny Morozov, The Meme Hustler
Jedediah Purdy, God of the Digerati
Jaron Lanier, First Church of Robotics
Jalees Rehman, Is Internet-Centrism A Religion?
Mike Bulajewski, The Cult of Sharing
George Scialabba, Review of David Noble's The Religion of Technology
Week Eight, March 13: Total Digital
Jaron Lanier, One Half of a Manifesto
Vernor Vinge, Technological Singularity
Nathan Pensky, Ray Kurzweil Is Wrong: The Singularity Is Not Near
Aaron Labaree, Our Science Fiction Future: Meet the Scientists Trying to Predict the End of the World
My Own, Very Serious Robocalyptics
Marc Stiegler, The Gentle Seduction
Week Nine, March 16-20: Spring Break
Week Ten, March 27: Meet Your Robot God
Screening the film, "Colossus: The Forbin Project"
Week Eleven, April 3: Publicizing Private Goods
Cory Doctorow, You Can't Own Knowledge
James Boyle, The Second Enclosure Movement and the Construction of the Public Domain
David Bollier, Reclaiming the Commons
Astra Taylor, Six Questions on the People's Platform
Week Twelve, April 10: Privatizing Public Goods
Nicholas Carr, Sharecropping the Long Tail
Nicholas Carr, The Economics of Digital Sharecropping
Clay Shirky, Why Small Payments Won't Save Publishing
Scott Timberg, It's Not Just David Byrne and Radiohead: Spotify, Pandora, and How Streaming Music Kills Jazz and Classical
Scott Timberg Interviews Dave Lowery, Here's How Pandora Is Destroying Musicians
Hamilton Nolan, Microlending Isn't All It's Cracked Up To Be
Week Thirteen, April 17: Securing Insecurity
Charles Mann, Homeland Insecurity
David Brin, Three Cheers for the Surveillance Society!
Lawrence Lessig, Insanely Destructive Devices
Glenn Greenwald, Ewan MacAskill, and Laura Poitras, Edward Snowden: The Whistleblower Behind the NSA Surveillance Revelations
Daniel Ellsberg, Edward Snowden: Saving Us from the United Stasi of America
Week Fourteen, April 24: "Hashtag Activism" I
Evgeny Morozov, Texting Toward Utopia
Hillary Crosley Coker, 2013 Was the Year of Black Twitter
Michael Arceneaux, Black Twitter's 2013 All Stars
Annalee Newitz, What Happens When Scientists Study Black Twitter
Alicia Garza, A Herstory of the #BlackLivesMatter Movement
Shaquille Brewster, After Ferguson: Is "Hashtag Activism" Spurring Policy Changes?
Jamilah King, When It Comes to Sports Protests, Are T-Shirts Enough?
Week Fifteen, May 1: "Hashtag Activism" II
Paulina Borsook, The Memoirs of a Token: An Aging Berkeley Feminist Examines Wired
Zeynep Tufekci, No, Nate, Brogrammers May Not Be Macho, But That's Not All There Is To It; How French High Theory and Dr. Seuss Can Help Explain Silicon Valley's Gender Blindspots
Sasha Weiss, The Power of #YesAllWomen
Lisa Nakamura, Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital
Yoonj Kim, #NotYourAsianSidekick Is A Civil Rights Movement for Asian American Women
Jay Hathaway, What Is Gamergate
Week Sixteen, May 8: Digital Humanities, Participatory Aesthetics, and Design Culture
Claire Bishop, The Social Turn and Its Discontents
Adam Kirsch, Technology Is Taking Over English Departments: The False Promise of the Digital Humanities
David Golumbia, Digital Humanities: Two Definitions
Tara McPherson, Why Are Digital Humanities So White?
Roopika Risam, The Race for Digitality
Wendy Hui Kyong Chun, The Dark Side of the Digital Humanities
Bruce Sterling, The Spime
Hal Foster, Design and Crime
FINAL PROJECT DUE IN CLASS; HAND IN NOTEBOOKS WITH FINAL PROJECT