Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, February 12, 2009

Argumentative Writing Handouts and Guidelines

Over the years teaching argumentative writing classes for undergraduates at Berkeley and elsewhere I have accumulated a number of general guidelines, workshopping templates, peer-editing worksheets and so on, some of which I have presented in teaching seminars, or which have circulated informally in bits and pieces, some online, others as smudged samizdat. Occasionally (and weirdly often lately) I get requests for copies of this material from former students who have gone on to teach themselves or who think they might apply in different contexts (organizational mediation and facilitation, that sort of thing), or from colleagues who remember some presentation I delivered who knows when, or what have you. So, anyway, I am publishing some of the material I get requests for here, just so that they are readily available for anybody who asks me for them. Another, slightly longer piece is forthcoming, but I want to edit it a bit first, and since I'm off to teach it will have to wait, come to think of it. But, anyway, the last few posts have been different enough from the usual in tone and substance that I thought I should explain why I put them here. Perhaps some will interest you despite the change of pace, but come what may, I'll be back to the usual technodevelopmental social struggle, corporate-militarist critique, and centaur softcore you have come to expect soon enough.

THESIS WORKSHOP

Every argumentative paper you write for our course must have a thesis. A thesis is a claim. It is a statement of the thing your paper is trying to show your own readers about a text you have read. Very often, the claim will be simple enough to express in a single sentence, and it will usually appear early on in the paper to give your readers a clear sense of the project of your paper. A good thesis is a claim that is strong. For our purposes, the best way to define a strong claim is to say it is a claim for which you can imagine an intelligent opposition. It is a claim that you actually feel you need to argue for, rather than a very obvious sort of claim or a report of your own reactions to a text (which you don't have to argue for at all). Remember, when you are producing a reading about a complex literary text like a novel, a poem, or a film, the object of your argument will be to illuminate the text, to draw attention to some aspect of the work you think the text is accomplishing.

Once you have determined the detail or problem or element in a text that you want to draw your reader's attention to and argue about, your opposition will likely consist of those who would focus elsewhere because they don't grasp the importance of your focus, or who would draw different conclusions than you do from your own focus.

The thesis names your paper's task, its project, its object, its focus. As you write your papers, it is a very good idea to ask yourself these questions, from time to time: Does this quotation, does this argument, does this paragraph directly support my thesis in some way? If it doesn't you should probably delete it, because this probably means you have gotten off track. If you are drawn repeatedly away from what you have chosen as your thesis, ask yourself whether or not this signals that you really want to argue for some different thesis.

THESIS WORKSHOP EXERCISE ONE:

Brainstorm. Take a sheet of paper and in roughly ten minutes write down a dozen or so claims you can make about your chosen text. Don't worry about whether these claims are "deep" or whether they are "interesting," just write down claims that you think are true about the text.

1.
2.
3.
4.
5.
6.
7.
8.
9.
10.

Either in individual consultation with your instructor or in small peer groups: Once the time is up, read over your claims. Eliminate claims that are not really about the text at all. For example, eliminate claims that say the text is "good," or "correct," or "effective" -- since these are really claims about the way you react to the text rather than claims about the text itself. Eliminate claims that say the text is "wrong," or "incorrect," or "ineffective" since, again, these are really claims about you, or they are claims that will lead you to discuss some more general topic (like politics, or history, or philosophy) rather than remaining focused on the text itself.

How many claims are left? Do any of these claims seem especially interesting to you? Can you imagine how you might argue for some of them in a conversation with somebody who disagreed with you about them? Do some of the claims really say the same thing in different ways? Do they suggest some other claims that might express your actual interests more closely?

THESIS WORKSHOP EXERCISE TWO:

You should now have a couple of candidate theses remaining. Now, for each of these possible thesis claims, come up with the strongest or most obvious opposition. For example, what would the opposite claim be to the one you are making?

1.
2.
3.
4.
5.

Either in individual consultation with your instructor or in small peer groups: Read over these oppositions. Of course, you are likely to disagree with these claims since they are opposed to the ones you want to make yourself -- but can you imagine anyone actually making these oppositional claims about the text you have read?
If the opposition you have come up with seems vague or unintelligent or highly implausible, this probably indicates that you need to sharpen up your own initial thesis. Is there a version of your thesis that is more focused and specific, one that retains the spirit of your claim but provokes a more interesting opposition?

If the opposition you have written suddenly seems more compelling than the thesis itself, this probably indicates that the stakes of your project, or possibly your whole take on the text itself, are different from what you initially thought. Perhaps what you thought of as opposition to your thesis actually provides you with a stronger thesis and a new direction for your own paper.

Peer Editing Worksheet

A peer edit is not an itemized list of broad impressions, problems, or compliments, but should represent a sustained and sympathetic argumentative engagement with the text you are reading.

Editors, you should provide comments in the form of a short essay that clearly answers all or most of the following questions:

1. What is your own name?
2. What is the name of the paper's author?
3. What is the title of the paper?
4. Did the paper satisfy the expectations raised in its title? **
5. In your own words, state what you think to be the thesis of the paper in one or two sentences. **
6. Was this thesis expressed clearly in the paper itself?
7. Is this a strong thesis?
8. Why or why not?
9. Can you imagine an intelligent opposition to this thesis?
10. What might this be?
11. Does the author remain true to this thesis through the paper? **
12. Were there important terms that needed stronger or clearer definitions? **
13. If yes, what were they?
14. Did the author use quotations from the text effectively to justify and illustrate their interpretations?
15. Did the author anticipate relevant objections to their various claims? **
16. Name an objection that either should have been addressed or which warranted a deeper exploration than the paper presently provides.
17. Did the author's handling of possible objections contribute to the strength of the case the paper is making, or distract from that case as you understood it?
18. Comment on the paper's line of argument (its overall clarity, the smoothness of its transitions and substantiations, the order in which it developed its points, etc.). **
19. Comment on the paper's prose (style, grammar, sentence construction, punctuation, etc.).
20. What qualities did you like best about the paper? **
21. What is the single most important aspect of the paper that the author should work on before handing it in?

Things to consider as you read the comments of your editors:

1. What were the problems or concerns that most preoccupied you about your paper before beginning this peer editing process?
2. Were those concerns addressed by your editors? [If not, demand that they be.]
3. For each editor, which comments were most helpful to you?
4. Which comments would be more helpful if they were clarified or amplified somewhat? [Ask for clarifications or examples or suggestions on these issues.]

You should note that these are the questions which guide my own readings of your papers, and that my marginal comments and concluding discussion will tend to register my preoccupation with these same questions.

**These are questions you should make a habit of asking of any text at all that you are reading critically.

Writing a Précis

One of the key requirements for our course involves the writing of a précis. Think of this précis as a basic paraphrase of the argumentative content of a text (or of a key chapter or section of a longer or especially complex text).

Here is a broad and informal guide for a précis, but it also provides a pretty good model of the sorts of questions you should always ask of a text as you are reading it critically, and again after you have finished reading it. Don't treat this as an ironclad template, but as a rough approach to producing a précis -- a truly fine and useful précis need not necessarily address every one of these questions.

A précis should provide answers to fairly basic questions such as:

1. What, in your own words, is the basic gist of the argument?

2. To what audience is it pitched primarily? (Do you see yourself as part of that intended audience, and how does your answer impact your reading of the argument?) Does it anticipate and respond to possible objections?

3. What do you think are the argument's stakes in general? To what end is the argument made? How has this end shaped the argument in your view?
a. To call assumptions into question?
b. To change convictions?
c. To alter conduct?
d. To find acceptable compromises between contending positions?
4. Does it have an explicit thesis? If not, could you provide one in your own words for it?

5. What are the reasons and evidence offered up in the argument to support what you take to be its primary end? What crucial or questionable warrants (unstated assumptions the argument takes to be shared by its audience, often general attitudes of a political, moral, social, or cultural nature) does the argument seem to depend on? Are any of these reasons, pieces of evidence, or warrants questionable in your view? Do they support one another or introduce tensions under closer scrutiny? Do these implicit assumptions clash with explicit claims made elsewhere in the text?

6. What, if any, kind of argumentative work is being done by metaphors and other figurative language in the piece? Do the metaphors collaborate to paint a consistent picture, or do they clash with one another? What impact does this have on their argumentative force?

7. Are there key terms in the piece that seem to have idiosyncratic definitions, or whose usages seem to change over the course of the argument?

As you see, a piece that interrogates a text from these angles of view will yield something between a general book report and a close reading, but one that focuses on the argumentative force of a text. For the purposes of our class, such a précis succeeds if it manages

(1) to convey the basic flavor of the argument of the text and
(2) to provide a good point of departure for a rich public discussion of the text.

Four Habits of Argumentative Writing

In this course you will be producing argumentative writing based on close textual readings. We will spend a good deal of time talking together about what it means to write persuasively and read closely, what sorts of things can usefully be considered texts in the first place, and under what circumstances, and so on, but as a first approximation of what I mean I am offering you four general habits of attention and writing practice, guidelines I will want you to apply to your writing this term. If you can incorporate these four writing practices into your future work you will have mastered the task of producing a competent argumentative paper for just about any discipline in the humanities that would ask you for one. Incidentally, I will also say that taking these habits truly to heart goes a long way in my view toward inculcating the critical temper indispensable for good citizenship in functioning democracies in a world of diverse and contentious stakeholders with urgent shared problems.

A First Habit

An argumentative paper will have a thesis. A thesis is a claim. It is a statement of the thing your paper is trying to show. Very often, the claim will be straightforward enough to express in a single sentence or so, and it will usually appear early on in the paper to give your readers a clear sense of the project of the paper. A thesis is a claim that is strong. A strong claim is a claim for which you can imagine an intelligent opposition. It is a claim that you feel a need to argue for.

Close readings and research papers may seem very different as writing projects, but a thesis is the key to both. Remember, when you are producing a reading about a complex literary text like a novel, a poem, or a film, the object of your argument will be to illuminate the text, to draw attention to some aspect of the wider work the text is accomplishing. Once you have determined the dimension or element in a text that you want to argue about, your opposition might consist of those who would focus elsewhere or who would draw different conclusions from your own focus.

When you are writing a research paper, remember that you are not simply exploring a topic, you are seeking an answer to a question. That question (sometimes in the form of a hypothesis that would answer the question) directs your research, though sometimes the research process itself can change your question. Your answer to your research question is your research paper's thesis, the claim you support with the evidence you gathered in your research and present in the body of the paper itself.

Your thesis is your paper's spine, your paper's task. As you write your papers, it is a good idea to ask yourself the question, from time to time: Does this quotation, does this argument, does this paragraph support my thesis in some way? If it doesn't, delete it.
If you are drawn repeatedly away from what you have chosen as your thesis, ask yourself whether or not this signals that you really want to argue for some different thesis.


A Second Habit

You should define your central terms, especially the ones you may be using in an idiosyncratic way. Your definitions can be casual ones; they don't have to sound like dictionary definitions. But it is crucial that once you have defined a term you stick to the meaning you have assigned it. Never simply assume that your readers know what you mean or what you are talking about. Never hesitate to explain yourself for fear of belaboring the obvious. Clarity never appears unintelligent.

A Third Habit

You should support your claims about the text with actual quotations from the text itself. In this course you will always be analyzing texts (broadly defined) and whatever text you are working on should probably be a major presence on nearly every page of your papers. A page without quotations is often a page that has lost track of its point, or one that is stuck in abstract generalizations. This doesn't mean that your paper should consist of mostly huge block quotes. On the contrary, a block quote is usually a quote that needs to be broken up and read more closely and carefully. If you see fit to include a lengthy quotation filled with provocative details, I will expect you to contextualize and discuss all of those details. If you are unprepared to do this, or fear that doing so will introduce digressions from your argument, this signals that you should be more selective about the quotations to which you are calling attention.

A Fourth Habit

You should anticipate objections to your thesis. In some ways this is the most difficult habit to master. Remember that even the most solid case for a viewpoint is vulnerable to dismissal by the suggestion of an apparently powerful counterexample. That is why you should anticipate problems, criticisms, counterexamples, and deal with them before they arise, and deal with them on your own terms. If you cannot imagine a sensible and relevant objection to your line of argument it means either that you are not looking hard enough or that your claim is not strong enough.

Tuesday, February 03, 2009

Robot Cultists Getting Too N.I.C.E. By Half

I get this question nearly every day: Why, oh, why, do you take all this superlative silliness seriously?

It's easy to discount Superlativity once you've slogged through the critique.

But try to recapture the state of mind with which you skated over the ideological framing of "tech news" before you gave Superlativity any serious thought. Try to recapture the disinterest with which you passed over platitudes in popular, professional, and academic media that treat some scarcely worked-through genetic technique as justifying the question "do you want to live forever?" Or that straightforwardly claim that economies or societies or personalities somehow "evolve." Or that declare the "experts" worry that "people" are unprepared to make good decisions in the face of "accelerating change." Or that confidently propose that "good design" (alone?) can achieve what are in fact palpably political accomplishments like sustainability, social justice, democratic participation, security, liberty, progress. Or that oh so politely indicate that some human lifeways, however wanted they may be by those who incarnate them for the present, can nevertheless be declared "suboptimal" in the face of "enhancement" that is "sure" to "engineer" them out of existence in favor of the more "optimal" morphologies and lifeways of the bland blank catalogue-models and workaholics we presumably pine to be in our best, most clearheaded moments.

Superlativity as it is celebrated by the Robot Cultists is indeed an unsubstantiated, sociopathic, inelegant, infantile mess of theses and themes, but it is at one and the same time an iceberg tip, a symptom of a deeper more prevailing tendency to a reductionism conjoined to elitism and loathing of life that plays out in mainstream neoliberal and neoconservative corporate-militarist global "development" discourse, a constellation of attitudes crystallizing in something like a futurological programme and suffusing the self-image of whole academic disciplines and professional populations, among them some that attract torrents of cash and uncritical enthusiasm.

It's easy to expose the facile formulations of the futurological congress, to snicker at the oafish ever-marginal Robot Cult. But there are strong structural affinities between the ruling rationales of corporate-militarist incumbency and the superlative mindset. One might surely have felt the same disdain a lingering intelligent look at the Robot Cultists inevitably inspires, and with equal justice, in the early days when another klatch of badly off-putting off-kilter boys with toys who fancied themselves the smartest things in any room unleashed Neoconservatism on the world to the cost of us all.

What could be more perfect than an article in the Financial Times informing us that
Google and Nasa are throwing their weight behind a new school for futurists in Silicon Valley to prepare scientists for an era when machines become cleverer than people.

The new institution, known as "Singularity University", is to be headed by Ray Kurzweil, whose predictions about the exponential pace of technological change have made him a controversial figure in technology circles.

Google and Nasa's backing demonstrates the growing mainstream acceptance of Mr Kurzweil's views, which include a claim that before the middle of this century artificial intelligence will outstrip human beings, ushering in a new era of civilisation.

To be housed at Nasa's Ames Research Center, a stone's-throw from the Googleplex, the Singularity University will offer courses on biotechnology, nano-technology and artificial intelligence.

The so-called "singularity" is a theorised period of rapid technological progress in the near future. Mr Kurzweil, an American inventor, popularised the term in his 2005 book "The Singularity is Near".

Proponents say that during the singularity, machines will be able to improve themselves using artificial intelligence and that smarter-than-human computers will solve problems including energy scarcity, climate change and hunger.

Yet many critics call the singularity dangerous. Some worry that a malicious artificial intelligence might annihilate the human race.

Mr Kurzweil said the university was launching now because many technologies were approaching a moment of radical advancement. "We're getting to the steep part of the curve," said Mr Kurzweil. "It's not just electronics and computers. It's any technology where we can measure the information content, like genetics."

The school is backed by Larry Page, Google co-founder, and Peter Diamandis, chief executive of X-Prize, an organisation which provides grants to support technological change.

"We are anchoring the university in what is in the lab today, with an understanding of what's in the realm of possibility in the future," said Mr Diamandis, who will be vice-chancellor. "The day before something is truly a breakthrough, it's a crazy idea."

Despite its title, the school will not be an accredited university. Instead, it will be modelled on the International Space University in Strasbourg, France, the interdisciplinary, multi-cultural school that Mr Diamandis helped establish in 1987.

I leave it as an exercise for the reader (for now) to simply pluck out the unsubstantiated superlative platitudes contained in this breathlessly evangelizing account (did you notice that even the notional registration of skepticism in the article essentially functions as a demand for more funds for our "serious" singularitarians?), to observe the way in which these relentlessly reductive and at once hyperbolically expansive techno-utopian chants co-mingle and reinforce one another. Truly diligent readers may enjoy connecting the dots between these ideas and their proponents to the most ardent expressions of market fundamentalist ideology as well. As for the reference to the "N.I.C.E." in the title of my post, consider it an ambivalent recommendation of a dusty somewhat silly but still prescient book.

Sunday, February 01, 2009

Why Do You Take All This Superlative Silliness Seriously?

The Eternal Return of The Question, upgraded and adapted from the Moot:

I can only speak for myself, but I take transhumanist formulations seriously because they seem to me to exert a disproportionate and deranging influence on technodevelopmental deliberation at the worst imaginable time.

As I have said, superlative formulations have force because they
[a] activate customary irrational passions that are already occasioned by disruptive technoscientific change (panic from mistaken impotence, greed for mistaken omnipotence), because they

[b] congenially oversimplify and dramatize technodevelopmental complexities (reframing them as transcension, apocalypse, revolution, enhancement, immortalization) for lazy, undercritical, or overwrought people and media formations, because they

[c] conduce to the benefit of incumbent interests that portray themselves as more knowledgeable about matters of "advanced" or "accelerating" developments to justify circumventions of democratic deliberation, or frame technodevelopment in terms of "existential risk" that divert deliberation down corporate-militarist avenues (geoengineering, megascale infrastructure, centralized co-ordinated response).

I can go on, and have done, but I think you get the picture.

The point is, most of the reactionary formations that have menaced late-modernity (extractive-industrial-broadcast epoch) began as marginal subcultures of cocksure white boys certain they had the Keys to History in their hands. The silliness of superlativity is not enough to justify ignoring it or failing to understand it, especially once we see the context of techno-utopianism in which it so legibly locates itself.

I also believe that the Robot Cultists in their very extremity provide unusually distilled illustrations of the associations, dynamisms, guiding figures and so on that also play out in more mainstream neoliberal "globalism" and "development" discourse, and hence put us in a better position to understand the irrationality and authoritarianism of that discourse.

Anyway, as you know from the title of my blog, my hero is the political theorist Hannah Arendt (Amor Mundi, the love of the world, was her personal motto), who insisted that the philosopher's task is understanding, and where politics is concerned this means "thinking what we are doing."

I find that understanding the transhumanists and discerning the ways in which mainstream developmental discourse is illuminated by reference to their extremity helps "think what we are doing" in a moment of unprecedented planetary catastrophe (resource descent, climate change, WMD proliferation), planetary promise (proliferating p2p formations), planetary disruption (the shift into non-normalizing genetic, prosthetic, and cognitive therapies). It's as simple as that.

Thursday, January 29, 2009

Biology IS Special

Upgraded and Adapted from the Moot, a continuation of the discussion in the prior post (possibly with a different interlocutor, though):

I wrote: "Life is lived in vulnerable bodies, intelligence is performed in squishy brains and squishy socialities."

"James" responded: Yes, this is quite true, right now.

So, get back to me when your counterexample isn't made up bullshit.

Magical thinking isn't daring, it's dumb.

James writes: I agree with everything else in [your] post -- I just feel strongly about assumptions that biology is somehow special. It's not. Carbon's just what initially won out over everything else.

But, the thing is, biology is special, surely?

Actual lives, actually embodied intelligence, actual persons, are all actually special.

I know James will (rightly) disapprove of being made to seem as though he would explicitly deny this (since I doubt he would), but I worry that we are led to denigrate the ways in which actually existing lives are vulnerable and actually existing intelligences are embodied when we indulge in what James surely intends as a more specialized usage of the term "special" here. (Although I do find it intriguing that James goes so far as to indicate not only disagreement but "strong feeling" on this question of not connecting intelligence too forcefully to the living world even when there is not as yet any empirical reason at all not to do that very thing, especially where, for example, the parts of intelligence connected to "strong feeling" are concerned.)

James writes: There is nothing inherently "intelligent" about biological systems, nor is there anything inherently "dumb" about non-biological systems. Intelligence is a product of the complexity of the system in question; whatever makes it up is a triviality (this is likely, anyway. God knows how long it will take to find out, though...).

To admit the truth that every life in the world you know is lived in a body and every intelligence you encounter and actually come to terms with is vulnerably lived and historically situated doesn't commit one to some grand claim about intelligence being a property "inherent" always only in biology to the logical exclusion of everything else or what have you.

I don't have any interest in making such a claim. I don't think there is any reasonable occasion that impels me to that claim. I don't think there is any reason for people sensibly to care about such a claim. I don't agree to play the game of that final parenthesis in which we are suddenly called upon to make and compare "predictions" and argue about attributions premised on caring about whatever is presumably being zeroed in on in this discussion of "inherence" or not of intelligence in life.

To be honest, asserting either that intelligence inheres always only in biological beings -- or worse, asserting the contrary -- just seems to me to make people talk confusedly about things that do exist in terms of things that don't exist.

I am convinced that a great many people who talk this way do so simply because they are scared of their vulnerability or ultimately of dying, and they want to linger "spiritually" or "informationally" beyond lived life and death, and the denial of life's and intelligence's palpable incarnation somehow facilitates these denials. Obviously not all who talk this way do so for this reason, but many indeed seem to.

Others who talk this way do so, I am convinced, oddly enough, because they don't like the humanities: their aesthetic temperament disdains the derangements of literal language in the figurative, they are impatient with the paradoxes and intractable dilemmas of theory, they grow painfully frustrated with the interminable processing of political or psychological difference, and so on, and a denigration of life's mess avails them a measure of more secure and instrumentally efficacious preoccupations -- which undeniably do have their beauty and power after all.

I realize the "made up bullshit" comment with which I began all this was unduly harsh. But the fact is the denial of the specialness of actually embodied intelligence, actually vulnerable lives is a truly extraordinary claim and I have never once encountered the extraordinary reason that justifies making it, nor certainly have I understood the curious tendency of those who make it to pretend that there is something extraordinary instead about the contrary claims that intelligence is embodied and life vulnerable when literally every intelligence and life has testified to precisely this and none the other.

"Technology" Changes the Game

Upgraded and adapted from the Moot

I wrote:
There is an ongoing prosthetic elaboration of agency -- where "culture" is the widest word for prostheses in this construal -- which is roughly co-extensive with the ongoing historical elaboration of "humanity." But there are only techniques in the service of ends, and the ends are articulated by pretty conventional moral and aesthetic values and embedded in pretty conventional political narratives -- democratization against elitism, change against incumbency, consent against tyranny, equity for all against excellence for few, and so on. The pretense or gesture of a technoscientific circumvention of the political seems to me to conduce usually to de facto right-wing politics, since it functions to de-politicize as neutrally "technical" a host of actually moral, aesthetic, political quandaries actually under contest. This is a mistake as easily made by dedicated well-meaning people of the left or the right as by cynical or dishonest ones, or simply by foolish people, whatever their political sympathies. But it is always a mistake.


To which someone "Anonymously" responded:

Your "always" triggers me Dale. Technology changes the rules of the political game.

My replies to their (italicized) comments follow:

Weather changes the rules of the political game. Pandemics change the rules of the political game. Personalities change the rules of the political game. The devils, as well as the angels, are in the details.

When I insist that "technology" does not exist "in general" this is far from a denial that a diversity of techniques and devices exist and have an impact in the world. Quite the opposite.

There is no such thing as a "technology" that subsumes or subtends all the instances to which that description attaches in a way that can be isolated as a factor with a general predictable impact on political, social, cultural, historical change.

It is the deployment of technologies and the exercise of techniques arising out of unique historical situations, playing out unpredictably in historical dynamisms, and in the service of a diversity of ends that yields technodevelopmental effects.

To ascribe an outcome to "technology" is almost always vacuous. That this sort of utterance has become such an explanatory commonplace is enormously curious and even suspicious.

When most people became literate, it was possible to discuss politics with a much broader group of people.

And "becoming literate" = "technology" in this example?

What, everybody suddenly got bonked in the head with a book or maybe even a printing press? Just think of the complex multivalent practical, cultural, economic, institutional, legal, moral, psychological dynamisms and trajectories that materially fleshed out "becoming literate" in different historical, demographic, personal situations.

What developmental generalization are you drawing from that complex that presumably also obtains for all other instances of the "technological," including inventing and distributing and making use of the cotton gin and the internal combustion engine and the crossbow and anaesthesia and the technique of perspective painting?

If/when people are able to upload and thereby create close to immortal entities, they won't have the same priorities as people restricted to living less than a century.

Here we go. Look, you are playing fast and loose with the English language in an all too customarily religious manner here, if I may say so. "If/then" statements cite causal conventions arising from and depending for their intelligibility on our experience of a world with mid-scale furniture and communicative peers and so on behaving in familiar ways.

When a religious person speaks of their expectation of personal resurrection as a soul and of its ascent into an immortal afterlife in Heaven these utterances can only be taken by sensible people as metaphorical utterances without literal reference or as public signals of subcultural membership in a moral or otherwise interpretative community, rather like a secret handshake -- or less charitably they can be taken as expressions of extreme confusion or insanity.

Precisely the same goes for statements about "uploading." When I dismiss these utterances you misunderstand me if you assume I am disagreeing with you on a matter of a testable hypothesis -- even when the form my dismissal takes is "never gonna happen." I am saying that what we mean by "persons," what we mean by "living" cannot coherently accommodate "uploading" or "immortality" and that people who say these things must be speaking metaphorically or subculturally (indeed, Robot Cult-urally) or be deeply confused or possibly a little crazy. Life is lived in vulnerable bodies, intelligence is performed in squishy brains and squishy socialities.

I believe that a majority of the elderly able to do so will do it,

When you use the verb "able" and the pronoun "it" here in respect to "uploading" you make the mistake of imagining you know something about which you are talking. Unfortunately, you don't.

and they will be both a minority (of earth's total population) and a very resourceful group.

See, you are indulging in a full froth of faithful handwaving while imagining yourself to be engaging in some sort of policy wonk discourse. This is a problem.

If/when we are able to live comfortably on other planets, environmental issues on this planet wont be as important as they are now.

No doubt the same would be true if we could live in other dimensions or perform spells with wands. That human life on other suitably terraformed planets is logically feasible in ways that interdimensionality or magicality likely are not is irrelevant given that the scientific and, more to the point, political, legal, practical problems of environmentalism are urgently proximate in ways that render remote developmental possibilities like interplanetary diaspora and logical impossibilities like practical wand magic exactly equally irrelevant (at best) to those who would attend to actual problems.

Every second wasted in the contemplation of techno-utopian "solutions" to real problems -- however earnest -- is functionally equivalent to time devoted to the active frustration of problem-solving or active denialism about the problem in the first place. Again, at best it is a matter of handwaving by the faithful confusing itself and others for policy discourse.

Whatever political system evolves within the next hundred years, I don't think the above will change.

Political systems don't "evolve." And I have no idea what actually substantial thing you have described in "the above" is presumably not going to change or what significance you think attaches to whatever invariance you think you have hit upon.

If the world were otherwise than it is, its problems would be different than they are, too.

Uh, sure. So what?

Sunday, January 25, 2009

What Do Transhumanists Actually Believe In?

To continue from my last post with the discussion of Russell Blackford's recent defense of the superlative technocentricity of the so-called Cosmic Engineers, I want to shift my attention away from what seems to me to be a curiously misplaced preoccupation in Blackford's piece with presumably fashionable and tyrannical political correctness among the relativist academic Left (of all things) to the one statement in the entire piece that seemed to me to name something like what Blackford thinks "transhumanism" actually, positively, substantially stands for. He writes:
I will always be looking for avenues to argue as strongly and effectively as I can for what I believe -- which includes the idea that technology can improve the human situation and enhance human capacities.

I find this statement problematic at two different levels.

First:

As an everyday sort of utterance it seems to me that the belief that "technology can improve the human situation and enhance human capacities" is as vapid a commonplace as one could ever hope to find.

Is there anybody on earth who manages consistently to disagree with this belief? Even deep ecologists who devote their lives to the critique of "the technological society" tend to defend the notion of "appropriate technology," after all, and even the ones who haven't exactly thought the matter through still tend to use pencils and wear eyeglasses and visit the doctor.

The idea that one invents tools to do wanted things with them is surely rather built into the notion of "technology" in the first place? One doesn't want to end the story there -- there are questions about what is wanted in what sense, with what consequences, and so on, but we'll turn to a slightly deeper intervention in a moment.

As for "enhancing human capacities," this is a bit trickier, but at the same everyday speech level at which almost everybody already believes as a matter of course that technology can be helpful, it is also true that almost everybody already believes as a matter of course that healthcare is a good thing (where it is made to be as safe and fair as may be and so on), and that healthcare is a matter of intervening in dis-ease to render ease.

Again, there are questions whether rendering ease is quite the same thing as "enhancement" but we'll get to that in a moment.

At this first level of attention, though, I just want to point out that there is a really substantial sense in which the belief Blackford declares to be his own and seems to identify with "transhumanism" constitutes such a complete commonplace that the question becomes against whom Blackford really imagines himself to be in disagreement, and why on earth anybody would imagine one needs a new (?), unique (?) "movement" or "program" to affirm or defend or promote these commonplaces.

Second:

Once we set aside everyday usage and interrogate these commonplaces in a more analytic way we find that they don't really hold up to scrutiny at all (this is no argument in my view against their perfect usefulness in their everyday usage; that would require a different argument).

Although I have no trouble at all making sense of the everyday utterance that "technology can improve the human situation," this is not at all an utterance I would be comfortable affirming in a careful accounting of technoscientific change.

If one is taking greater care around these claims in an effort to understand technodevelopmental social struggle the first thing one will immediately observe is that while some technoscientific changes improve the situation (whether in the short term or in the longer term) of at least some human beings (though rarely all, and never in the same way or to the same extent) some do not, and that the logical possibility that technology can improve things for some is less to the point than determining just whose lot will be improved, and how much, for how long, at what cost, at what risk, to whom and on what terms, and then determining how best case outcomes might be facilitated in light of all this.

What one discovers soon enough is that it is never "technology" as such that "improves" things for anybody.

There is no such thing as "technology" at that level of generality in the first place, and it does a terrible disservice to sense to imply otherwise. Rather, there are historically situated technoscientific vicissitudes caught up in the ongoing technodevelopmental social struggle of the diversity of stakeholders to technoscientific change who share the world.

Further, it is the uses to which technoscientific discoveries are put that determine their impact for good or ill. These uses are driven by moral, aesthetic, ethical, and political values -- and are not somehow determined by what passes for "technology" itself in any given moment of technodevelopmental social struggle.

This matters, because it means that even those who focus on the political problems and promises of technoscientific change in particular will rightly attend more to the terms of fairly conventional political value than to the particulars of technoscience to the extent that their concern is actually more political (facilitating equity, diversity, and consent, say) than specifically scientific.

The same sort of concern is very much alive when one wants to look closely at the notion of "enhancing human capacities." Enhancement is always: enhancement -- in the service of some ends over others; enhancement -- according to whom as against whom else.

While we can agree that healthcare provision is being rendered non-normalizing in an unprecedented way by emerging genetic, prosthetic, and cognitive therapies, the determination of which non-normalizing interventions count as "enhancements" is not somehow settled by the therapies at hand but arrived at through the scene of actually informed, actually nonduressed consensual self-determination in planetary multiculture.

To the extent that "transhumanism" wants to imply that political ends like the "improvement of the human situation" are determined by scientific developments apart from political contestation and consensual self-determination then this seems to me a facile, too-familiar, dangerously anti-democratizing thesis of reductionism coupled to technocratic elitism.

To the extent that "transhumanism" wants to imply that it can dictate the terms on which non-normalizing healthcare will yield "enhancement of human capacities" and when it will not apart from political contestation and consensual self-determination then this seems to me a moralizing, too-familiar, dangerously anti-democratizing thesis of eugenicism coupled to technocratic elitism again.

To the extent that "transhumanism" wants no more than to imply that tools can be useful and healthcare can be a good thing, well, I'm afraid one doesn't need to join a Robot Cult to advocate such commonplaces; indeed, one probably needs to find one's way to a Luddite Cult as marginal as the Robot Cult itself to find anybody who consistently disapproves of such commonplaces.

Now, if one wants to profess faith in a technologically determined human destiny aspiring toward the accomplishment of secularized theological omni-predicates -- digital superintelligence, therapized superlongevity, virtual or nanotechnological superabundance -- then I daresay one probably does need to join a Robot Cult to find a community of the like-minded, and the same goes for those who would recast eugenic parochialism as an emancipatory program in this day and age.

None of these results seem to me to conduce much to the benefit of those who would declare "movement transhumanism" a reasonable enterprise as it actually plays out in the world.

Condensed Critique of Transhumanism

UPDATE/Preface: The journal Existenz has published and made freely available online my essay Futurological Discourse and Posthuman Terrains, which now seems to me the best, most concise and yet elaborated introduction to my critique of transhumanism, and so I would preface the recommendations that follow with the suggestion that the Existenz article might also be a better starting point for some readers. The Existenz essay is rather densely philosophical in places, however, while many of the pieces that follow are more humorous or more readily digestible, and so I don't think that essay is a perfect substitute for the following by any means.

"Transhumanism" is essentially a techno-transcendental digital-utopian and/or "enhancement"-eugenicist futurological discourse and futurist sub(cult)ure. (Sometimes, I understand that the term has been used in connection with some trans activism as well, but that is not what I am talking about here -- and I want to be clear that I have devoted a lifetime of activism and writing and teaching to resisting sexist, heterosexist, cissexist patriarchy.) I have chosen the following handful of pieces as providing a condensed critique of the various "movement transhumanisms." This is the aspect of my anti-futurological critique which seems most interesting to most folks (for better or worse). Hundreds of posts, arranged by futurist topic as well as by the individual futurological author getting skewered are also to be found in my Superlative Summary for the real gluttons for punishment among you. While transhumanism is, strictly speaking, just one of the sects in the superlative futurological Robot Cult archipelago (others include the Extropians, Singularitarians, techno-immortalists, crypto-anarchists and bitconartists, cybernetic totalists, nano-cornucopiasts, geo-engineers, and so on) transhumanism does overlap considerably with most of the others and exhibits a certain rhetorical and subcultural representativeness.

As someone who respects real science and advocates real public commitments to science and critical thinking education and real public investments in research and sustainable infrastructure, I am annoyed of course with the deranging futurological frames and narratives of techno-transcendentalists (immortal cyberangels! nano-magick utility-fog!) and disasterbators (Robocalypse! grey goo!) who cater to the fears and fantasies of the uninformed and skew policy priorities (for instance, the futurological enablement of reactionary talk about raising the retirement age), not to mention the straightforward pseudo-scientific blathering of uploading circle-squarers (my critique in a phrase: you are not a picture of you) and cryonics cranks, cheerleading over drextechian genies-in-a-bottle, GOFAI-deadenders (my critique in a phrase: Moore's Law isn't going to spit out a sooper-intelligent Robot God Mommy to kiss your boo boos away, sorry), geo-engineering apologists for corporate-military eco-criminals, facile evo-devo reactionaries, not to mention all manner of digital utopian hucksters and TED-squawkers.

But to step back from the obvious, I also regard mainstream futurology as the quintessential discourse of neoliberal global developmentalism, market-mediation, and fraudulent financialization. There is a certain strain of delusive utopianism that drives neoliberalism's callous immaterialism (e.g., its focus on branding over labor conditions, its focus on fraudulent financialization over sustainable production) and hyperbolic salesmanship through and through, but what I describe as superlative futurological discourses represent a kind of clarifying -- and also rather bonkers -- extremity of this pseudo-utopianism. While there is obviously plenty that is deranging and dangerous about such techno-transcendental or superlative futurological discourses and the rather odd organizations and public figures devoted to them, what seems to me most useful about paying attention to these extreme and marginal formations is the way they illuminate underlying pathologies of the more prevailing mainstream futurological discourses we have come to take for granted in so much public policy discussion concerning science, technology, and global development.

Among these parallel pathologies, it seems to me, are shared appeals to irrational passions -- fears of impotence and fantasies of omnipotence -- shared tendencies to genetic reductionism, technological determinism, and a certain triumphalism about techno-scientific progress. I also discern in both mainstream and superlative futurology a paradoxical "retro-futurist" kind of reassurance being offered to incumbent and elite interests that "progress" or "accelerating change" will ultimately amount to a dreary amplification of the familiar furniture of the present world or of parochially preferred present values. Also, far too often, one finds in both mainstream and superlative futurology disturbing exhibitions of indifference or even hostility to the real material bodies and real material struggles in which lives, intelligences, lifeways, and human histories are actually incarnated in their actual flourishing diversity.

An easy way to think of the relation I am proposing between these two modes of futurology is to say that mainstream futurology suffuses our prevailing deceptive hyperbolic corporate-military PR/advertising discourse, while superlative futurology amplifies this advertising and promotional hyperbole into an outright delusive promise of personal transcendence (superintelligence, superlongevity, superabundance) of human finitude and this fraudulent speculation and public relations into outright organized sub(cult)ural religiosity.

The first four pieces below subsume transhumanism within the terms of my critique of superlative futurology, the next one focuses on the structural (and sometimes assertive) eugenicism of transhumanist "enhancement" discourse, and the final piece tries to provide a sense of the more positive perspective out of which my critique is coming:
A Superlative Schema

The Superlative Imagination

Understanding Superlative Futurology

Transhumanism Without Superlativity Is Nothing

Eugenics and the Denigration of Consent

Amor Mundi and Technoprogressive Advocacy
More recent pieces, Ten Reasons to Take Seriously the Transhumanists, Singularitarians, Techno-Immortalists, Nano-Cornucopiasts and Other Assorted Robot Cultists and White Guys of "The Future" and Ten Things You Must Fail to Understand to Remain A Transhumanist for Long may provide more accessible, certainly more pithy and snarky, summaries of many facets of the critique. Of course, if pithy is what is really wanted, my mostly aphoristic Futurological Brickbats anthology is possibly worth a look.

For those who are interested in the always controversial but not really very deep issue of the "cultishness" or not of the various superlative futurological sub(cult)ures, and just how facetious I am being when I refer to these futurological formations as "Robot Cults," I recommend this fairly representative post dealing with those questions (which do pop up fairly regularly). Perhaps more serious, at least potentially, there is this rather disorganized and muckraking archive documenting and exploring key figures and institutional nodes in the Robot Cult archipelago, exposing some of their more patent ties to reactionary causes and politics.

Sunday, January 18, 2009

So-Called Technoscientific Depoliticization Usually Conduces to the Benefit of Conservative Politics

Upgraded and adapted from the Moot, in response to a question about a superlative-minded technocentric of our acquaintance:
How can a self-declared "radical democrat" so easily switch from vilifying people because they are "libertopians" or "Rapture Nerds" to embracing them as soon as they declare their transhumanist faith or seem to attract enough buzz that they could be useful?

By making the mistake of thinking there is such a thing as a commitment to "technology in general" with a politics of its own, separable from conventional left against right politics -- superseding them in fact.

Of course, there is no such thing as "technology in general."

Particular techniques are fully susceptible of "naturalization" or "denaturalization," "artifactualization" or "deartifactualization" almost entirely according to their relative familiarity, or according to the relative disruptiveness of their applications in the present. That is to say, we are apt to describe as "natural" what might once have been conspicuously artifactual once we've grown accustomed to it, or to be struck by the artificiality of the long-customary should historical vicissitudes render its effects problematic.

There is an ongoing prosthetic elaboration of agency -- where "culture" is the widest word for prostheses in this construal -- and which is roughly co-extensive with the ongoing historical elaboration of "humanity." But this is a generality interminably articulated by technodevelopmental social struggle, there is no one politics we can sensibly assign it.

There are only techniques in the service of ends (and even these ends are plural in their basic character), and the ends tend to be inspired and articulated by pretty conventional moral and aesthetic values and embedded in and expressive of pretty conventional political narratives -- democratization against elitism, change against incumbency, consent against authority, equity for all against excellence for few, and so on.

The pretense or gesture of a technoscientific circumvention of the political -- and affirming a "technology politics" indifferent to the primary articulation of technoscientific change in the world by democratizing as against anti-democratizing politics finally amounts to such an effort at circumvention in my view -- seems to me usually to conduce to de facto right wing politics, since it functions to de-politicize as neutrally "technical" a host of actually moral, aesthetic, political quandaries actually under contest.

This is a mistake as easily made by dedicated well-meaning people of the left or the right, as by cynical or dishonest ones, or simply by foolish people, whatever their political sympathies.

But it is always a mistake.

Superlativity and Existential Risk Discourse

Updated and adapted from the Moot, in response to the question:
[D]o you think there is a place for deliberation about risks like nuclear war, pandemic disease, infrastructural collapse, etc. as a single class of entities?


Do I think these places for deliberation actually exist? Of course they do.

As it happens, I actually don't know that I believe there is ultimately more use than not in treating WMD proliferation (as exacerbated by militarist nation-statism), catastrophic climate change (as exacerbated by extractive-industrial production), proliferating pandemic vectors (as exacerbated by overurbanization), resource descent (as exacerbated by corporate-industrial agriculture practices), together with speculation about dramatic meteor impacts and gamma ray bursters, and I certainly don't think it makes any kind of sense to treat all these concerns as essentially of a piece with the silly pseudo-problems that preoccupy Robot Cultists, like how to cope with unfriendly Robot Gods, or planet-eating nanoblobs, or gengineered sooper-brained baby centaur clone armies.

The main point is that one needn't join a Robot Cult to find serious discussions of actually-proximate global security issues.

Indeed, very much to the contrary, Robot Cult versions of these discussions tend to contribute little but hyperbole and disastrously skewed priorities to these topics in my view -- although, no doubt, they also contribute a smidge of unearned credibility to Robot Cultists themselves who just love opportunistically to glom on to complex technoscience questions and exacerbate the irrational passions they inevitably inspire, substitute a confectionary dusting of hokey neologisms for relevant expertise, and then embed contentious issues in a dramatic science fictional narrative that compels attention but usually without shedding much light, all in the service of whomping up membership numbers, donor dollars, and media attention for the organizations with which they personally identify in their sub(cult)ural superlativity.

Wednesday, January 14, 2009

Let Your Futurological Freak Flag Fly

Look, although I am a crusty atheistical type of long standing myself, I'm also reasonably cheerfully nonjudgmental about the whole thing so long as I'm not getting lied or preached to.

I can't honestly say that it is my experience that organized religiosity seems to conduce much to either sanity or good conscience in those who make a big deal out of it, but I'm more or less content to say everybody should believe whatever they need to believe to get them through the night, at any rate until their beliefs start playing out in misleading, violent, or exploitative ways in the world.

The same is true of the fanciful faiths of the techno-immortalists, extropians, singularitarians, transhumanists, and other assorted superlative technocentrics, certainly.

By all means, Robot Cultists, cult robotically away to your heart's content, let your futurological freak flag fly.

Just don't try to peddle your Robot Cultism as
[1] constituting a novel, coherent, and systematic philosophical viewpoint; as

[2] advocating a unique, coherent, and needed political program; as

[3] contributing somehow to scientific knowledge or useful technique; or as

[4] engaging in serious public policy deliberation.

Otherwise, you know, let a bazillion blossoms bloom, and so on and so forth. I mean, after all, who cares?

Tuesday, January 13, 2009

The Democratizing Priority of Consensual Self-Determination in the Emerging Era of Non-Normalizing Healthcare

Upgraded and Adapted from the Moot:

The emergence of non-normalizing genetic, cognitive, and prosthetic therapies seems to me to demand a shift in the language of democratizing progressive healthcare advocacy from universal access (at least as its sole or even primary organizing principle) to consensual self-determination.

This suggestion raises understandable red flags for those who know well how a focus on consent-talk over access-talk or over more "neutral" harm-reduction-talk in this area has often functioned as a reactionary strategy simply to deny healthcare to vulnerable people as a way of engaging in class warfare, often with the ugliest kinds of sexist, racist, colonialist inflections.

I would like to think I manage to circumvent appropriation of my own argument by such reactionary politics since I do insist that consent, when it is substantial rather than vacuously formal, must be actually informed and actually non-duressed -- a requirement that demands strong regulation and a substantial provision of social services (as close to universal basic income as we can manage, the widest possible access to reliable public knowledges, and so on) that tend to make my own version of consent-talk unappealing to anti-democratic politics.

Be all that as it may, a shift into consensual therapeutic self-determination is indeed a real shift for democratic-minded progressives to come to terms with, and much of that work remains for now in its barest beginnings.

Among the implications of this shift for me is that progressives need to understand that the familiar technocratic forms of eugenics (for which the transhumanists I decry here represent an extreme case) that would police lifeway diversity into "optimal/normal" forms are matched by conservative forms of eugenics (for which the bioconservatives I decry here represent an extreme case) that would police wanted lifeway diversity into "natural/customary" forms, and that some traditional progressive advocacy language is quite vulnerable to appropriation by anti-democratic politics in these two modes in the aftermath of the shift into non-normalizing therapy.

Further, both superlative and supernative futurological discourses (as well as the more mainstream developmental discourses these sometimes symptomize, sometimes exaggerate, and sometimes pioneer) seem to me to derange our capacity to think about this shift in a reasonable way at a time when it is fairly urgent that we do so.

I am interested in progressive healthcare discourse in an era of emerging non-normalizing therapy. I strongly regret and worry about the extent to which public discourse on these questions has been framed or frustrated by hyperbolizing and faith-based formations of superlativity and supernativity.

I also strongly regret that topics that are perfectly fitting and edifying (if your tastes incline that way, as mine do) for sf literary salons -- such as the question of what kind of coherent narrative subjecthood could be maintained through a completely speculative, radically underspecified prosthetic prolongation, to the tune of centuries, of something akin to what we presently mean by the terms lifespan or consciousness -- are sometimes treated as topics connected in even the remotest way to healthcare policy discourse. Nothing at all good ever comes of such confusions.