Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, June 30, 2006

When Robots Kill

The folks over at Meme Therapy did one of their Brain Parade columns yesterday -- in which they ask a handful of people for their responses on some topic or other. The latest question: "The military is increasingly using robotic technology. What kinds of ethical considerations should we be making before we automate killing?" This was my own, rather off-the-cuff, response:
Well, I think ethical considerations should compel us to reject the automation of killing altogether. Ethics also has something to say about the social costs of the war addiction of our bomb-building elites, and about the long-term personal and social costs imposed by the brutal roboticizing process that transforms citizens into soldiers in the first place. You know, killing a human being should simply never seem easy. It's so obvious it sounds sanctimonious to point it out, but there it is. And since we're having this exchange in a time of war, it should be said often and loudly as well that we definitely know we're in trouble when so many of our elected representatives sound glib at best when they say war is a last resort. Every war is a disaster, every war is a defeat -- even when we "win" one. Wars of choice like the current catastrophic Iraq adventure especially bespeak an almost unfathomably profound breakdown of the ethical imaginary.

The automation of mass violence -- via mass media distraction, via the video-gamization of weaponry, via the neuroceutical modification of soldiery -- is an extraordinary intensification of the techniques of training and drill that have long functioned as a ritual instrumentalization of the individual soldier. This instrumentalization has everything to do with the obliteration of ethics in the encounter of subjects in a war-zone and its replacement with an encounter between objectified no-longer-quite subjects. The outright roboticization of militarism is a step along a tragic trajectory rather than the appearance of something altogether new.

I was intrigued by this comment to my response (it appeared within minutes): "We're a few thousand years too late for that. The evolution of weapons from stick to spear to sword to machine-gun is all about making it more efficient. I can't see a world where we lay down rifles for rocks, which is where your thoughts would lead us. Nice to think about, just isn't going to happen."

I personally don't think that weapons "evolve" at all, and I worry that an awful lot of important stuff falls out of any story you might want to tell about the history of killing tools in which "efficiency" is what matters most. My own hope is that something like greater respect for the Geneva Conventions is where my thought would lead us, rather than switching rifles for rocks (which doesn't seem to me that dire a prospect to contemplate, come to think of it). But anyway, the somewhat smug dismissal of what I had to say as empty naive abstraction ("Nice to think about, just isn't going to happen.") rankled me a little, since what seems to be dismissed here is "ethical considerations" as such, which was kinda sorta the premise of the question at hand.

I can watch it happen time after time after time, and still I find as fascinating as ever the joyless spectacle in which a technologist resigns the field of "ought" for some parochial characterization of "is" and convinces himself that this transaction somehow constitutes progress toward solving a problem rather than simply massively missing the point. The key symptomatic moment for me is the statement "I can't see a world," which precedes the "hardboiled" rejection of ethical considerations. Just so. The dismissal of "ought" (which I think is doubly indicated by the substitution of an evolutionary for a deliberative vocabulary as well as by the foregrounding of "efficiency") constitutes an impoverishment rather than an improvement of intelligence, one which soon enough constrains what will be available to us in the way of "is." Why this consequence would be imagined to be useful or pragmatic in any way is altogether beyond me.

1 comment:

Dale Carrico said...

Hi, Jose. I don't think you seem cynical at all. The truth is, your expectations on this issue are not far from my own. I guess the larger point I'm trying to remind people of is that there's a difference between what we sensibly expect and what we hope for (ethics) or demand (politics) from the world. It's easy to forget just how often the things we sensibly expect turn out to be wrong precisely because some people's hopes and demands changed the world in unexpected ways.