Lethal Autonomous Weapons threaten to become a symbol of our age; not in their clinical lethality, but in their evacuation of human responsibility from one of the most profound and terrible of actions, the taking of a human life. They will be an apt symbol for an age in which we will grow increasingly accustomed to holding algorithms responsible for all manner of failures, mistakes, and accidents, both trivial and tragic. Except, of course, that algorithms cannot be held accountable and they cannot be forgiven.

Crucial to Sacasas' reflections on this phenomenon is a rich reading of Hannah Arendt's worry about the adequacy of thought to judge novel technoscientific developments. (A concise formulation of that argument appears in her essay "The Conquest of Space and the Stature of Man.") Of course, the combination of the "killer robot" topic with its taking up of an Arendtian framework makes Sacasas' piece simply irresistible to me. For my own sense of the indispensability of Arendt to the thinking of technodevelopmental historical struggle, I recommend the fourth and final sections of my essay "Futurological Discourses and Posthuman Terrains."
We cannot know, either exhaustively or with any degree of certainty, what the introduction of Lethal Autonomous Weapons will mean for human society, at least not by the standards of techno-scientific thinking. In the absence of such certainty, and because we do not seem to know how to think or judge otherwise, they will likely be adopted and eventually deployed as a matter of seemingly banal necessity.
All that said, I do want to point out that Arendt sometimes took the futurological assertions she otherwise critiqued so forcefully at face value, in a way that skewed her judgments of them in my view. She seemed to accept the futurological predictions that automation would soon emancipate majorities from labor to leisure, that genetic medicine would soon deliver humans youthful lifespans beyond a century, and that intelligent robots would soon replace soldiers on the battlefield. (Her credulity on the first two claims is to be found in the famous Prologue to The Human Condition; the third appears in both versions of "On Violence," about which I say more in a moment.) Of course, Arendt had enormously insightful things to say about the costs and risks of such outcomes where their prophets mostly insisted on their benefits, but her critiques accepted the plausibility of the prophecies themselves as novelties appearing in the world for us to judge as such. Arendt's political theory re-orients worldly thinking from its historical concern with biological mortality (with death and threat and economy) to her own concern with biological natality (with birth and beginnings and novelties). However illuminating and transformative this re-orientation may be as a general matter, I do think this emphasis made her more susceptible than need be to marketing hyperbole among Very Serious public intellectuals talking about technology in ways that reiterated the customary norms and forms of American promotional discourses, repackaging the static and the stale as acceleration and novelty. (Indeed, it is this suffusion of public discourse with marketing rhetoric that was, and still is, the novelty demanding our judgment in my view.)
Be that as it may, we do Arendt an ill turn if we hear in her talk of the rupture of tradition at the end of philosophy anything like the glib opportunism of today's "tech disruptors," or, worse, of those superlative futurologists who would hyperbolize this all-too-familiar disruption into "The Singularity."
I replied to Sacasas at his blog, and I hope he finds my comments congenial, but I am posting them here as well (you really should read his post first, by following the link at the beginning; it is enormously more elaborated than my sketch here suggests):
Of course the use of these weapons systems is a problem, but I think it is enormously important that we not abet it by accepting the framing of its boosters. That algorithms cannot be held accountable for crimes is not the novelty introduced by these weapons, and is no more interesting than the fact that bullets cannot be held accountable either. What matters is that human coders CAN be held accountable, as can the human funders of the code, the human owners of the machines running the code, and the human officials in institutions implementing the policies (many in the name of us humans) releasing these war machines into the world. The displacement of agency from the human action these machines mediate onto the machines themselves is commonplace among both the advocates and the critics of these developments.
You are right to turn to Arendt in this context, and I would recommend "On Violence" as the key text, in which Arendt begins with the observation that violence needs tools and comes to propose that in violence we think of others as tools and so become tools ourselves (her deeper point is that the tradition of political/antipolitical philosophizing mistakes this transaction for the essence of power, when it is quite the opposite). Anyway, the reason the piece is especially relevant to your point is that in its longer published version "On Violence" connects these theses to a critique of the then-fledgling archipelago of corporate-military think tanks, and makes the point that their problem is not that they "think the unthinkable" (the title of a fine and influential critique of the time) but that they are not thinking at all. By this she does mean "thoughtlessness" in the sense that interests you, and she says fairly specific things about its character as extrapolative rather than imaginative, arguments that connect with her prescient discussion of computation in The Human Condition and with the claims about judgment that preoccupied the last years of her life. I think it is probably wrong to assimilate those critiques to the proposal of "thinking without bannisters" she elaborated for thought at the end of philosophy.
For me, so-called "autonomous weapons systems" (it matters that they are not actually autonomous) connect to older problems of the facilitation of human violence through the technological alienation of its perpetrators from its victims (remote operators, animated displays, bureaucratic casualty statistics), and to the deceptive rhetoric investing lethal weapons that kill innocent civilians with "smartness" and "precision" (an old story re-enacted by the present killer robot discourse) or divesting the dead of the reality of their lives with "collateral damage" and the promiscuous assignment of the label "terrorist." Again, these are not new strategies, and critics make a mistake when they accept the hype of boosters that these new systems are without precedent. You would be the first to recognize the wish-fulfillment fantasies mobilized by techno-transcendental discourses. The way to respond to the daydreams of the techno-priests is not to re-frame them as nightmares but to expose the irreality of the dream, to bring tech-talk back to earth, and to insist on worldly judgments of worldly actions on worldly terms.