Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Friday, March 10, 2006
Technology and Terror
Social discontent provoked by the experience of injustice is a primary trigger of violence and unrest. But we are fast approaching -- if we're not already there -- the extraordinary moment when technologies of abundance, intelligently administered, could provide new means to alleviate at last the sources of such discontent. Meanwhile, these same technologies will also provide new and relatively cheap means to express discontent with unprecedented destructive impact.
I worry that while both of these points are understood well enough on their own, they are rarely discussed together. This matters because what is potentially emancipatory about new technologies is inseparable from what is potentially devastating about them. And so: The very same digital networks that facilitate global communication, collaboration, and trade peer-to-peer render us vulnerable to global attacks from viruses, scams and spam. The very same science and technology that could revolutionize medicine could revolutionize biowarfare. The very same hypothesized capacity for nanoscale self-replication that could eliminate poverty and heal the biosphere also inspires panicky visions of a world reduced to goo.
But the connection I have in mind here goes deeper than the simple recognition that new technologies bring both new powers and new risks.
I believe that the power of emerging technologies to redress the sources of legitimate social discontent -- to end global poverty, to promote universal health and education and to develop abiding, genuinely representative and accountable public institutions -- provides the only way to manage the lethal power of emerging weapons of mass destruction, as well as the relative ease with which they could find their way into the hands of those who would express or exploit such discontent.
Contemplating Insane Destructiveness
New technologies will be unprecedented in their creative and their destructive power, as well as in their ubiquity, and this changes everything.
Two short essays, one by Lawrence Lessig in the April 2004 edition of Wired magazine and the other by Richard Rorty in the April 1, 2004 issue of the London Review of Books, address this in interestingly similar terms.
These essays look at the problem of the likely near-term development and proliferation of relatively cheap and massively destructive new technologies such as bioengineered pathogens (Lessig) and suitcase nukes (Rorty).
"Key technologies of the future -— in particular, genetic engineering, nanotech, and robotics (or GNR) because they are self-replicating and increasingly easier to craft —- would be radically more dangerous than technologies of the past," writes Lessig in terms that evoke an earlier essay by Bill Joy, but the technophobic conclusions of which Lessig significantly rejects. "It is impossibly hard to build an atomic bomb; when you build one, you've built just one. But the equivalent evil implanted in a malevolent virus will become easier to build, and if built, could become self-replicating. This is P2P (peer-to-peer) meets WMD (weapons of mass destruction), producing IDDs (insanely destructive devices)."
Rorty writes in a similar vein that "[w]ithin a year or two, suitcase-sized nuclear weapons (crafted in Pakistan or North Korea) may be commercially available. Eager customers will include not only rich playboys like Osama bin Laden but also the leaders of various irredentist movements that have metamorphosed into well-financed criminal gangs. Once such weapons are used in Europe, whatever measures the interior ministers have previously agreed to propose will seem inadequate."
It is probably inevitable that discussions of the threat of weaponized emerging technologies will reflect the distress of the contemporary so-called "War on Terror." But it is important to recognize that present-day terrorism, however devastating, is a timid anticipation of the dangers and dilemmas to come. The March 11, 2004 Madrid attacks made use of conventional explosives, and the September 11, 2001 attacks in the United States involved the crude hijacking and repurposing of fuel-fat jets as missiles.
To the extent that these attacks have provoked as a response (or worse, have provided a pretext for) "preemptive" and essentially unilateral military adventures abroad, and assaults on civil liberties at home, it is increasingly difficult to maintain much hope that we are mature enough as a civilization to cope with the forces we have ourselves set in motion.
Regulation Between Relinquishment and Resignation
Both Lessig and Rorty anticipate that, when confronted with the horrifying reality or even simply the prospect of new technological threats, the first impulse of the North Atlantic democracies is almost certain to be misguided compensatory expansions of state surveillance and control.
Both essays point to the likely futility of such efforts to perfectly police the creation and traffic of unprecedented technologies. In the worst case, with Lessig's designer pathogen or with the goo bestiary that preoccupies the nightmares of nanotech Cassandras (and don't forget the actual story: Cassandra was right!), we are confronted with the prospect of new massively destructive technologies that might be cooked up in obscure laboratories at comparatively modest cost, using easily obtainable materials, employing techniques in the public domain, and distributed via stealthy networks.
In the Bill Joy essay that inspired Lessig's piece, the epic scale of the threats posed by emerging technologies prompted Joy to recommend banning their development altogether. The typical rejoinder to Joy's own proposal of "relinquishment," of a principled (or panic-stricken) pre-emptive ban on these unprecedentedly destructive technological capacities, is that it is absolutely unenforceable, and hence would all too likely shift the development and use of such technologies to precisely the least scrupulous people and least regulated conditions. And all of this would, of course, exacerbate the very risks any such well-meaning but misguided ban would have been enacted to reduce in the first place.
I certainly agree with this rejoinder, but it's important not to misapply its insights. The fact that laws prohibiting murder don't perfectly eliminate the crime scarcely recommends that we strike these laws off the books. If Joy's technological relinquishment were the best or only hope for humanity's survival, then we would of course be obliged to pursue it whatever the challenges.
But surely the stronger reason to question relinquishment is simply that it would deny us the extraordinary benefits of emerging technologies -- spectacularly safe, strong, cheap materials and manufactured goods; abundant foodstuffs; new renewable energy technologies; and incomparably effective medical interventions.
Technophiles often seem altogether too eager to claim that technological regulation is unenforceable, or that developmental outcomes they happen to desire themselves are "inevitable." But of course the shape that development will take -- its pace, distribution, and deployments -- is in fact anything but inevitable. And all technological development is obviously and absolutely susceptible to regulation, for good or ill, by laws, norms, market forces and structural limits.
Market libertarian technophiles such as Ronald Bailey sometimes seem to suggest that any effort to regulate technological development at all is tantamount to Joy's desire to ban it altogether. Bailey counters both Joy's relinquishment thesis and Lessig's more modest proposals with a faith that "robust" science on its own is best able to defend against the threats science itself unleashes. This is an argument and even a profession I largely share with him, but only to the extent that we recognize just how much of what makes science "robust" is produced and maintained in the context of well-supported research traditions, stable institutions, steady funding and rigorous oversight, most of which look quite like the "regulation" that negative libertarians otherwise rail against. For me, robust scientific culture looks like the fragile attainment of democratic civilization, not some "spontaneous order."
So too "deregulation" is a tactic that is obviously occasionally useful within the context of a broader commitment to reform and good regulation. But treated as an end in itself the interminable market fundamentalist drumbeat of "deregulation" -— so prevalent among especially American technophiles —- amounts to an advocacy of lawlessness. Does this really seem the best time to call for lawlessness? Market libertarian ideologues often promote a policy of "market-naturalist" resignation that seems to me exactly as disastrous in its consequences as Joy's recommendation of relinquishment.
In fact, the consequence of both policies seems precisely the same -- to abandon technological development to the least scrupulous, least deliberative, least accountable forces on offer. My point is not to demonize commerce, of course, but simply to recognize that good governance encourages good business practices and discourages antisocial ones, while a healthy business climate is likewise the best buttress to good democratic governance.
While I am quite happy to leave the question of just which toothbrush consumers prefer to market forces, it seems to me a kind of lunacy to suggest that the answer to coping with emerging existential technological threats is, "Let the market decide." What we need is neither resignation nor relinquishment, but critical deliberation and reasonable regulation. What we need is Regulation between Relinquishment and Resignation (RRR).
Resources for Hope?
Lessig and Rorty make different but complementary recommendations in the face of the dreadful quandaries of cheap and ubiquitous, massively destructive emerging technologies. Taken together, these recommendations provide what looks to me like the basis for a more reasonable and hopeful strategy.
Rorty insists, first and foremost, that citizens in the North Atlantic democracies must challenge what he describes as "the culture of government secrecy":
"Demands for government openness should start in the areas of nuclear weaponry and of intelligence-gathering," which are, he points out, "the places where the post-World War Two obsession with secrecy began." More specifically, we must demand that our governments "publish the facts about their stockpiles of weapons of mass destruction [and] make public the details of two sets of planned responses: one to the use of such weapons by other governments, and another for their use by criminal gangs such as al-Qaida."
He goes on to point out that "[i]f Western governments were made to disclose and discuss what they plan to do in various sorts of emergency, it would at least be slightly harder for demagogic leaders to argue that the most recent attack justifies them in doing whatever they like. Crises are less likely to produce institutional change, and to have unpredictable results, if they have been foreseen and publicly discussed."
Never has the need for global collaboration been more conspicuous. Never has the need to unleash the collective, creative, critical intelligence of humanity been more urgent. And yet the contemporary culture of the "War on Terror" has seemed downright hostile to intelligence in all its forms. Efforts to understand the social conditions that promote terror are regularly dismissed as "appeasement." Critical thinking about our response to terror is routinely denigrated as "treason." Authorities strive to insulate their conduct from criticism and scrutiny behind veils of secrecy in the name of "security." (And all of this is depressingly of a piece, of course, with the current Bush Administration's assaults on consensus environmental science, genetic research, effective sex education, and all the rest.)
It is no wonder so many of us fear the "War on Terror" quite as much as we fear terrorism itself. But how much more damaging than the self-defeating and authoritarian responses to conventional terrorism can we expect the response to the emerging threats of Lessig's "Insanely Destructive Devices" to be?
When devastating technologies become cheap and ubiquitous we must redress the social discontent that makes their misuse seem justifiable to more people than we can ever hope to manage or police. Since we cannot hope to halt the development of all the cheap, disastrously weaponizable technologies on the horizon, nor can we hope to perfectly control their every use, Lessig suggests that "perhaps the rational response is to reduce the incentives to attack... maybe we should focus on ways to eliminate the reasons to annihilate us." Fantasies of absolute control over these technologies, or of absolute control through technology (SDI, TIA, and their epigones, anyone?), are sure to exacerbate the very discontent that will make their misuse more widespread.
Anticipating the inevitable objection, Lessig is quick to point out that "[c]razies, of course, can't be reasoned with. But we can reduce the incentives to become a crazy. We could reduce the reasonableness -- from a certain perspective -- for finding ways to destroy us." Criminals, fanatics and madmen are in fact a manageable minority in any culture. (Racist know-nothing slogans to the contrary about a so-called epic and epochal "Clash of Civilizations" deserve our utter contempt.) Although there is no question that Lessig's "Insanely Destructive Devices" could still do occasional, irreparable harm in their hands, it is profoundly misleading to focus on the threats posed by crazy and criminal minorities when it is as often as not the exploitation of legitimate social discontent that makes it possible for lone gunmen to recruit armies to their "causes."
Lessig concludes that "[t]here's a logic to p2p threats that we as a society don't yet get. Like the record companies against the Internet, our first response is war. But like the record companies, that response will be either futile or self-destructive. If you can't control the supply of IDDs, then the right response is to reduce the demand for IDDs. [Instead, America's] present course of unilateral cowboyism will continue to produce generations of angry souls seeking revenge on us."
For generations, progressives have sought to ameliorate the suffering of the wretched of the Earth. We have struggled to diminish poverty, widen the franchise, and ensure through education and shared prosperity that more and more people (though still obscenely too few people) have a personal stake as citizens in their societies. We have fought for these things because we have been moved by the tragedy of avoidable suffering, and by the unspeakable waste of intelligence, creativity and pleasure that is denied us all when any human being is oppressed into silence by poverty or tyranny.
The emerging threat of cheap and ubiquitous, massively destructive technologies provides a new reason to redress social injustice and the discontent it inspires (for those among you who really need another reason): The existence of injustice anywhere might soon threaten you quite literally, and needlessly, with destruction.
This piece is a slightly edited version of a column that appeared at BetterHumans, April 28, 2004.
1 comment:
The whole language of "crazies" is taken from the Lessig piece itself. What I was trying to react against by generalizing his point in the passage to which I think you are responding here, was the very widespread rhetoric in which "terrorism" was/is attributed to the presumed "craziness" of whole regions and monolithically-construed cultures.
While I consider such rhetoric to be the worst kind of bloody-minded madness, I fear it isn't a straw man in the least. Just listen to conservative talk radio to find endless variations on these pathologizing racist and nationalist themes.
I think this was even more conspicuous when I first wrote the piece a couple of years ago -- before the Killer Clown College began its current conspicuous crumble and the know-nothing bullies were still full of chickenhawk swagger and smug war-profiteering.
Of course I agree with your initial point that disincentivizing terror is the way to go -- which is why I made it myself! And of course I agree with you, too, that it would be loony to claim that strong but imperfect success in fighting terror from such disincentivization is somehow grounds for jettisoning the approach...
But think back to the derision that greeted proposals to treat global terror as a policing issue rather than a war issue, to all the talk about never backing down to the "beheaders," about bring-it-on tough talk being the only way to deal with "freedom haters," and all the rest of that brainless bullshit, and you will remember all too well the "arguments" to which I was trying to respond as judiciously as I knew how at the time in the original column.