Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Friday, April 22, 2005
Chapter Two: Markets from Math
The personal life of every individual is based on secrecy, and perhaps it is partly for that reason that civilized man [sic] is so nervously anxious that personal privacy should be respected. -- Anton Chekhov
It is insufficient to protect ourselves with laws, we need to protect ourselves with mathematics. -- Bruce Schneier
One: Weaving Nets, Smashing States
I. The "First Generation" of Cyberspatial Theory
II. Taking the First Generation Seriously
III. "California Ideology" Among the First Generation
Two: Arguments from Inevitability and from Desire
IV. Manifesto
V. What Is Manifest
VI. P2P, Not Anarchy
VII. Afterward
Three: Liber-Tech
VIII. Techniques of Secrecy
IX. Building Resistance In
X. e2e
Four: The Discretionary: Secrecy, Privacy, and Control
XI. From Privation to Discretion
XII. Description As Threat
XIII. Privacy Under Control
XIV. Digital Libertarianism
Go to Pancryptics Table of Contents
XIV. Digital Libertarianism
And now I am ready to return briefly, by way of conclusion, again to James Boyle’s essay, “Foucault in Cyberspace,” written in 1997 and available online. In this essay, recall that Boyle delineates and critiques the “set of political and legal assumptions” that he calls “the jurisprudence of digital libertarianism.”
He proposes as key elements of this perspective both a belief “about the state's supposed inability to regulate the Internet” and “a preference for technological solutions to hard legal issues on-line.” Both of these propositions should be quite familiar by now. And there is no question that it is largely in consequence of the article of faith expressed in the first of these propositions that the libertarians about whom Boyle is writing come to express the preference registered in the second.
Boyle makes what he calls a “familiar criticism” of what he is labeling digital libertarianism, namely, that its advocates manifest a certain “blindness towards the effects of private power” deriving from their rather monomaniacal focus on abuses of state power, but then appends to this the “less familiar” criticism that “the conceptual structure and jurisprudential assumptions of digital libertarianism lead its practitioners to ignore the ways in which the state can often use privatized enforcement and state-backed technologies to evade some of the supposed practical (and constitutional) restraints on the exercise of legal power over the Net.” Boyle’s point is that techno-libertarians like the cypherpunks have been distracted by their monolithic conception of an overbearing sovereign state from grasping the diffuse, fine-grained, multilateral, technical mechanisms through which authorities, whether governmental or not, exercise oppressive power over individuals in fact.
When such libertarians think of law, Boyle declares, “they tend to conjure up a positivist, even Austinian image.” For them it would seem that “law is a command backed by threats, issued by a sovereign who acknowledges no superior, directed to a geographically defined population which renders that sovereign habitual obedience.” And for Boyle this is significant because it makes libertarian technology enthusiasts cockier and hence considerably more vulnerable than they would otherwise be. “[T]hey think of the state's laws as blunt instruments incapable of imposing their will on the global subjects of the Net and their evanescent and geographically unsituated transactions,” he writes. “Indeed, if there was ever a model of law designed to fail at regulating the Net, it is the Austinian model.” But, as luck would have it, such an “Austinian model is both crude and inaccurate…” for the “actual experience of power [is] exercised through multitudinous non-state sources, often dependent on material or technological means of enforcement.”
This disastrously distracting “Austinian image” of the fantastic sovereign State will be a familiar one by now, since it corresponds exactly to an image of the sovereign individual about whom I have written already at length. It is intriguing to contemplate the prospect that the libertarian project simply stages an interminable confrontation between what amount to two utterly illusory antagonists -- two sovereign models of agency, one public, one private -- two presumably perfect self-sufficiencies, each of which in fact depends for its intelligibility and normative force on the ongoing conjuration of the other.
And so, when libertarians obsessively array the individual against the state, they find the very image of the unattainable efficacious autonomy to which they aspire in the image of the State itself, even while it is to this image of monstrous and monolithic power that they assign the primary or even solitary blame for their failure to attain it. For libertarians it sometimes seems as if the agencies of the individual and the State endlessly both mirror and antagonize one another, so that the shoring up of privacy becomes not so much a withdrawal of the individual from public life as a withdrawal from the public of a life-giving power of which the individual is otherwise always only deprived by the State. It is to networks, I will suggest, that we may turn to better discern the image of an empowering (inter)dependency that confounds the bind of these imaginary, interminably contending sovereign agencies. But before I turn from the pancryptic libertarian-cypherpunk utopia/dystopia altogether, I want first to discuss its curious complement, the panoptic “cheerful libertarian” utopia/dystopia of David Brin’s The Transparent Society.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
XIII. Privacy Under Control
It is interesting to notice that while Hughes initially figures his project as relatively modest and defensive, within the space of only a few pages his ambitions have taken on the superlative, transcendentalizing cadences into which technophilia seems almost inevitably to be drawn.
At the outset, Hughes evokes the scene of a privacy the value of which is generally affirmed and widely enjoyed. “People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers.” But this familiar and bucolic world of privacy is newly threatened by the unprecedented technological empowerment of unscrupulous authorities. The relative anonymity of cash purchasing -- “[w]hen I purchase a magazine at a store and hand cash to the clerk, there is no need to know who I am” -- has been displaced by electronically mediated purchasing in which “my identity is revealed by the underlying mechanism of the transaction.” The modest annoyances of gossip are exacerbated by digital networked communication into deeper threats: “Information is fleeter of foot, has more eyes, knows more, and understands less than Rumor.” Given these new threats, Hughes proposes that “[w]e must defend our own privacy if we expect to have any…”
But in the space of just two sentences Hughes’s tone changes extraordinarily. Rather than representing an unprecedented threat to the general enjoyment of modest privacies, technology becomes instead an engine through which Hughes imagines an unprecedented expansion and augmentation of privacy, and of the private agentic selves that would exercise it more perfectly: “The technologies of the past did not allow for strong privacy, but electronic technologies do.” It is finally this “strong” privacy that Hughes is championing in his essay, a privacy and a private self that is not so much tremulous as tremendous, shored up and rendered invulnerable by encryption, rendered more perfectly autonomous by anonymity, a sovereign self enthroned at the scene of decision.
The initial plausibility of Hughes’s inaugural claim that “[p]rivacy is necessary for an open society in the electronic age” has itself come to look less sure-footed by now. In just what does the “openness” consist in Hughes’s conception of an “open society” that would facilitate privacy in his understanding of it? Has Hughes mistaken for an “open” society simply one in which all the intentions of private actors that are registered in contractual, deliberate, explicitly discretionary terms are fully respected? What manner of openness, after all, readily reconciles with his desire for such strong personal control over the terms of public information and social interaction?
Hughes insists early on that “freedom of speech, even more than privacy, is fundamental to an open society,” and so “[w]e seek not to restrict any speech at all.” On the contrary, cypherpunks would radically impoverish spontaneous sociality and through ubiquitous encryption restrict instead the circumstances in which anyone could have occasion to speak in untoward ways at all! Or as Hughes points out, “to reveal one’s identity with assurance when the default is anonymity requires the cryptographic signature” -- which is just to say that under the hypothesized state of crypto-anarchy it would be literally impossible to definitively appear in public at all except by deliberate intention and in terms that are explicitly under one’s control. But just how “open” finally would we call a society that actually managed to so perfectly privatize the terms of the public disclosure of selves?
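The mechanism Hughes is invoking here can be made concrete with a short sketch. The following is a minimal illustration in Python of a cryptographic signature, assuming the third-party cryptography package (my choice of tool, not anything named in the manifesto): when anonymity is the default, identity attaches to a message only through the deliberate act of signing it with a key its holder alone controls, and anyone holding the corresponding public key can then verify that deliberate act.

```python
# A minimal sketch (not Hughes' own example) of signing and verification,
# assuming the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # kept secret by its holder
verify_key = signing_key.public_key()        # may be published to anyone

message = b"I choose, on this occasion, to be identifiable."
signature = signing_key.sign(message)        # the deliberate act of self-revelation

try:
    # Anyone with the public key can confirm the key's holder signed this message.
    verify_key.verify(signature, message)
    print("verified: the signer has elected to appear under this key")
except InvalidSignature:
    print("not verified: no assured identity attaches to this message")

# An unsigned message carries no such assurance; absent the signature,
# the default remains anonymity.
```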
In making his case for the promotion of privacy, Hughes repeatedly makes recourse to this register of the discretionary and to the primacy of explicit intentions: “Privacy is the power to selectively reveal oneself to the world.” … “[W]e must ensure that each party to a transaction can have knowledge only of what is directly necessary for that transaction.” … “An anonymous system empowers individuals to reveal their identity when desired and only when desired; this is the essence of privacy.” … “If I say something, I want it heard only by those for whom I intend it.”
Certainly, at least part of an informational construal of privacy amounts to an ongoing demand that one’s intentions be respected in matters of the public circulation of at least some kinds of personal information. But I submit that the participants in social transactions are hardly always in a position to know themselves the terms of disclosure that are “directly necessary” to any given transaction. I submit that because our actions have unintended and unforeseeable consequences, for both good and ill, and because self-knowledge is imperfect and incomplete, to say the least, one is never in fact in a position to know fully what one intends in the matter of disclosing oneself in public; one never knows completely for whom one intends one’s descriptions to be heard; one cannot always know when another’s speech will be (or in fact was) welcome or unwanted, or when the susceptibility to description otherwise than one intends will be far from threatening, but rather emancipatory, redemptive, or deeply pleasurable.
For Hughes privacy is discretionary, a kind of deliberate act, and just as it is the case that “[t]o encrypt is to indicate the desire for privacy” -- indeed “encryption is [the] fundamentally… private act” -- Hughes also intriguingly suggests that “to encrypt with weak cryptography is to indicate not too much desire for privacy.” To the extent that technological development is ongoing, and hence that strong technologies are constantly rendered weaker by the development of more powerful technologies over time, it is interesting that Hughes seems to invite the implication here that the intelligible indication of a desire for privacy might therefore require an interminable maintenance of the most sophisticated and powerful technologies on offer, since obsolescence might be taken as a signal that one’s desire for privacy is on the wane. Imagine a hacker who, in uncovering a vulnerability in a hitherto secure system, discerns thereby the “intention” of her victim to be exposed to attack in the first place. The solitary, controlled and controlling, superlatively prostheticized cypherpunk would proceed, then, from what might seem a somewhat hyperbolic inaugural anxiety about a threatening susceptibility to indiscriminate public disclosure to an arms race of interminable augmentation, for which any relaxation might be construed as the disclosure of a literal invitation to devastation.
In The Human Condition, Hannah Arendt wrote that “[t]o live an entirely private life means above all… to be deprived of the reality that comes from being seen and heard by others, to be deprived of an ‘objective’ relationship with them that comes from being related to and separated from them through the intermediary of a common world of things.” Hughes and the rest of the cypherpunks would protest no doubt that they do not mean to withdraw entirely into private seclusions, but to gain through encryption techniques a renewed measure of control over the terms in which they appear in public. But I think that the terms of the control they seek over social interactions would altogether eliminate their spontaneity (the price of which, after all, as the cypherpunks themselves warn again and again, is to take on a real and abiding vulnerability to others) and so substitute for the “objectivity” of an Arendtian improvisatory, collaborative negotiation of a world in common the interminable expressions of canned subjectivities, of atoms in the void.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
XII. Description As Threat
Where Hughes’ own case for privacy departs from such conventions in my view is in its axiomatic stringency, and in the extremity of its proposed applications. Consider the example through which Hughes first seeks to illustrate his sense of just what privacy consists of, and by what it might be threatened. “If two parties have some sort of dealings,” writes Hughes, conjuring up sociality at a very general, even exhaustive, level, “then each has a memory of their interaction.” Further, obviously enough, “[e]ach party can speak about their own memory of this [interaction].” Without pausing even to begin a new sentence, Hughes then asks what seems to me a rather curious question (it immediately follows the last statement with a semicolon): “[H]ow could anyone prevent it?”
I wonder myself, to the contrary, whether it would occur to many people at all to desire, in any general way, to prevent other individuals from testifying to their own memories of events in which we have likewise taken part -- even if, say, one might easily imagine isolated instances in which one would desire such a thing, for fear of embarrassment or the like.
Hughes, however, seems to regard this perceived basic threat of being described by others on terms that are not fully under one’s control as a commonplace one, widespread enough in fact to provide a firm foundation from which to erect his more general case. “If many parties speak together in the same forum, each can speak to all the others and aggregate together knowledge about individuals and other parties,” he continues on. “The power of electronic communications has enabled such group speech,” and he then adds, “it will not go away merely because we might want it to,” clearly implying that indeed many (or even all: “we”) might so want it to.
“Since any information can be spoken of, we must ensure that we reveal as little as possible,” Hughes proposes, as if this were the most obvious suggestion in the world.
But isn’t it often, even usually, the case that it is perfectly harmless or even desirable that personal information be spoken of? What could be more commonplace? Even granting that there are conspicuous occasions when an unwanted disclosure of personal information is unappealing, it seems hyperbolic to generalize from these cases to the stronger claim that to be susceptible to description is to be intolerably vulnerable as such. In Hughes’ account it is almost as if the susceptibility to description, which would seem to say the least an ineradicable dimension of public life in its most elementary characterization, is always an intolerable violation, come what may, whatever form it takes, with whatever consequences.
Hughes points out that one “cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence.” This is because “[i]t is to their advantage to speak of us, and we should expect that they will speak.” Here, once again, the violation of privacy in such a formulation seems less a matter of exposure to unwanted scrutiny than it is a matter particularly of bearing the burden of unwanted description.
And while it seems perfectly reasonable to want to be protected from inaccurate, misleading, fraudulent, or literally threatening descriptions that organizations might circulate to facilitate our manipulation or control, it scarcely seems right to imply that whenever it is to the advantage of an organization or individual to speak of us, it will likewise be to our disadvantage to be so spoken of.
To take an example that tends to loom large in market libertarian accounts of properly functioning social orders, it is precisely because we do not control entirely the terms in which we are publicly described that “reputations” exert a normative force impelling better conduct from public actors, many of whom could no doubt produce public rationalizations to retroactively justify any conduct of their own, however unappealing. Again, it is difficult to generalize from a concern with particular abuses of description to a denigration of the public susceptibility to description as such.
Foundational to Hughes’ own case is a forceful distinction of privacy from secrecy. “Privacy is not secrecy,” he simply declares in his Manifesto’s second sentence. Although few would be tempted to collapse the meaning of the two in any case, Hughes manages to distinguish them in terms that render both concepts importantly idiosyncratic. “A private matter is something one doesn’t want the whole world to know,” he goes on to write, “but a secret matter is something one doesn’t want anybody to know.”
This might seem to suggest that the difference between secrecy and privacy for Hughes is simply a register of the degree to which one is exposed to uninvited scrutiny and so susceptible to unwanted description. But the discussion of privacy that follows in the piece belies any suggestion that privacy is just a matter of a more moderate or tentative withdrawal from the public sphere than the more absolute withdrawal that secrecy seems to be for him.
“If the content of my speech is available to the world,” writes Hughes later in the essay, then “I have no privacy.” Note that Hughes does not suggest that one’s privacy is diminished by the accessibility of one’s speech, but utterly obliterated by this accessibility. Elsewhere, he writes that “[w]hen my identity is revealed by the underlying mechanism of [a] transaction, I have no privacy.” Once again, the phrasing here suggests not an impairment or diminution of privacy, but its obliteration.
Now, it is certainly true that growing numbers of people have left discernible traces of their purchasing habits, say, by using credit cards in their own names over telephone lines or via the Web, and likewise of their reading habits by consenting to register by name before accessing online versions of mainstream news-sources and popular magazines and the like. And it is true that by means of such traces the agents of corporations, governments, and other organizations can now discern patterns from which to compile personal profiles of extraordinary depth and predictive power. But even if many people will admit to a certain uneasiness about such unprecedented levels of scrutiny, and worry that new kinds of abuses are now possible that demand serious address in law and in policy, it is not right to suppose that this emerging state of affairs inevitably constitutes an outright obliteration of personal privacy rather than simply signifying a transformation of what privacy is coming to mean in everyday life.
It is not, after all, just corporations and governments and comparably authoritative organizations that pry into our personal affairs in the present day, but more and more often ordinary people who do so in their everyday commerce with one another. It is becoming commonplace for people to do an online search of a person’s name before meeting them for the first time or once a new acquaintance has been made. It isn’t clear that people experience the unprecedented exposure of personal information via the suddenly ubiquitous recourse to Google and other online search engines necessarily as a violation of their privacy at all, so much as a shift in the set of expectations on the basis of which they distinguish what properly counts as the public and the private as such, a shift in their sense of the authority and relevance of publicly available information and the ways in which it connects up to personal identity, and a shift in the sorts of demands they are likely to make of their privacy in the first place.
If nothing else, Hughes’ formulations on privacy, publicity, and secrecy seem to overlook a familiar dilemma of the published, namely, that as often as not publication is compatible with nearly perfect obscurity. To be vulnerable to scrutiny is not necessarily to be scrutinized; that information is available provides no assurance that it will be availed of -- all to the endless frustration of the many people who so crave attention that they provide content online for free.
Again, Hughes defines a “private matter” as something one doesn’t want the whole world to know, when the truth is few if any of us are ever in any kind of position to command the attention of the whole world under any circumstances. Similarly, to say of a “secret matter” that it is “something one doesn’t want anybody to know” seems a curiously implausible amplification of the usual sense of the term. Typically we think of secrecy more as a restriction than as an absolute truncation of the flow of information. Secrets may be clandestine, intimate, or even open, but almost never do they imply perfect silences, which is why so many popular sayings imply that only the dead can keep secrets entirely to themselves.
It seems fairly clear that Hughes’ curious pairing of the “secret” and the “private” in his piece derives, in fact, directly from the special technical use of the terms that describe the functions of the secret and the public keys in the asymmetric encryption systems on which cypherpunks dote so much in so many of their discussions, schemes, and utopian projections.
Recall that public key encryption systems rely for their effectiveness on the unique mathematical pairing of a secret key that must be revealed to no-one and a public key that literally anybody can access. Hughes seems to have taken up these technical usages, and then redeployed versions of them as political categories. “The act of encryption… removes information from the public realm,” writes Hughes later in the piece, which is to say that encryption renders information inaccessible. “[W]e desire privacy,” Hughes confides, conjuring up a “we” that might not in fact be mobilized at all by the idiosyncratic value that “privacy” has come to name for him here, as witness (the whole sentence from which the last quotation was culled): “Since we desire privacy, we must ensure that each party to a transaction ha[s] knowledge only of that which is directly necessary for that transaction….”
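Since so much of the essay’s political vocabulary is borrowed from this technical apparatus, it may help to see the division of labor between the two keys spelled out. Here is a deliberately toy sketch in Python, textbook RSA with tiny primes taken from the standard classroom example and in no way secure (my illustration, not Hughes’ own): the pair (n, e) is the public key that literally anybody may hold, while the exponent d, computable only by someone who knows the secret primes, is the secret key that must be revealed to no one.

```python
# Toy "textbook RSA" with tiny primes, purely to illustrate the secret/public
# pairing the cypherpunk vocabulary leans on. Real systems use enormous random
# primes, padding schemes, and vetted libraries; this sketch is insecure by design.
p, q = 61, 53                  # secret primes, known only to the key's owner
n = p * q                      # 3233: half of the PUBLIC key
e = 17                         # public exponent: (n, e) may be handed to anyone
phi = (p - 1) * (q - 1)        # 3120: computable only from the secret primes
d = pow(e, -1, phi)            # 2753: the SECRET key, revealed to no one

message = 65                   # any integer smaller than n

# Anyone at all may use the public key to encrypt a message to the key's owner,
# thereby "removing the information from the public realm"...
ciphertext = pow(message, e, n)       # 2790

# ...but only the holder of the secret key can bring it back.
recovered = pow(ciphertext, d, n)     # 65 again
assert recovered == message
print("public:", (n, e), "secret:", d, "ciphertext:", ciphertext)
```

The asymmetry is the whole point of the design: publishing the encrypting key costs its owner nothing, because only the withheld key reverses the operation.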
For Hughes, “[t]o encrypt is to indicate the desire for privacy,” but elsewhere he supplements this point with the suggestion that “encryption” itself “is fundamentally a private act.” Privacy on this view is not so much the traditional space of the oikos into which one withdraws for relief and recuperation from the exactions of public life, nor is it a state of dignity, integrity, or security which can be either violated or enjoyed as such, but it emerges in Hughes’s “Manifesto” specifically as a way of acting.
Remember that in Hughes’ definition “privacy” is essentially a power of selective revelation, of self-description, against which he repeatedly arrays the countervailing threat of susceptibility to description by others. For Hughes the actual details that might be exposed in a particular violation of privacy seem a secondary consideration compared to the co-incident circumscription of the agency of the one who would control the terms on which any personal details whatever are revealed and apprehended in public as perfectly as possible.
“When my identity is revealed by the underlying mechanism of [a] transaction,” that is to say, when the terms of a transaction connect particular content to his name in ways that he does not perfectly control himself, then Hughes has insisted, “I have no privacy” at all. This conclusion initially seemed puzzlingly hyperbolic in the absence of a better accounting of the specific information involved in the revelation.
But it is clear now that the specifics of the personal information involved in the example are secondary to the impact of any uncontrolled descriptive or otherwise revelatory alterity at all that plays out against the grain of Hughes’ own self-descriptive agency. To be described on terms beyond one’s control would seem tantamount to an interminable exposure: “I cannot here selectively reveal myself; I must always reveal myself.”
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
XI. From Privation to Discretion
The techno-libertarian discourse of secrecy condenses what would otherwise be separate and scarcely compatible efforts to shore up the efficaciousness and dignity of threateningly needy, fallible, vulnerable, lonely individuals into a single project of reassurance I will describe as the register of the discretionary. To be discreet is to exhibit discernment, judiciousness, and a capacity to hold one’s tongue, while to be discrete is to be separate, integrated, distinct, and whole. And at the soul of discretion, especially in the libertarian formulations that preoccupy me here, I find an endlessly provocative association of secrecy, decision, and individualization.
The Oxford English Dictionary defines discretion as it relates to persons as “showing discernment or judgment in the guidance of one’s own speech and action; judicious, prudent, circumspect, cautious,” and then goes on to elaborate this as a matter especially (emphasis in the original) of secrecy, of one who “can be silent when speech would be inconvenient.” The discretionary as an “act of discerning or judging… decision [my emphasis],” then readily gives way to the conjuration or substantiation of a succession of alleged traits or attainments, first, a “faculty of discerning,” then a “liberty or power of deciding, or of acting according to one’s own judgment or as one sees fit,” even, incredibly, “uncontrolled power of disposal,” and then, finally, the “power of a court of justice, or person acting in a judicial capacity, to decide… as to the punishment to be awarded or remedy to be applied.”
Discretion, discreteness, and secrecy all resonate with the implication of separation, demarcation, and seclusion; from the Latin discretionem, meaning “separation, distinction, and [in] later [usage] discernment,” and secernere, “to separate, divide off.” In this, the discretionary reenacts the gesture of the discourse of privacy more generally, in which (de)privation is figured as inducing positive, productive effects, and for which withdrawal is figured more as invigoration than as diminishment.
With these associations in mind, I want to turn my attention to yet another manifesto, this one aptly titled “A Cypherpunk’s Manifesto.” This brief piece was written by Eric Hughes, was widely circulated online in 1993, and so was book-ended by the publications of the more or less definitive versions of Tim May’s “Crypto Anarchist Manifesto” in 1992 and his essay “Crypto Anarchy and Virtual Communities” in 1994. Taken together, these three texts capture perfectly the flavor of the culture of cypherpunk advocacy at the height of its exuberance (whether irrational or not), and manage as well to specify the reach and relations of their chief claims and categories in a fairly consistently systematic way. It was just a couple of months after the first appearance of Hughes’ manifesto online in March 1993 that he appeared with Tim May (and a third Cypherpunk “founding father,” John Gilmore), bemasked and wrapped in an enormous American Flag on the infamous cover of the second issue of Wired Magazine.
Hughes’ essay begins with the claim that “[p]rivacy is necessary for an open society in the electronic age,” followed immediately by the arresting added qualification that “[p]rivacy is not secrecy.” Hughes later adds to these the third, crucial claim, that “[c]ypherpunks deplore regulations on cryptography, for encryption is fundamentally a private act.” To my mind, the chief conceptual innovation of the cypherpunk viewpoint appears in its clearest and most schematic form in this short piece by Hughes. And that innovation consists in the way these categories of the “secret” and the “open” stand in a relation to “privacy” (in its specifically libertarian-cypherpunk construal) that would supplement, or indeed largely displace, the conventional relation and distinction of the private and the public.
I will argue that this displacement of the standard public/private distinction functions in the service of a project to defend and in fact radically augment an insistently individualist conception of private agency considered predominately as a matter of personal control, a conception of agency which would seem to eliminate almost any recognizably public dimension at all.
Hughes defines privacy in this piece as “the power to selectively reveal oneself to the world,” and this would appear to be a fairly straightforward formulation of what I have described as an informational construal of privacy. The desire for privacy, on this construal, names the desire for a reasonable measure of control (or registers a perceived or feared loss of control) over the disclosure and circulation of personal information, especially information that is potentially harmful in some way, embarrassing, or unfairly damaging to one’s reputation.
Given this focus, Hughes’s initial claim that “[p]rivacy is necessary for an open society,” likewise seems genially uncontroversial. Relentless, unmotivated public scrutiny over all the details of one’s personal life (a state of affairs which has only recently become technically possible as more than an abstract ideal for any non-negligible portion of the population) would seem unduly oppressive and arbitrary to nearly anyone. Democratic societies may enshrine and encourage the value of free expression above most other values, and such societies may demand extraordinary levels of transparency over the conduct of individuals acting on behalf of significant or authoritative institutions to ensure that their citizens maintain the sense of having a real say in the public decisions that affect their lives. But even in such societies it will be commonplace to strive to strike a balance between the demands of these public values and demands to preserve a measure of individual privacy for their citizens, especially in cases for which personal conduct imposes little risk of public harms or social costs.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
X. e2e
The closest thing one can find to a core that would define the “nature” of the internet in any kind of abiding way is a principle which seems to function more than anything else as a formal institutionalization of the ongoing refusal of the internet to have a core or a nature at all. Proposed in a paper called “End-to-End Arguments in System Design” by Jerome H. Saltzer, David Reed, and David Clark, the End-to-End Principle (e2e) states, modestly enough, that “functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level.”
Machines perform any number of functions that may be facilitated by networking them. The Principle distinguishes machines at the “ends” of a network from those within it that constitute the links through which information traverses the network to these ends. When you access the internet, say, the machine you use is a network end from your perspective. The machines on which the information you access via the network resides are ends as well. The end-to-end principle proposes that optimizing a network to facilitate any particular function will likely thereby frustrate the capacity of the network to accommodate as many functions as possible. And this implies that a truly flexible, robust, stable network should instead be one that is as simple and neutral as possible, with intelligence, complexity, and functionality pushed off onto the network’s ends, into its applications.
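To make the abstraction concrete, here is a minimal sketch in Python (my own toy illustration, not anything drawn from the sources discussed here) in which relay nodes merely pass opaque bytes along, while the endpoints alone attach and verify an integrity check. The relays need know nothing about the traffic they carry.

# Toy illustration of the end-to-end idea: relays forward opaque bytes,
# while integrity checking lives only at the endpoints.
import hashlib

def sender(message: bytes) -> bytes:
    # The sending end appends a SHA-256 digest so the receiving end
    # can verify integrity; the network itself never inspects this.
    return message + hashlib.sha256(message).digest()

def relay(packet: bytes) -> bytes:
    # A "dumb" node in the middle: it just passes bytes along.
    return packet

def receiver(packet: bytes) -> bytes:
    body, digest = packet[:-32], packet[-32:]
    if hashlib.sha256(body).digest() != digest:
        raise ValueError("corrupted in transit")
    return body

if __name__ == "__main__":
    packet = sender(b"hello, end-to-end world")
    for _ in range(3):          # three hops, none of them "intelligent"
        packet = relay(packet)
    print(receiver(packet))     # b'hello, end-to-end world'

The point of the sketch is only that the “intelligence” (here, the checksum) lives entirely in the endpoint functions; building it into the relays would buy nothing that the ends cannot already provide for themselves.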
A rather philosophically minded Memo published in 1996 under the auspices of the Network Working Group, RFC (Request for Comments) 1958, entitled “Architectural Principles of the Internet,” discusses the end-to-end principle in a way that raises the stakes considerably. The key paragraph of RFC 1958, under a heading that baldly asks the question, “Is there an Internet Architecture?” offers in answer this pithy formulation: “Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years…. However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network.” Quite apart from canonizing e2e as the heart of the internet, RFC 1958 does so in the context of a conjuration of “the community,” its “tradition,” and its “goal” or ideal (“connectivity”), all considered as inextricable from this question of architecture. What began as a rather straightforward solution to the problem of engineering a more flexible network had come to be freighted with moral and political significance.
When Tim Berners-Lee, among other researchers at CERN (Conseil Européen pour la Recherche Nucléaire), provided a way for documents created according to incompatible formats and standards to link to one another and to be accessed and displayed on any number of machines, and again in an open-ended number of different formats, he was acting as a member of the community celebrated in RFC 1958, driven by the same vision of the internet and what it is good for that the Memo likewise takes for granted. HTTP (the hypertext transfer protocol) and HTML (the hypertext markup language), which together created the World Wide Web at the beginning of the 1990s, run atop the TCP/IP (Transmission Control Protocol/Internet Protocol) that defines the Internet, and they reference, re-enact, and thereby consolidate its commitment to e2e, both as a sound principle of network design and, more emphatically, as an expression of the community ethos that subsequently arose out of the experiences facilitated by TCP/IP.
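The layering can be made vivid with a minimal sketch, assuming only the Python standard library and a reachable public web server (example.com is simply a placeholder): an HTTP request is nothing more than formatted text handed to a TCP connection that the lower layers deliver.

# Minimal sketch of the layering described above: an HTTP request (the Web)
# written directly over a TCP socket (the Internet's transport layer).
import socket

host = "example.com"   # placeholder host; any public web server would do
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:  # TCP/IP underneath
    sock.sendall(request.encode("ascii"))           # HTTP on top
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'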
The same could be said of the decision Philip Zimmermann made to create PGP (or “Pretty Good Privacy”), a public-key encryption software package, and then to publish it online in 1991 and make it available to everyone for free. These actions made Zimmermann the target of an intense and celebrated criminal investigation, highlighting the absurd policy of the United States Government to treat cryptographic software as a munition and impose export restrictions on its circulation as a consequence. Despite the investigation, which was widely condemned as persecutory, PGP became by far the most widely used e-mail encryption program around the world. The government finally dropped its case against Zimmermann in 1996, the very year, remember, that RFC 1958 was published.
There is no question that these technical decisions have functioned in differing measures to reinforce the ongoing association of the internet, in its indefinitely many incarnations and applications, with the end-to-end principle and hence with a broad commitment to a kind of openness and experimentalism. It is likewise true that however widely and deeply shared these values may be, they are governed by specifications of technical architecture quite as much as or more than they are invigorated by the participation of the community that values them. As Lessig puts the point: “The design the Internet has now need not be its design tomorrow. Or more precisely, any design it has just now can be supplemented with other controls or other technology…. The code that defines the network at one time need not be the code that defines it later on. And as that code changes, the values the network protects will change as well.”
As we have seen, the internet, such as it is, is a network of networks. Its provision, its implementation, and the participation in it are driven by contending stakes and diverse stakeholders in multiple locations. There are vast and ramifying pressures that could drive the transformation of the protocols that presently define the various layers of the internet’s architectures -– pressures to maximize corporate profits, pressures to enhance national security, pressures to regulate all sorts of conduct online deemed immoral by passionate advocates.
I have suggested that privacy is a concept deployed in both negative and positive formulations. In this it has both a structural similarity to and a certain affinity with the concept of liberty with which it also shares a comparable primacy in the liberal imaginary. Recall that “negative” privacy names the demand for a freedom from interference from overbearing authorities, while “positive” privacy affirms particular culturally contingent conceptions of individual dignity or integrity. The special quandary of privacy discourse is that, as with liberty, negative conceptions of privacy are figured as neutral and are imagined to enlist a kind of universal assent and hence foundational force, when in fact they importantly rely for their intelligibility and power on the more contingent values and assumptions of positive conceptions which tend to be disdained or even disavowed altogether just as the negative formulations they buttress are mobilized and valorized.
When the end-to-end principle is made to bear more than the weight of pragmatic calculations about ways to make networks “neutral” enough to flexibly accommodate innovation, when the principle is made to signal as well a host of moral and cultural commitments, and especially a commitment to openness figured as the essence of freedom -- then the principle undergoes a curious transubstantiation through which it takes on some of the contours of the dilemma that already shapes these discourses of privacy and liberty. There is absolutely no surprise at all in the fact that self-styled libertarian privacy advocates would find the architectural implementation of “neutrality” via e2e a deeply compelling notion at a number of levels. And neither is it particularly surprising to discern in so much of their enthusiasm the ferocious disavowal of their vulnerability to contingent political processes, their imbrication within contending cultures, and their deep dependencies on the efforts and attainments of others.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
IX. Building Resistance In
Public-key encryption systems soon also demonstrated useful and unexpected applications that had not hitherto been associated with traditional cryptography at all. New forms of authentication for otherwise anonymous transactions were suddenly possible and soon implemented. Digital time-stamping of documents and the creation of reliable “digital signatures” for otherwise anonymous or pseudonymous participants in various kinds of transactions were among the authenticating applications that especially excited the interest of the Cypherpunks.
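As a rough sketch of the digital-signature idea (using the third-party Python cryptography package, my own choice of tool rather than anything the Cypherpunks themselves specify), a pseudonymous author can sign a text with a private key, and anyone holding the corresponding public key can verify that this exact text came from whoever controls that private key, without ever learning who that is.

# Sketch of a digital signature: anyone holding the public key can verify
# that the pseudonymous holder of the private key signed this exact text.
# Uses the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"leaked memo, provided by an anonymous source"
signature = private_key.sign(document)

try:
    public_key.verify(signature, document)           # succeeds
    public_key.verify(signature, document + b"!")    # any alteration fails
except InvalidSignature:
    print("signature does not match this text")

Roughly speaking, a digital time-stamp works along the same lines, with a time-stamping service signing a digest of the document together with the current time.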
These applications can facilitate the ongoing protection of anonymous sources of information and whistleblowers, for example, or the otherwise difficult authentication of censored texts or politically dangerous reportage, or provide for the secure, ongoing pseudonymous disputation of experts on controversial subjects over online networks. They also make it possible to re-create the kind of anonymity that roughly prevails when one makes a purchase with cash -– a kind of anonymity that more or less evaporates once one makes the transition to purchasing by means primarily of conventional credit or debit cards or when shopping online or over the phone. Needless to say, regaining this kind of purchasing privacy isn’t only appealing for those who want to engage in illegal economic activity. It is easy enough to understand the desire for anonymity in making a mildly or even only potentially embarrassing purchase, for example, or to elude a torrent of unsolicited targeted advertising.
It remains mysterious just why the arrival of even these powerful new cryptographic applications would inspire in anybody the sense of an upcoming or impending transformation of society in the image of crypto anarchy, however. And it is interesting to note that according to Simon Singh, there is an alternative to the conventional history of the development of public-key cryptography itself which suggests lessons that may cut somewhat against the grain of the Cypherpunk imaginary. “Over the past twenty years,” writes Singh in The Code Book, “Diffie, Hellman, and Merkle have become world famous as the cryptographers who invented public-key cryptography, while [Ronald] Rivest, [Adi] Shamir, and [Leonard] Adleman have been credited with developing RSA [–- the acronym derives from their initials –-], the most beautiful [and, as it happens, influential] implementation of public-key cryptography.” Contrary to this canonical history, however, Singh points out that “[b]y 1975, James Ellis, Clifford Cocks, and Malcolm Williamson had discovered all the fundamental aspects of public-key cryptography, yet they had to remain silent [since their work was undertaken under the auspices of the British Government and was classified top-secret].”
Singh insists, exactly rightly, that “[a]lthough G[overnment] C[ommunications] H[ead]Q[uarters] were the first to discover public-key cryptography, this should not diminish the achievements of the academics who rediscovered it.” But his next point is the more provocative one, to my mind. “It was the academics who were the first to realize the potential of public-key encryption, and it was they who drove its implementation. Furthermore, it is quite possible that GCHQ would never have revealed their work” at all. For me, then, this appendix to the story of the invention of public-key encryption provides an example of how an open research culture grasped the significance of a discovery and implemented it incomparably more effectively than a closed and controlled, secretive culture managed to do -– even when the subject of that research and of its practical implementation was a matter of the technical facilitation of secrecy itself.
“For a long time,” writes James Boyle in his essay “Foucault in Cyberspace: Surveillance, Sovereignty, and Hard-Wired Censors,” “the internet’s enthusiasts… believed that it would be largely immune from state regulation.... forestalled by the technology of the medium, the geographical distribution of its users and the nature of its content.” Boyle characterizes “[t]his tripartite immunity” as “a kind of Internet Holy Trinity, faith in [which] was a condition of acceptance into the community [of enthusiasts].” Boyle proposes that these “beliefs about the state’s supposed inability to regulate the Internet” stand in an indicative relationship to “a set of political and legal assumptions that [he calls] the jurisprudence of digital libertarianism.” Certainly all three premises of this Internet Trinity exert their force over the imagination of Tim May and the version of digital libertarianism embodied in his crypto anarchy.
But when Boyle alludes to the “technology of the medium” here, he is not referring to the computer-facilitated cryptographic transformation of network communications, but to the more general and ubiquitous protocols and technologies that constitute the internet as such. It is useful to dwell on these more general assumptions about the internet, because they constitute the wider technical and cultural context from which May’s crypto-anarchic case emerges and hence their examination puts us in a better position to understand how some of the Cypherpunks’ otherwise rather improbably apocalyptic conclusions might acquire a compelling veneer of plausibility, especially for many of the Internet’s early partisans and participants.
“The Internet was originally designed to survive a nuclear war,” notes Boyle in “Foucault in Cyberspace.” “[I]ts distributed architecture and its technique of packet switching were built around the problem of getting messages delivered despite blockages, holes and malfunctions.”
“Imagine the poor censor faced with such a system,” he continues. “There is no central exchange to seize and hold; messages actively ‘seek out’ alternative routes so that even if one path is blocked another may open up.” All this amounts to a “civil libertarian's dream.” This is especially so because “[t]he Net offers obvious advantages to the countries, research communities, cultures and companies that use it, but it is extremely hard to control the amount and type of information available; access is like a tap that only has two settings –- ‘off’ and ‘full.’” He concludes the point: “For the Net's devotees, most of whom embrace some variety of libertarianism, the Net's structural resistance to censorship –- or any externally imposed selectivity –- is ‘not a bug but a feature.’”
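Boyle’s image of messages “seeking out” alternative routes can be illustrated with a toy sketch (mine, with made-up node names, and not any actual routing protocol): remove a node from a small graph and a breadth-first search still finds a path between the endpoints.

# Toy illustration of "routing around" a blockage: if one node is removed,
# a breadth-first search still finds another route from A to F.
from collections import deque

links = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "E"],
    "D": ["B", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

def route(src, dst, blocked=frozenset()):
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("A", "F"))                  # ['A', 'B', 'D', 'F']
print(route("A", "F", blocked={"D"}))   # detours: ['A', 'C', 'E', 'F']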
Although it is true that digital networks have managed to an important and appealing extent to flummox and resist efforts to regulate their content, it is also true, as Boyle himself is the first to insist, that this capacity for resistance is the consequence of particular decisions by coders, engineers, policy-makers, and many others, any number of which could have been made otherwise, with considerably different consequences, and which remain to this day more susceptible to alteration than the “civil libertarian’s dream” recited above would seem to credit. The specificity and ongoing contingency of these decisions belie any technological determinism that would destine or commend, as an “ought,” the architecture of a given social order from the “is” of just which tools happen to prevail for a time here or there.
The “Internet” is, in its somewhat enigmatic classical definition, a network of networks. And so, for example, writes Lawrence Lessig, “[t]he Internet is not the telephone network,” though certainly “it,” or at any rate some of it, “sometimes [runs] on the telephone lines.” Similarly, the internet is not a cable network, nor a wireless network, while it just as surely partakes of these. Different regulations and architectural protocols govern the many networks that the Network networks. The ongoing implementation of the internet is private, public, corporate, governmental, academic.
What people talk about when they talk about the “Internet” tends to consist of expectations they have formed for familiar machines, or of experiences such machines have facilitated for them.
And so, the internet is very likely not the “thing” you think it is. For one thing, whatever you think of it, the internet is, to be sure, already transforming into something else. Consider, for example, how breathtakingly different the experience of the internet would be as one moves from surfing the web from a desktop in 1993, to moblogging via cell in 2003, to immersion in ubicomp environments in 2013 (I am assuming that the omnipresence of these jazz-riffs of kooky jargon will continue to provide one of the few constants among the successive generations of network access and interactivity). What difference does it make that in one generation of its life the internet stood in a literally definitive relation to the Department of Defense, and in another generation more to Amazon.com? What difference does it make that in one generation of its life, a majority of those who access the internet are primary speakers of English, but that in another generation only a minority are? What difference will it make when more non-sentient appliances than human beings routinely access the internet (cars constantly reporting the state of their maintenance to their manufacturer, refrigerators reporting the state of their contents to the grocery store, factories reporting emissions to state regulators, etc.)?
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
VIII. Techniques of Secrecy
A whispered voice, a closed door, a sealed envelope –- these are all familiar, everyday techniques and technologies by means of which people routinely seek to maintain their secrets and preserve a measure of personal privacy.
Cryptography, the art of making ciphers and codes, provides an additional array of powerful techniques to accomplish the same purposes under different circumstances. Cryptographic techniques attempt to protect information from unwanted scrutiny by transforming or encrypting it into an otherwise unintelligible form called a cipher-text. This cipher-text ideally cannot be deciphered back into an intelligible plain-text without the use of a key to which only those who are the intended recipients of the information have access. The use of ever more powerful computers to facilitate the construction and application of encryption algorithms and keys has made the effort to discern the original plain-text from an encrypted cipher-text without recourse to its proper key incomparably more difficult than has been the case historically. This kind of code-breaking is called cryptanalysis. Cryptology is a more general term encompassing both cryptography and cryptanalysis.
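A toy example may help fix the vocabulary (it is mine, and deliberately primitive rather than any cipher the Cypherpunks used): combining a plain-text with a secret key yields an unintelligible cipher-text, and applying the same key again recovers the plain-text. Used with a truly random, message-length, single-use key this is a one-time pad; used with anything weaker it is precisely the sort of cipher cryptanalysis defeats.

# Toy cipher showing the basic vocabulary: plain-text + key -> cipher-text,
# cipher-text + key -> plain-text.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet me at the usual place"
key = secrets.token_bytes(len(plaintext))   # secret key, shared in advance

ciphertext = xor_bytes(plaintext, key)      # encryption
recovered  = xor_bytes(ciphertext, key)     # decryption

print(ciphertext.hex())   # unintelligible without the key
print(recovered)          # b'meet me at the usual place'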
There are two basic kinds of encryption scheme in contemporary cryptography, symmetric and asymmetric systems. In classic symmetric encryption or secret key cryptology, messages are enciphered and deciphered by recourse to a secret key available to all (but only) the relevant parties to a transaction. Such systems are called symmetrical simply because both the processes of scrambling text into cipher-text and descrambling cipher-text back to plain-text require access to exactly the same information. The obvious difficulty with such symmetric systems is their reliance on a secret key that cannot always itself be distributed with ease or comparable security. This dilemma constituted in fact one of the definitive quandaries of cryptography for centuries, but it was overcome in a series of breakthroughs in relatively recent history. The result is called asymmetric or public key cryptology.
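A minimal sketch of the symmetric case, assuming the third-party Python cryptography package (my choice of illustration, not a tool named in these texts): encryption and decryption both take the identical key, which is exactly why that key must somehow reach both parties securely in the first place.

# Symmetric (secret-key) encryption: both ends use the identical key, so the
# key itself must somehow be shared securely beforehand -- the classic
# distribution problem. Uses the "cryptography" package's Fernet recipe.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # must reach both parties secretly

ciphertext = Fernet(shared_key).encrypt(b"attack at dawn")
plaintext = Fernet(shared_key).decrypt(ciphertext)

print(plaintext)   # b'attack at dawn'
# Without shared_key, the cipher-text is (practically) indecipherable.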
Public key encryption, as we know it, was devised in 1976 by Whitfield Diffie (of whom Simon Singh writes: “In hindsight, he was the first cypherpunk”), Martin Hellman, and Ralph Merkle. Asymmetric encryption schemes require not one but two keys, a public or published key available to everyone as well as a secret key known, as usual, only by deliberately chosen individuals, and often known only by a single person and never revealed to anyone else at all. These two coded keys stand in a unique mathematical relation to one another such that, once a text has been scrambled into cipher-text by means of the public key, it can be subsequently descrambled from cipher-text back into intelligible plain-text only by means of the private or secret key associated with it. With this breakthrough the dilemma of insecure key distribution was solved, and it became possible even for parties whose identities are secrets kept from one another to communicate and conduct transactions with one another in a way that is likewise perfectly secret to everyone but themselves.
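A corresponding sketch of the asymmetric case, again assuming the third-party Python cryptography package and using RSA with OAEP padding as one concrete instance: anyone may encrypt with the published key, while only the holder of the matching private key can decrypt, so no prior shared secret is needed.

# Asymmetric (public-key) encryption: anyone may encrypt with the published
# key, but only the holder of the matching private key can decrypt.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()    # this half can be published freely

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"no prior shared secret needed", oaep)
print(private_key.decrypt(ciphertext, oaep))   # only the private key works

In practice such public-key operations are typically used to exchange or wrap a symmetric session key rather than to encrypt long messages directly, but the asymmetry of the two keys is the whole of the conceptual breakthrough.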
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
VII. Afterward
Recall that in Dorothy Denning’s piece “The Future of Cryptography,” she indicated that, contrary to Tim May and the Cypherpunks, crypto-anarchy was neither inevitable nor widely affirmed as desirable.
It is interesting to note that Denning denied the inevitability of crypto-anarchy largely as a part of her advocacy of “key escrow,” one of the technical schemes for maintaining official “backdoors” through which governments could decipher otherwise encrypted communications in the course of legitimate law enforcement efforts. The Cypherpunks (among others) relentlessly and, as it happens, correctly and quite successfully denigrated key escrow as an unworkable scheme, one that would be disastrous from both a security and a civil liberties standpoint.
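Schematically, the escrow idea amounts to depositing the means of decryption with one or more third parties in advance. The following toy sketch (mine, and nothing like the actual Clipper/EES design) splits a session key into two XOR shares held by separate escrow agents; only by recombining both shares can anyone reconstruct the key.

# Toy illustration of the escrow idea only: a session key is split into two
# XOR shares held by separate escrow agents; recombining both shares
# reconstructs the key.
import secrets

def split_key(key: bytes):
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b          # deposited with two escrow agents

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

session_key = secrets.token_bytes(16)
share_a, share_b = split_key(session_key)

assert recombine(share_a, share_b) == session_key
# Neither share alone reveals anything about the key; both agents must
# cooperate (say, under a court order) before anyone else can decrypt.

Even in this toy form the scheme’s fragility is visible: whoever compromises both escrow agents compromises every escrowed key.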
Although she did not make this argument herself, it is ultimately because she was right to deny that crypto-anarchy was widely affirmed as desirable that she was likewise right to deny it was inevitable. In the piece itself, however, Denning seems to worry most palpably that, despite her misplaced faith in key escrow, Tim May might be right after all to expect the momentum of technological development itself to eventuate in a rough-and-tumble crypto-anarchic near-lawlessness, one he might desire himself but which she and most others would deplore.
Like Shirky, Denning seems to credit the possibility that encryption technologies might impel social architectures in directions that would yield outcomes few would desire outright. In Shirky’s piece, an extraordinary amount of the weight of this premise is borne by the modest assumption that “once a user starts encrypting messages and files, it’s often easier to encrypt everything than to pick and choose.” But I submit that however convenient it might seem to routinely encrypt and thereby control the scope of all file-sharing practices, this would likely produce unacceptable sequestration effects in the distribution and circulation of content on the internet, the whole point of much of which is to be as widely and freely available as possible. To value openness is to value serendipitous effects of collaboration that cannot be either perfectly predicted or even modestly assured, and to accept a real measure of uncertainty, vulnerability, and expense in exchange for promises that cannot be fully characterized and may not be fulfilled.
In an “Afterword” appended to the 2001 re-publication of “The Future of Cryptography,” Denning says of her original article that it “is overly alarmist.” She re-assesses, with clear relief and perhaps some small measure of smug satisfaction, the social significance of encryption in general and of May’s crypto anarchist vision in particular, saying that “[w]hereas encryption has posed significant problems for law enforcement, even derailing some investigations, the situation in no way resembles anarchy. In most of the cases with which I am familiar, law-enforcement succeeded in obtaining the evidence they needed for conviction. The situation does not call for domestic controls on cryptography, and I do not advocate their enactment [any longer].”
I regret that Denning’s re-assessment here was not an occasion as well for her to revisit at any length the earlier moment in which these technologies inspired such dread and desire in her and in so many others who contemplated their developmental trajectories into the near future in which we now live. Denning’s re-assessment in her “Afterword” feels rather like a dismissal of the concerns that once so forcefully exercised her imagination. So, too, Shirky’s suggestions about routine encryption utilities humming in the background of everyday online architectures, however interesting and useful they may be, really amount to a quite profound domestication of the Cypherpunks’ vision, to the extent that he would propose as a “success” for them anything short of the arrival by way of these utilities of the crypto anarchy they desired and expected to come to pass.
Quite a few questions remain for me. Why did the rather implausibly drastic social transformations the Cypherpunks incorrectly anticipated as looming inevitabilities especially compel the attention of so many people who otherwise consummately understood the technical and scientific dimensions of the technologies with which the market-libertarian crypto-anarchists concerned themselves? What models of agency and dignity are expressed and consoled in the delineation of these fantastic expectations and idealizations? How does the manifest Cypherpunk preference for technical over political interventions in the service of the achievement of desired social ends connect to other currents in the culture and politics of the so-called information age? If it is true that the crypto anarchists’ moment has come and gone, can we say for sure that it is gone for good? And where on earth did this peculiar sensibility come from?
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
VI. P2P, Not Anarchy
In an interesting recent essay, portentously entitled “The RIAA [Recording Industry Association of America] Succeeds Where the Cypherpunks Failed,” Clay Shirky has suggested that popular but illegal file-sharing practices may breathe new life into the Cypherpunks’ conspicuously stalled revolution.
“Even after the death of [the] Clipper [Chip] and [Philip Zimmermann’s] launch of PGP [the freely-available “Pretty Good Privacy” public-key encryption scheme about which I will say more in the next section], the Government discovered that for the most part, users didn’t want to encrypt their communications,” Shirky writes. He goes on to point out that “[t]hough business users encrypt sensitive data to hide it from one another, the use of encryption to hide private communications from the Government has been limited mainly to techno-libertarians and a small criminal class.”
Shirky draws from these facts a couple of intriguing conclusions. First, he asserts that “[t]he most effective barrier to the spread of encryption has turned out to be not control but apathy.” And, then, second, he proposes that the reason the average user is apathetic about encryption is because “the average user has little to hide, and so hides little.”
There are a number of important, if obvious, objections to these formulations. I would object, for one thing, to the apparent insinuation that routine encryption would function primarily to hide wrong-doing (the expression “I have nothing to hide” is typically taken as synonymous with “I have done nothing wrong”), when clearly encrypted transactions are just as likely to protect people from innocent embarrassments as from criminal prosecutions. And I would also object that it remains an open question just how effective legal and technical controls on communications might turn out to be, whatever the historical record has been on this question up to now. Just because internet users have always (or usually) managed so far to circumvent attempts to limit privacy and free expression online, mostly through technical means, shouldn’t inspire overconfidence that this will always be the case.
But my deeper objection to Shirky’s assertion is that I think it is wrong to assume that individuals are simply apathetic about the prospect of facilitating secrecy through routine encryption. When Shirky goes on to assert that “[t]he Cypherpunk fantasy of a culture that routinely hides both legal and illegal activities from the state has been defeated by a giant distributed veto,” I think the word “veto” here aptly denotes a more active general repudiation of the Cypherpunk vision than his earlier suggestions about apathy imply.
I think that there is in fact a widespread and constantly renewed affirmation of the value of a certain level of personal exposure over secrecy in online communications (and in public life more generally), and that it is this embrace of “openness” that constitutes the more abiding hurdle to the kind of exhaustive adoption of encryption on which advocates for crypto-anarchy would have to rely for the implementation of their vision of social transformation. There is no question that individuals have consistently resisted the imposition of enforced regimes of openness on terms that conspicuously favor the specific interests of officials and functionaries of governments or, for that matter, corporations over their own. Also, the widespread use of online pseudonyms is an ongoing testament to the value of obscurity in the expression of unpopular or untested ideas.
But what seems to me by far more conspicuous and definitive of the internet, such as it is, is scarcely its facilitation of anonymous or pseudonymous transactions, however crucial that function may be. It is instead the dizzying wealth of signed, self-published, free content available online, all of it vying for attention, all of it soliciting annotation, appropriation, citation, and dissent.
It is this apparently inexhaustible wealth of creative content that continually re-invigorates the internet and the web, and it testifies to the fact that whatever the vulnerability and costs of exposure, these are mostly imagined to be amply repaid by the public pleasures of attention, connection, rivalry, collaboration, discourse, and exchange. It is easy to misinterpret the significance of the many vicissitudes in the ongoing technical and legal development of the internet whenever we forget that both secrecy and openness are conspicuous values in an overwhelming variety of currently constituted online cultures. And it is easy in turn to disavow the significance of openness, in particular, whenever we forget that openness ineradicably implies some level of exposure, some acceptance of vulnerability, some loss of control.
“Napster” is the name of a file-sharing service that allowed its users to log onto servers through which they exchanged .mp3 files stored on the computers of other users so long as they remained logged onto the system themselves. At the height of its popularity many millions of users were logging onto Napster and trading files in this way. The RIAA sued Napster on behalf of its members, because in its view the service facilitated copyright infringement and piracy on an unprecedented scale. Although the RIAA succeeded in its efforts to shut down Napster for a time, individuals who had been participating in file-sharing practices there for the most part simply shifted from Napster to one of the many similar systems then available. Because of this, the RIAA has subsequently also shifted its attention, and has started to sue individual file-sharers directly. This, in turn, has motivated many file-sharers to adopt file-sharing applications that use encryption to ensure their anonymity and insulate them from prosecution, precisely as the Cypherpunks long anticipated.
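For readers unfamiliar with the architecture just described, the following purely hypothetical Python sketch, with invented names and data structures of my own, illustrates the general shape of a Napster-style central index rather than Napster’s actual protocol: the server records only which logged-on peer claims to hold which file, while the files themselves travel directly between the users’ machines.

    # Hypothetical sketch of a Napster-style central index (names invented).
    # The index maps file names to the peers currently offering them; entries
    # disappear when a peer logs off, just as shared files vanished from
    # Napster's listings when their holders disconnected.
    from collections import defaultdict

    class CentralIndex:
        def __init__(self):
            self.holders = defaultdict(set)      # file name -> set of peer addresses

        def log_on(self, peer, shared_files):
            for name in shared_files:
                self.holders[name].add(peer)

        def log_off(self, peer):
            for peers in self.holders.values():
                peers.discard(peer)

        def search(self, name):
            return sorted(self.holders.get(name, set()))

    index = CentralIndex()
    index.log_on("192.0.2.10:6699", ["song.mp3"])
    index.log_on("192.0.2.11:6699", ["song.mp3", "other.mp3"])
    print(index.search("song.mp3"))              # both peers, while both stay logged on
    index.log_off("192.0.2.10:6699")
    print(index.search("song.mp3"))              # only the remaining peer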
But is there anything in this state of affairs that suggests the arrival of Tim May’s “crypto-anarchy” may be more imminent? Are we to be haunted once more by its more proximate specter? Surely not. As Shirky puts the point himself, “the broadening adoption of encryption is not because users have become libertarians, but because they have become criminals; to a first approximation, every PC owner under the age of 35 is now a felon.” He goes on to illustrate the point with the obvious analogy to Prohibition. “By making it unconstitutional for an adult to have a drink in their own home, Prohibition created a cat and mouse game between law enforcement and millions of citizens engaged in an activity that was illegal but popular.”
Of course, many individuals who participate in strictly illegal but nonetheless widely popular versions of file-sharing practices do so with the conviction that these practices should not be illegal at all and with the reasonable expectation that eventually (as was likewise the case with the adult consumption of alcohol during its brief Prohibition) they will become legal.
There is little reason to think that the RIAA’s own interested interpretation of file-sharing practices by recourse to the metaphor of the theft of a CD from a music store, say, will necessarily prevail over the metaphor of music enthusiasts gathering together in a public space to listen to music together. Nor is there much reason, frankly, to accept hyperbolic claims that file-sharing practices will destroy the recording industry altogether. It seems more likely that the industry would accommodate these practices as a kind of promotion or advertising of its product over the longer term. And, of course, there is even less reason to believe that the destruction of the recording industry altogether would have much of an impact in any case on the continued creation, circulation, and enjoyment of music, which, after all, did not come into existence as a consequence of the efforts of either the recording industry or the RIAA in the first place.
Shirky concludes that the most significant longer-term consequence of the music industry’s quixotic effort to police file-sharing practices is precisely “the long-predicted and oft-delayed spread of encryption” to which he appends as a concluding thesis a re-statement of his essay’s title: “The RIAA is succeeding where the Cypherpunks failed.”
But “success,” remember, for the Cypherpunks requires nothing short of the arrival of crypto-anarchy, and the routine recourse to encryption techniques to facilitate currently illegal file-sharing practices looks to me no more like a step toward the arrival of crypto-anarchy than does the similarly routine recourse to encryption to facilitate the electronic funds transfers that make online commerce possible, regularly deposit paychecks and tax refunds into savings accounts, or pay bills via automatic payment services.
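A small illustration of how routinely that background encryption already runs beneath ordinary commerce: the standard-library Python sketch below (the host name is only a placeholder) opens the same kind of TLS connection that protects web purchases and banking sessions and reports the protocol and cipher it negotiated.

    # Everyday, background encryption: the TLS machinery beneath online
    # commerce. Standard-library Python only; "example.com" is a placeholder.
    import socket
    import ssl

    context = ssl.create_default_context()       # verifies the server certificate
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())                 # e.g. "TLSv1.3"
            print(tls.cipher())                  # the negotiated cipher suite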
Neither does it seem right to suggest that crypto-anarchy is a state into which we might be ineluctably drifting, as perhaps Shirky means to imply when he writes “[b]ecause encryption is becoming something that must run in the background, there is now an incentive to make its adoption as easy and transparent to the user as possible.” And so, in consequence of this, “[i]t’s too early to say how widely casual encryption use will spread.”
But just what projected state of affairs is supposed to be conjured up by the use of the word “widely” here? Clearly, it is not the least bit too early to predict that routine recourse to encryption can become well-nigh ubiquitous; indeed, in many contexts it already is. But certainly neither is there any reason to think the value of technologically facilitated secrecy will ever sufficiently trump the value of technologically facilitated openness in a way that would provoke a transformation of society in the image of crypto-anarchy and, hence, allow the Cypherpunks at long last their longed-for “success” and secession.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
MV. What Is Manifest
In “True Nyms and Crypto Anarchy,” May’s contribution to a collection of essays paying tribute to Vinge on the occasion of the republication of the novella True Names, he is still making his familiar case, if in slightly different terms, but now well over a decade after offering up his manifesto’s initial assurances of impending revolution: “The combination of strong, unbreakable public key cryptography and virtual network communities in cyberspace will produce profound changes in the nature of economic and social systems.” That claim seems genial enough, after all, until he clarifies these “profound changes” more particularly in the next sentence as a matter again of the technological implementation of “[c]rypto anarchy… the cyberspatial realization of anarcho-capitalism, transcending national boundaries and freeing individuals to consensually make the economic arrangements they wish to make.”
From the technological facilitation of secrecy, then, will inevitably arise a comprehensive privatization of the terms and institutions of public life.
Through just what steps, on the basis of just what assumptions does May keep arriving at such a radical conclusion? How is it that rather oracular proposals such as that “[a] phase change is coming,” are imagined to follow logically as conclusions from premises such as that “[s]trong crypto [May’s term for readily available, theoretically unbreakable computer-facilitated encryption techniques] provides a technological means of ensuring the practical freedom to read and write what one wishes to”?
Is it really the assumption here, then, that it is just the fact that not everybody everywhere is quite so free as they might otherwise be about what they read and write that stands as the solitary obstacle in the way of the otherwise inevitable and wished-for arrival of the “phase change” presumably embodied in the “realization of anarcho-capitalism”? Would a more perfect technological facilitation of secrecy really be enough to smash the state and suffuse societies with “unobstructed” commercial activity in the way May seems to pine for? Or has May simply consistently mistaken for logical inevitability what amounts to the intensity of the lure for him of some personally compelling, possibly idiosyncratic, desire of his?
“The State” –- which May rather quaintly seems to regard as a monolithic and uniformly dastardly entity rather than as any kind of densely multilateral co-ordinated congeries of both competing and co-operating socio-cultural institutions and state apparatuses –- “will of course try to slow or halt the spread of this technology, citing national security concerns, use of the technology by drug dealers and tax evaders, and fears of societal disintegration.”
Curiously, May endorses the legitimacy of these concerns and provides scant comfort to those who would find them worrying. Instead, he quite enthusiastically elaborates: “Many of these concerns will be valid; crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded. An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion. Various criminal and foreign elements will be active users of CryptoNet.” May offers no reassurances and proposes no remedies for these concerns, but simply recurs to his triumphalism: “But this will not halt the spread of crypto anarchy.”
One of the most visible critics of the cypherpunk vision, Dorothy Denning, seemed for a time to be convinced by May’s account that the developmental momentum aroused by the emergence of especially public key encryption technologies threatened the institutions of legitimate governance in a way that might indeed be well nigh irresistible.
But for her the consequences of such a state of affairs were the furthest imaginable thing from desirable. “Proponents argue that crypto anarchy is the inevitable -– and highly desirable –- outcome of the release of public key cryptography into the world,” wrote Denning, sounding the alarm in an early essay. “Although May limply asserts that anarchy does not mean lawlessness and social disorder, the absence of government would lead to exactly these states of chaos. I do not want to live in an anarchistic society –- if one could be called a society at all -– and I doubt many would.” And so, while May himself may have drawn from a selective survey of emerging and projected technical developments the heartening if rash impression that the world around him was crystallizing inevitably and quickly into the image of his heart’s desire, Denning’s own reading proposed its appalled antithesis: “[T]he crypto anarchists’ claims come close to asserting that the technology will take us to an outcome that most of us would not choose.”
In a section of his essay “Crypto Anarchy and Virtual Communities” entitled “Implications,” May concedes that “[m]any thoughtful people are worried about some possibilities made apparent by strong crypto and anonymous communication systems.” It is difficult to imagine that Denning or many of his other critics would draw much in the way of comfort from the discussion that follows this admission. “Abhorrent markets may arise,” May admits. “For example, anonymous systems and untraceable digital cash have some obvious implications for the arrangement of contract killings and such.” (The insouciance of that “and such” inspires particular comfort, I must say.) As May continues on he seems almost to wax enthusiastic in delineating possible dangers: “Think of anonymous escrow services which hold the digital money until the deed is done. Lots of issues here.… [L]iquid markets in information… may make [corporate and national] secrets much harder to keep…. New money-laundering approaches are another area to explore.”
In the essay’s conclusion he takes up the theme again, no more reassuringly this time around. “Crypto anarchy has some messy aspects, of this there can be little doubt,” he jocularly intones. “From” what he has for some reason decided would amount to “relatively unimportant things like price fixing and insider trading to” what he has curiously decided are “more serious things like economic espionage, the undermining of corporate knowledge ownership, to” what we can all agree are “extremely dark things like anonymous markets for killings.”
Back in the “Implications” section of the piece, May follows his laundry list of worrisome criminal activities that might be facilitated by robust encryption techniques with a new paragraph beginning with the sentence: “The implications for personal liberty are of course profound.” The reader is possibly flabbergasted to realize that May does not mean this statement to summarize the obvious worry that all of the dangerous developments that precede it would seem to threaten personal liberty in a profound way, but to suggest instead that despite these threats encryption will nevertheless strengthen personal liberty just because, as he goes on to say next, “[n]o longer can nation-states tell their citizen-units [?] what they can have access to, not if these citizens can access the cyberspace world through anonymous systems.”
Later in his conclusion, May offers up this brief and unexpected argumentative digression: “Let us not forget that nation-states have, under the guise of protecting us from others, killed more than 100 million people in this century alone. Mao, Stalin, Hitler, and Pol Pot, just to name the most extreme examples. It is hard to imagine any level of digital contract killings ever coming close to nationalistic barbarism.” One wonders if it really is, after all, quite so hard to imagine globe-girding networks of criminal gangs managing to use technology to co-ordinate and accomplish acts of devastation comparable to those of the criminal states of the twentieth century. But be that as it may, it is certainly curious to say the least to see totalitarian brutality offered up as an argument for anarchic lawlessness! Part of what is surprising in this argumentative move is that it treats tyrannical and totalitarian violences as straightforward if “extreme” expressions of some more normal or even definitive governmental violence, rather than, as is more customary in accounts of these phenomena, a pathological violation of legitimate governance. May even seems to suggest that the more conventional expectation that good governments will properly protect their citizens from harms is in fact nothing but a pretense (“the guise of protecting us from others”), masking this deeper, more constitutive expression of violence.
It is unfortunate that May did not go on to develop this line of his argument at greater length or in any kind of systematic way. Suffice it to say, without more to go on, it would seem on the face of it to be easily possible at once to sympathize with May’s frustrations with the forms of nationalism, militarism, and totalitarianism he notes without likewise sympathizing with his advocacy of crypto-anarchy. And given his advocacy elsewhere of the technological facilitation of secrecy in particular to protect individual liberties, it would be interesting to hear more about the relation of his viewpoint to the many accounts of modern militarism for which it is in fact their secrecy that best characterizes militarist cultures, and in which it is the rise and consolidation of institutional secrecy (state secrets, secret treaties, secret ops, classified documents, black budgets, censorship, cover-ups, and the rest) that enables and abets militarism in its worst abuses. Further, given his advocacy elsewhere of technical interventions over political organizing or the democratic recourse to representative governance to achieve a more congenial social order, it would be interesting to hear more about the relation of his viewpoint to the many accounts of especially totalitarian violence that highlight the special role of modern technology in these unprecedented eruptions, systemizations, and administrations of violence.
Near the close of the essay May offers up an unexpectedly hesitant response to his own rhetorical question whether crypto anarchy, despite the worries and threats it would seem to entail, is ultimately a “Good Thing” or not. His answer: a stirring “Mostly yes.” And a few sentences later comes a stunning parenthetic admission that shows the extent to which his rhetoric here relies on the constant complementary recourse to arguments from both inevitability and desire: “I don’t think we have much of a choice in embracing crypto anarchy or not,” he insists, “so I choose to focus on the bright side.” Now, given all that has come before it, of just what is this bright side supposed finally to consist?
Returning to the “Implications” section of the essay I am intrigued to note a paragraph in which May invokes inevitability yet again, but this time to make a somewhat different sort of case than has been usual for him elsewhere. “Something that is inevitable [under crypto anarchy] is the increased role of individuals, leading to a new kind of elitism,” he proposes. When he goes on to characterize this elite as “[t]hose who are comfortable with the tools described here [and who] can [hence] avoid the restrictions and taxes that others cannot,” it becomes difficult to avoid the suspicion that at least part of the appeal of crypto-anarchy for May is a matter of simple straightforward opportunism. He adds, in an oddly insinuating tone, that “[i]f local laws can be bypassed technologically, the implications are pretty clear.” As it happens, it seems to me that the implications of such a state of affairs are not the least bit clear, until we know first whether such a technologically empowered person is in fact law-abiding or not. Once again, the claim of inevitability would seem to function for May more forcefully as an expression of desire than as a reliable prediction of actual outcomes.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
MIV. Manifesto
Given his commitment to the idealized “capitalism” of market libertarianism, it is intriguing to note that Tim May’s most influential article, “The Crypto Anarchist Manifesto,” both begins and ends by genuflecting in the direction of another Manifesto, written in a considerably different discursive key. “A specter is haunting the modern world,” May’s “Manifesto” proposes in its opening words, “the specter of crypto anarchy.” And repeatedly in this and other texts, May assures his readers through comparable formulations that the extraordinary outcomes he sketches and predicts are not only desirable (to him) but are freighted with inevitability. “Technology has let the genie out of the bottle,” he proposes in an image that characteristically collapses confident prediction with heady wish-fulfillment. It is a potent conjunction of themes that he will offer up in countless variations throughout his writings.
“Computer technology,” May suggests, “is on the verge of providing the ability for individuals and groups to communicate and interact with each other in a totally anonymous manner.” He continues: “Two persons may exchange messages, conduct business, and negotiate electronic contracts without ever knowing the True Name [a direct reference to the novella by Vinge, which he simply assumes his readership will recognize], or legal identity, of the other.” The widespread adoption of encryption techniques will either ineluctably prompt or will itself actually already constitute (just which is not entirely clear) nothing short of a “revolution,” he insists. By this term he means of course anything but the revolutionary event conjured up by the Marxian specter he subversively cites. May’s revolution emerges not through active political organization or social struggle or from the exposure of ideological mystifications, but would seem to arise for him instead more or less spontaneously as a straightforward entailment of the development and then use of new kinds of tools.
“The technology for this revolution -– and it surely will be both a social and economic revolution,” he writes, is “based upon public-key encryption… and various software protocols for interaction, authentication, and verification.” I will expand on the technical details to which he refers in the next section, but for now I want to concentrate on May’s reliance here on the evocation of the sheer force of developmental momentum to render plausible his suggestion that we are “haunted” so palpably by the approach of what would otherwise surely seem an utterly implausible near-term near-total social transformation. “[O]nly recently have computer networks and personal computers attained sufficient speed to make the[se] ideas practically realizable. And the next ten years will bring enough additional speed to make the ideas economically feasible and essentially unstoppable.”
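Since those technical details are deferred to the next section, a minimal sketch may still help fix what protocols for “authentication and verification” amount to in practice: a digital signature anyone can check against a public key. The Python below relies on the third-party cryptography library and an Ed25519 key purely as illustrative assumptions of my own; May’s texts do not specify any particular algorithm.

    # A minimal sketch of signing and verification (assumes Python with the
    # third-party "cryptography" package). The signer keeps the private key;
    # anyone holding the public key can confirm that the message is authentic
    # and unaltered without learning the signer's legal identity.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()

    message = b"I agree to the stated terms."
    signature = signing_key.sign(message)

    try:
        verify_key.verify(signature, message)    # silent on success
        print("signature verified")
    except InvalidSignature:
        print("signature rejected")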
Needless to say, it is not true that everything that is economically feasible is thereby rendered an unstoppable destiny. Between feasibility and inevitability in May’s formulation there must reside at least one assumption that has not been articulated explicitly and which possibly has not yet been adequately interrogated. For example, perhaps May believes that once his crypto-anarchy has been rendered practically realizable as well as economically feasible it is sure to be unstoppable because such an outcome is, whatever appearances to the contrary, so very widely desired. Or perhaps he assumes that such an outcome is likely to be deemed desirable by people in a unique position to implement it. Or perhaps he expects this radical outcome to arise as a secondary consequence of other more routinely desired outcomes. Reading May’s statement now, well over ten years after its initial publication and wide circulation, there is no question that the looming locomotive whose gathering momentum had once so impressed his attention has subsequently veered somewhat off its rails somewhere.
May has suggested that his skewed citations of Marx in his own Manifesto were a bit of “whimsy” on his part. But his choices now seem to have been freighted with, to say the least, a certain fatality. Jacques Derrida has mischievously remarked in his Specters of Marx, that the “haunting” of Europe by looming Revolution registers a more distended than appalled “anticipation,” one that was and remains “at once impatient, anxious, and fascinated. It won’t be long,” he writes. “But how long it is taking.” The “specter” of crypto-anarchy, like that of revolution, still throws off sparks and portents for partisans, but the grip of its fascination has gone a touch languorous these days.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
Thursday, April 21, 2005
MIII. “California Ideology” Among the First Generation
In an influential essay that first appeared online in 1995, Richard Barbrook and Andy Cameron likewise write of the emergence of a curious “heterogeneous orthodoxy for the coming information age [called] the California Ideology.” They characterize this new orthodoxy as “a bizarre fusion of the cultural bohemianism of San Francisco with the high-tech industries of Silicon Valley… promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies.” More specifically, the California Ideology weds “a profound faith in the emancipatory potential of the new information technologies,” with a “passionate advoca[cy] of what appears to be an impeccably libertarian form of politics.”
Alongside what Cameron and Barbrook charitably characterize as an appealing advocacy of individual liberty, the libertarian “technoboosters” who preoccupy the attention of “The California Ideology” (the Cypherpunks certainly among them) are “reproducing some of the most atavistic features of American society, especially those derived from the bitter legacy of slavery. Their utopian vision of California depends on a willful blindness toward the other, much less positive features of life on the West Coast – racism, poverty, and environmental degradation.”
But quite apart from the familiar disavowal by “rugged individualists” of their broad interdependence with others as at least co-incident collaborators in sociality, not to mention their disavowal of more particular dependencies on the efforts, accomplishments, or ongoing exploitation of others, there seems to be an inexplicable ignorance of institutional history in play here as well. “For the first twenty years of its existence, the Net’s development was almost completely dependent on the much reviled American federal government. Whether via the U.S. military or through the universities, large amounts of taxpayers’ dollars went into building the Net infrastructure, and subsidizing the cost of using its services.”
Barbrook and Cameron then turn their focus more specifically onto the actual California of their “California Ideologues”: “On top of these public subsidies, the West Coast high-tech industrial complex has been feasting off the fattest pork-barrel in history for decades…. For those not blinded by free-market dogmas, it [is] obvious that Americans have always had state planning: they call it the defense budget.”
Paulina Borsook expresses the same exasperation at the rather facile and yet unexpectedly widespread disavowals that sometimes seem to underwrite the anti-governmental hostility of techno-libertarian sensibilities, enraptured by the prospect of encryption techniques that might circumvent despised regulations and even, many would appear to devoutly wish, smash the state altogether. The techno-libertarian, she notes with astonishment, “simply ignores... ongoing government funding for work-study jobs and for land grant universities. Indirect government subsidy (defense electronics contracts) created and nurtured the microelectronics industry and its companion infrastructure (middle-class home-mortgage guarantees and deductions for its laborers). Federal and state institutions provide an operable legal system… which ensures that the courts can remedy disputes over intellectual property squabbles and corporate espionage.”
As she bluntly puts the point: “Where would you want to do business in the year 2000?” And as she goes on to amplify the terms of her own rhetorical question she offers up as its alternatives, on the one hand, the instability of Russia at the close of the twentieth century taken (possibly somewhat hyperbolically) as an expression of libertarian theory in practice –- precisely as did Lawrence Lessig in his own framing of related questions in his book Code -– to which she then counterposes, on the other hand, again, the California of Barbrook’s and Cameron’s own puzzled fascination: “In Russia, where [presumably] there’s no regulation, no central government, no rule of law; or in Northern California, where the roads are mostly well paved and well patrolled and trucks and airplanes are safer than not, where the power grid is usually intact and the banking system mostly fraud-free and mostly works… where people mostly don’t have to pay protection money, and the majority of law enforcement personnel are not terribly corrupt or brutal?”
Although her point here is surely well taken and the bad faith of market-ideologues well worthy of the strongest exposure and critique, it is difficult to shake the sense that in staging her confrontation between “Russia” and “California” of all places, Borsook is mobilizing as exemplary figures here landscapes that are especially fraught with dread and desire in the conventional American political imaginary. How much, then, can these figures finally illuminate, rather than simply reverse while still re-invoking, the terms in play when crypto-anarchists conjure up their own comparably fantastic countervailing thundercloud terrains of dread and desire, terrains with names like the “State” and the “Net”?
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents
II. Taking the First Generation Seriously
Lawrence Lessig published his book Code in 1999, expressing there both skepticism about the faith of technophilic market enthusiasts that cyberspatial transactions would be immune to government regulation, and perplexity at their monomaniacal attention to government regulation in particular as the primary or even sole source of the kinds of interference that might pose trouble for them.
By the time he published his second book, The Future of Ideas, just two years later in 2001, the "irrational exuberance" of the so-called new digital economy that had underwritten so much of the triumphalism of market libertarian theory had largely evaporated (for the moment). Corporations claiming special vulnerability to digital piracy had managed to extend the copyright protections they enjoyed so far that they were threatening longstanding traditions of fair use and creative expression. Meanwhile, the technical demands of broadband access threatened the end-to-end principle that had defined the architectures and much of the ethos of online networking. All the while, deliriously proliferating viruses and unsolicited torrents of spam were measling over the once pristine prairie of the electronic frontier with brothels and billboards.
But needless to say, things appeared quite different just a decade ago. In an address delivered in 1996, Dorothy Denning introduced her audience to “the phrase crypto anarchy,” which Cypherpunks like Tim May had “coined to suggest the impending arrival of a Brave New World in which governments, as we know them, have crumbled, disappeared, and been replaced by virtual communities of individuals doing as they wish without interference.” Specifically, she worried, it was the development of powerful and ubiquitous encryption technologies, and especially a technique called public key encryption, that would be the trigger for this impending arrival of awful lawlessness, hence the name crypto anarchy.
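The technical asymmetry underwriting this worry is simple to state: in a public key system anyone may encrypt a message to a published public key, but only the holder of the matching private key can decrypt it, so no shared secret ever travels alongside the message for an eavesdropper to capture. What follows is a minimal sketch only, written in Python against the present-day cryptography package (the tool and the particular parameters are illustrative choices of my own, not anything the Cypherpunks themselves used, and assume a reasonably recent version of that package):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The private key never leaves its owner; only the public key is published.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Standard OAEP padding; the specific parameters here are illustrative choices.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can produce the ciphertext...
ciphertext = public_key.encrypt(b"a message meant for one reader only", oaep)

# ...but only the holder of the private key recovers the plaintext.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"a message meant for one reader only"

Nothing that passes over the wire suffices to reconstruct the message without the private key, and it is exactly this property, multiplied across millions of ordinary users, that Denning feared would place whole swathes of communication beyond lawful interception.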
In the next section I will explore at greater length some of the assumptions brought to light in Denning’s pithy early formulations: namely, what it can mean to propose that such a radical transformation of society might be all at once inevitable, impending, and implementable simply by means of the adoption of new technologies, and whether or not many would actually find such an outcome desirable in the first place. Later still, I will explore further the key connection Denning highlights in suggesting that for the Cypherpunks the technological facilitation of secrecy would seem especially and uniquely to bolster a particular conception of agency, figured here as a matter of the discretionary, of “individuals doing what they wish.” For now, I will just note that in this formulation it is “interference” in particular (as opposed to, say, incapacity, distraction, or any number of other things) that is faulted for the frustration of this longed-for agency, and that the source of that interference is described primarily or even exclusively as, simply, governments.
To an important extent, this anti-governmental ire was simply provoked by the fact that the same government that had subsidized the creation and maintenance of the internet throughout its short life had otherwise apparently largely failed to notice the extraordinarily diverse, stunningly inventive, and exponentially growing community of its active users –- until a handful of security agencies and elected representatives in the early 1990s seemed quite suddenly to perceive this community and many of its activities as a profound and looming threat. Concerns about the proliferation of pornography online, about hateful and otherwise “offensive” speech there, about the unlawful digital reproduction and circulation of copyrighted materials, and about the impact of encryption techniques on law enforcement inspired a series of legal and technical initiatives that seemed both capricious and clumsy to much of the online community that had coalesced in an era of benign governmental neglect. Representative and particularly reviled among these initiatives were the Communications Assistance for Law Enforcement Act (CALEA), which passed in 1994, and the Communications Decency Act of 1996, a diminished version of which returned in 1998 as the Child Online Protection Act. CALEA, together with the so-called “Clipper Chip” initiative, sought to ensure that governments would retain the capacity to eavesdrop on and otherwise access online communications, while the Communications Decency Act proposed to impose stringent and censorious standards that would limit online expression in unprecedented ways.
As David Brin wrote in 1997, in a piece that was already elegiac in tone despite the fact that the “Crypto Wars” were then still well underway, these Acts and initiatives “faded from our agenda as courts overruled parts of [them], other portions were superseded in legislation, and large fractions proved impotent or unenforceable in the face of ever-changing technology.” Resistance to these initiatives was nevertheless a profoundly politicizing (and in some respects constitutive) event for many online communities, and for many “first generation” cyberspatial privacy advocates this resistance took on a curious but characteristic kind of anti-governmental ferocity that was deeper and more sweeping than comparable struggles for civil liberty or reform initiatives usually seem to inspire.
Attending a conference on cryptography sponsored by Apple Computer in 1996, Paulina Borsook observed with a certain perplexity what seemed like a sweeping anti-governmental fervor in many of its attendees: “Listening to presentations in one of the absolutely featureless auditoriums on Apple’s… corporate campus, I felt not so different from when I used to hang around the fringes of the Weatherpeople in the late ’60s.” She goes on to amplify that, “[t]hirty years after the time when I used to listen in on discussions of what used to be called the student protest movement, I was observing the same kind of righteous rage, familiar to any watcher of techno-libertarians, at the [so-called] stupid and evil government.”
On the very first page of his book Code, Lawrence Lessig describes as its inaugural inspiration his own perplexed reaction to comparable attitudes and assumptions expressed in a couple of speeches he discovered in an online archive. They had been delivered originally three years before he had stumbled upon them himself, at the annual “Computers, Freedom, and Privacy” conference in 1996 (at roughly the same time that Denning was delivering over in Australia the address I mentioned a moment ago and Borsook was attending that Apple cryptography conference).
In one of the speeches, writes Lessig, the author spoke “about ‘ubiquitous law enforcement,’ made possible by ‘fine-grained distributed systems’; through computer chips linked by the Net to every part of social life.” This architecture was already under construction at the time, Lessig points out: the author was talking about nothing more than the Internet. “As this network of control became woven into every part of social life, it would be just a matter of time,” Lessig continues, summarizing the author’s argument, “before the government claimed its fair share of control. Each new generation of code would increase the power of government. The future would be a world of perfect regulation, and the architecture of distributed computing -– the Internet and its attachments –- would make that perfection possible.”
The author of this extraordinarily nervous anti-governmental tract? Vernor Vinge.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents