Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Friday, April 22, 2005
MX. e2e
The closest thing one can find to a core that would define the “nature” of the internet in any kind of abiding way is a principle which seems to function more than anything else as a formal institutionalization of the ongoing refusal of the internet to have a core or a nature at all. Proposed in a paper called “End-to-End Arguments in System Design,” by Jerome H. Saltzer, David Reed, and David Clark, the End-to-End Principle (e2e) states, modestly enough, that “functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level.”
Machines perform any number of functions that may be facilitated by networking them. The Principle distinguishes machines at the “ends” of a network from those within it that constitute the links through which information traverses the network to those ends. When you access the internet, say, the machine you use is a network end from your perspective. The machines on which the information you access via the network resides are ends as well. The end-to-end principle proposes that optimizing a network to facilitate any particular function will likely frustrate the capacity of the network to accommodate as many functions as possible. This implies that a truly flexible, robust, stable network should instead be one that is as simple and neutral as possible, with intelligence, complexity, and functionality pushed out to the network’s ends, into its applications.
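To make the abstraction concrete, here is a minimal sketch in Python of the division of labor the principle recommends; the names and the toy lossy channel are hypothetical illustrations, not anything drawn from Saltzer, Reed, and Clark. The “network” merely forwards bytes and may drop them, while checksumming and retransmission, the intelligence, live entirely in the endpoints:

```python
# Toy sketch of the end-to-end division of labor (illustrative only).
# The "network" is deliberately dumb: it forwards bytes without inspecting
# them and sometimes loses them. Reliability lives at the ends.

import random
import zlib

def dumb_network(packet: bytes) -> bytes | None:
    """Forward a packet without inspecting it; occasionally drop it."""
    return None if random.random() < 0.3 else packet

def send_reliably(message: bytes, max_tries: int = 20) -> bytes:
    """An end host adds a checksum and retransmits until an intact copy
    arrives; sender and receiver are conflated here for brevity."""
    checksum = zlib.crc32(message).to_bytes(4, "big")
    for _ in range(max_tries):
        received = dumb_network(checksum + message)
        if received is not None:
            expected, payload = received[:4], received[4:]
            # The receiving end verifies integrity itself; the network
            # made no promises about the bytes it carried.
            if zlib.crc32(payload).to_bytes(4, "big") == expected:
                return payload
    raise RuntimeError("gave up after repeated losses")

print(send_reliably(b"hello, end-to-end world"))
```

This is, in caricature, the bargain TCP strikes with the unreliable IP layer beneath it: the core stays simple, and the guarantees are manufactured at the edges.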
A rather philosophically minded Memo published in 1996 under the auspices of the Network Working Group, RFC (Request for Comments) 1958, entitled “Architectural Principles of the Internet,” discusses the end-to-end principle in a way that raises the stakes considerably. The key paragraph of RFC 1958, under a heading that baldly asks the question “Is there an Internet Architecture?”, offers in answer this pithy formulation: “Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years…. However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network.” Quite apart from canonizing e2e as the heart of the internet, RFC 1958 does so in the context of a conjuration of “the community,” its “tradition,” and its “goal” or ideal (“connectivity”), all considered as inextricable from this question of architecture. What began as a rather straightforward solution to the problem of engineering a more flexible network had come to be freighted with moral and political significance.
When Tim Berners-Lee, among other researchers at CERN (the Conseil Européen pour la Recherche Nucléaire), provided a way for documents created according to incompatible formats and standards to link to one another and to be accessed and displayed on any number of machines, and again in an open-ended number of different formats, he was acting as a member of the community celebrated in RFC 1958, driven by the same vision of the internet and what it is good for that the Memo likewise takes for granted. HTTP (the HyperText Transfer Protocol) and HTML (the HyperText Markup Language), the standards that created the World Wide Web around 1990, run atop the TCP/IP (Transmission Control Protocol/Internet Protocol) suite that defines the Internet, and they reference, re-enact, and thereby consolidate its commitment to e2e, both as a sound principle of network design and, more emphatically, as an expression of the community ethos that subsequently arose out of the experiences TCP/IP facilitated.
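That layering is easy to exhibit directly. In the sketch below (Python again, standard library only), an ordinary TCP connection carries an HTTP request as plain structured text; nothing in the network beneath needs to know that the bytes constitute a Web request, which is precisely the neutrality e2e prescribes:

```python
# An HTTP request is just text an application sends over a TCP/IP
# connection; the layers below carry it without interpreting it.

import socket

def http_get(host: str, path: str = "/") -> bytes:
    """Speak HTTP/1.1 directly over a plain TCP socket."""
    with socket.create_connection((host, 80)) as sock:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))
        chunks = []
        # The server closes the connection when done ("Connection: close"),
        # so recv() eventually returns empty bytes and the loop ends.
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

print(http_get("example.com")[:200])  # status line and the first headers
```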
The same could be said of the decision Philip Zimmermann made to create PGP (or “Pretty Good Privacy”), a public-key encryption software package, and then to publish it online in 1991 and make it available to everyone for free. These actions made Zimmermann the target of an intense and celebrated criminal investigation, highlighting the absurd policy of the United States Government of treating cryptographic software as a munition and imposing export restrictions on its circulation. Despite the investigation, which was widely condemned as persecutory, PGP became by far the most widely used e-mail encryption program in the world. The government finally dropped its case against Zimmermann in 1996, the very year, remember, that RFC 1958 was published.
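PGP’s technical core is a hybrid scheme: a freshly generated symmetric session key encrypts the message itself, and only that small key is encrypted under the recipient’s public key. The sketch below illustrates that general design, not PGP’s actual formats, using the third-party Python cryptography package (an assumption of this example, not anything Zimmermann shipped):

```python
# Hybrid public-key encryption in the style PGP popularized (illustrative).
# Requires the third-party package: pip install cryptography

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient's keypair; in PGP the public half is what gets published.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: a symmetric session key for the bulk message, RSA for the key.
session_key = Fernet.generate_key()
encrypted_message = Fernet(session_key).encrypt(b"pretty good privacy")
encrypted_session_key = public_key.encrypt(session_key, oaep)

# Recipient: recover the session key, then the message.
recovered_key = private_key.decrypt(encrypted_session_key, oaep)
print(Fernet(recovered_key).decrypt(encrypted_message))
```

The design choice is the point: anyone holding the published public key can encrypt to its owner without any prior shared secret, which is what made a freely circulated package like PGP usable between strangers.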
There is no question that these technical decisions have functioned in differing measures to reinforce the ongoing association of the internet, in its indefinitely many incarnations and applications, with the end-to-end principle and hence with a broad commitment to a kind of openness and experimentalism. It is likewise true that however widely and deeply shared these values may be, they are governed by specifications of technical architecture quite as much as or more than they are invigorated by the participation of the community that values them. As Lessig puts the point: “The design the Internet has now need not be its design tomorrow. Or more precisely, any design it has just now can be supplemented with other controls or other technology…. The code that defines the network at one time need not be the code that defines it later on. And as that code changes, the values the network protects will change as well.”
As we have seen, the internet, such as it is, is a network of networks. Its provision and implementation, and participation in it, are driven by contending stakes and diverse stakeholders in multiple locations. There are vast and ramifying pressures that could drive the transformation of the protocols that presently define the various layers of the internet’s architectures: pressures to maximize corporate profits, pressures to enhance national security, pressures to regulate all sorts of conduct online deemed immoral by passionate advocates.
I have suggested that privacy is a concept deployed in both negative and positive formulations. In this it has both a structural similarity to and a certain affinity with the concept of liberty, with which it also shares a comparable primacy in the liberal imaginary. Recall that “negative” privacy names the demand for freedom from interference by overbearing authorities, while “positive” privacy affirms particular culturally contingent conceptions of individual dignity or integrity. The special quandary of privacy discourse is that, as with liberty, negative conceptions of privacy are figured as neutral and are imagined to enlist a kind of universal assent and hence foundational force, when in fact they importantly rely for their intelligibility and power on the more contingent values and assumptions of positive conceptions, which tend to be disdained or even disavowed altogether even as the negative formulations they buttress are mobilized and valorized.
When the end-to-end principle is made to bear more than the weight of pragmatic calculations about ways to make networks “neutral” enough to flexibly accommodate innovation, when the principle is made to signal as well a host of moral and cultural commitments, and especially a commitment to openness figured as the essence of freedom, then the principle undergoes a curious transubstantiation through which it takes on some of the contours of the dilemma that already shapes these discourses of privacy and liberty. It is no surprise at all that self-styled libertarian privacy advocates would find the architectural implementation of “neutrality” via e2e a deeply compelling notion at a number of levels. And neither is it particularly surprising to discern in so much of their enthusiasm the ferocious disavowal of their vulnerability to contingent political processes, their imbrication within contending cultures, and their deep dependencies on the efforts and attainments of others.
Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents