Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, April 22, 2005

MXIII. Privacy Under Control

It is interesting to notice that while Hughes initially figures his project as relatively modest and defensive, within the space of only a few pages his ambitions have taken on the superlative, transcendentalizing cadences into which technophilia seems almost inevitably to be drawn.

At the outset, Hughes evokes the scene of a privacy the value of which is generally affirmed and widely enjoyed. “People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers.” But this familiar and bucolic world of privacy is newly threatened by the unprecedented technological empowerment of unscrupulous authorities. The relative anonymity of cash purchasing – “[w]hen I purchase a magazine at a store and hand cash to the clerk, there is no need to know who I am” – has been displaced by electronically mediated purchasing in which “my identity is revealed by the underlying mechanism of the transaction.” The modest annoyances of gossip are exacerbated by digital networked communication into deeper threats: “Information is fleeter of foot, has more eyes, knows more, and understands less than Rumor.” Given these new threats, Hughes proposes that “[w]e must defend our own privacy if we expect to have any…”

But in the space of just two sentences Hughes’s tone changes extraordinarily. Rather than representing an unprecedented threat to the general enjoyment of modest privacies, technology becomes instead an engine through which Hughes imagines an unprecedented expansion and augmentation of privacy, and of the private agentic selves that would exercise it more perfectly: “The technologies of the past did not allow for strong privacy, but electronic technologies do.” It is finally this “strong” privacy that Hughes is championing in his essay, a privacy and a private self that is not so much tremulous as tremendous, shored up and rendered invulnerable by encryption, rendered more perfectly autonomous by anonymity, a sovereign self enthroned at the scene of decision.

The initial plausibility of Hughes’s inaugural claim that “[p]rivacy is necessary for an open society in an electronic age” has itself come to look less sure-footed by now. In just what does the “openness” consist in Hughes’s conception of an “open society,” and how would such a society facilitate privacy as he understands it? Has Hughes mistaken for an “open” society simply one in which all the intentions of private actors that are registered in contractual, deliberate, explicitly discretionary terms are fully respected? What manner of openness, after all, readily reconciles with his desire for such strong personal control over the terms of public information and social interaction?

Hughes insists early on that “freedom of speech, even more than privacy, is fundamental to an open society,” and so “[w]e seek not to restrict any speech at all.” On the contrary, the cypherpunks would radically impoverish spontaneous sociality, restricting instead, through ubiquitous encryption, the circumstances in which anyone could have occasion to speak in untoward ways at all! Or, as Hughes points out, “to reveal one’s identity with assurance when the default is anonymity requires the cryptographic signature” – which is just to say that under the hypothesized state of crypto-anarchy it would be literally impossible to definitively appear in public at all except by deliberate intention and in terms that are explicitly under one’s control. But just how “open,” finally, would we call a society that actually managed to so perfectly privatize the terms of the public disclosure of selves?
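To make concrete the mechanism at issue, consider a minimal sketch, in Python, of the disclosure logic Hughes describes: anonymity as the default, with assured, identifiable appearance available only through the deliberate act of signing. The choice of library (the third-party cryptography package) and of algorithm (Ed25519) is my own illustrative assumption; the manifesto specifies neither.

```python
# A minimal sketch of the disclosure logic described above: anonymity is
# the default, and an assured identity appears only through the deliberate
# act of signing. Library and algorithm ("cryptography" package, Ed25519)
# are illustrative assumptions, not anything Hughes specifies.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A long-term keypair stands in for a persistent, publicly known identity.
identity_key = Ed25519PrivateKey.generate()
public_identity = identity_key.public_key()

message = b"I choose, this once, to speak in my own name."

# Unsigned speech carries nothing that binds it to a speaker: by default,
# there is simply no identity for a recipient to check.
anonymous_post = (message, None)

# "To reveal one's identity with assurance when the default is anonymity
# requires the cryptographic signature": a discretionary act, performed
# or withheld at will.
signed_post = (message, identity_key.sign(message))

def received_identity(post) -> str:
    """A recipient can attribute a post only when the sender chose to sign."""
    text, signature = post
    if signature is None:
        return "anonymous (the default holds)"
    try:
        public_identity.verify(signature, text)
        return "assured identity (the sender chose to appear)"
    except InvalidSignature:
        return "forged or corrupted signature"

print(received_identity(anonymous_post))
print(received_identity(signed_post))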

In making his case for the promotion of privacy, Hughes repeatedly has recourse to this register of the discretionary and to the primacy of explicit intentions: “Privacy is the power to selectively reveal oneself to the world.” … “[W]e must ensure that each party to a transaction can have knowledge only of what is directly necessary for that transaction.” … “An anonymous system empowers individuals to reveal their identity when desired and only when desired; this is the essence of privacy.” … “If I say something, I want it heard only by those for whom I intend it.”

Certainly, at least part of an informational construal of privacy amounts to an ongoing demand that one’s intentions be respected in matters of the public circulation of at least some kinds of personal information. But I submit that the participants in social transactions are hardly always in a position to know for themselves the terms of disclosure that are “directly necessary” to any given transaction. I submit that because our actions have unintended and unforeseeable consequences, for both good and ill, and because self-knowledge is imperfect and incomplete, to say the least, one is never in fact in a position to know fully what one intends in the matter of disclosing oneself in public; one never knows completely by whom one intends one’s descriptions to be heard; one cannot always know when another’s speech will be (or in fact was) welcome or unwanted, or when the susceptibility to description otherwise than one intends will be far from threatening, but rather emancipatory, redemptive, or deeply pleasurable.

For Hughes privacy is discretionary, a kind of deliberate act, and just as it is the case that “[t]o encrypt is to indicate the desire for privacy” – indeed, “encryption is [the] fundamentally… private act” – Hughes also intriguingly suggests that “to encrypt with weak cryptography is to indicate not too much desire for privacy.” To the extent that technological development is ongoing, and hence that strong technologies are constantly rendered weaker by the development of more powerful technologies over time, it is interesting that Hughes seems to invite the implication here that the intelligible indication of a desire for privacy might therefore require the interminable maintenance of the most sophisticated and powerful technologies on offer, since obsolescence might be taken as a signal that one’s desire for privacy is on the wane. Imagine a hacker who, in uncovering a vulnerability in a hitherto secure system, discerns thereby the “intention” of her victim to be exposed to attack in the first place. The solitary, controlled and controlling, superlatively prostheticized cypherpunk would proceed, then, from what might seem a somewhat hyperbolic inaugural anxiety about a threatening susceptibility to indiscriminate public disclosure to an arms race of interminable augmentation, in which any relaxation might be construed as the disclosure of a literal invitation to devastation.
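The obsolescence dynamic invoked here can be given a rough arithmetic form. The sketch below assumes, purely for illustration, that brute-force attack capability doubles every eighteen months, a Moore’s-law-style figure that appears nowhere in Hughes’s essay; under that assumption each doubling effectively strips one bit from an unchanged key.

```python
# A rough arithmetic sketch of cryptographic obsolescence: if attacker
# capability doubles every eighteen months (an illustrative assumption),
# each doubling effectively removes one bit of a key's strength.

def effective_bits(key_bits: int, years_elapsed: float,
                   doubling_months: float = 18.0) -> float:
    """Key strength measured against an attacker whose search speed
    has kept doubling since the key length was chosen."""
    doublings = (years_elapsed * 12.0) / doubling_months
    return key_bits - doublings

# DES's 56-bit key, fielded in the late 1970s, measured two decades on:
print(effective_bits(56, 20))    # ~42.7 bits: within reach of brute force
# A 128-bit key over the same interval remains far out of reach:
print(effective_bits(128, 20))   # ~114.7 bits
```

On Hughes’s own reading of weak cryptography as a signal, then, the key-holder who merely stands still does not simply become less safe; she comes, without lifting a finger, to “indicate not too much desire for privacy.”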

In The Human Condition, Hannah Arendt wrote that “[t]o live an entirely private life means above all… to be deprived of the reality that comes from being seen and heard by others, to be deprived of an ‘objective’ relationship with them that comes from being related to and separated from them through the intermediary of a common world of things.” Hughes and the rest of the cypherpunks would no doubt protest that they do not mean to withdraw entirely into private seclusion, but to gain through encryption techniques a renewed measure of control over the terms on which they appear in public. But I think that the terms of the control they seek over social interactions would altogether eliminate their spontaneity (the price of which, after all, as the cypherpunks themselves warn again and again, is to take on a real and abiding vulnerability to others), and so substitute, for the “objectivity” of an Arendtian improvisatory, collaborative negotiation of a world in common, the interminable expressions of canned subjectivities, of atoms in the void.

Go to Next Section of Pancryptics
Go to Pancryptics Table of Contents

3 comments:

David said...

The effect of instituting a cryptographic ideal in cyber-communication would definitely involve less spontaneity (if not reduce the measure of improvisation to negligible levels). In fact, the necessary effects of conventional net-communication seem to include a significant loss of spontaneity to begin with. Having to form sentences and type them, erasing the gestural and more directly physical ways of conveying information (facial expressions, etc.)--these are all anti-improvisational buffers resulting in semantic objectivities. While the idea of having to consciously decide and deliberatively act in accordance with identity signatures when desiring to 'show yourself' would be frustrating and clearly indicative of a somewhat 'closed' social system, I'm curious to know how you see Hughes' plan as much different from providing more choice for communicators (especially since imagining a day when encryption software could be turned on and off, like pop-up blockers are now, doesn't seem quixotic)?

Dale Carrico said...

I actually think there is quite a lot to like in many of the encryption tools that preoccupy the Cypherpunks, and there are contexts in which I sound a bit like a Cypherpunk myself on privacy questions, or at any rate a staunch civil libertarian.

What I find most intriguing in the case these particular Cypherpunks make in these canonical early texts of theirs is what they illuminate about the curious assumptions these advocates hold about what individual agency consists of, what political life is good for, what public life amounts to. Where I am going with this should become clearer when I put these viewpoints in conversation with advocates for "transparency" like David Brin, and especially when I go on to talk about the odd things people seem to want from virtual reality.

As for spontaneity, a couple very quick points. I think that it is more the way in which Cypherpunks would limit the conditions in which action is received, than the terms in which it is released, that impoverishes the idea of freedom they seem to champion. And as for choices: choice is always less than freedom; indeed choice is a domestication of freedom, a selection of options from a menu rather than the introduction of something new into a shared world of peers.

That's glib and quick, but hopefully more clarifying answers will be forthcoming in the sections of the diss to come... Thanks for reading and for the very welcome comments.

David said...

Dale,

yes. these comments make things much more perspicuous for me. particularly, thinking about the changes in received information allows me to see the lack of spontaneity as necessarily restrictive. i'm curious to see how your synthesis of the ideas discussed above works out in line and symbol.

i'll continue to read and post my responses as comments. as someone who is thinking and working in related spaces, I enjoy your writing and theorizing considerably.

thank you for sharing your ideas, and for your attention,

david
