Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, May 12, 2006

Smart's "Laws on Technology"

Nato Welch recently called my attention to John Smart's "Laws on Technology" in the course of a discussion on the technoliberation list. I thought Smart's Laws were interesting and useful to a point, but I'll admit that I found their framing rather disturbing in some ways.

Let me walk through some of my perplexities in real time, and please take these as exploratory readings rather than settled beliefs. Here is Smart's First Law of Technology:

"1. Technology learns ten million times faster than you do."

First, a very quick initial point: I know that there is some lovely writing that plays around with this metaphor (for example, Brand's How Buildings Learn), but I actually don't think it is a mere quibble to insist that the technophiles really need to get much more careful about how they apply the metaphor of "learning" to artifacts. Artifactual responsiveness, adaptation, and change do not always deserve to be called "learning." There is something quite pernicious that begins to happen when one pretends to assume a perspective from which the categories of ethics and the categories of engineering become indistinguishable. More on this later (and let me say that on first seeing those two words -- that is, "technology learns" -- I already suspected that there would be "more on this later." By this I mean to point out that Smart's Laws look to me to be an instance of a technophilic genre with very familiar moves, and hence part of what I also mean to critique here through the discussion of Smart's piece is that wider genre). The "Law" continues:

"We all better get used to this fact of modern life."

Speaking of very familiar moves, here, already, is another one: The typical technophilic threat, a daydream clothed in the cadences of prophecy, the usual prelude to a reductionist model of technological development as a socially indifferent accumulation of useful inventions (usually imagined to be an arrow on a Cartesian grid rocketing inevitably and ever-more-rapidly up, up, up!), a model that will denigrate or simply airily decry as an irrelevance any mention of democratic politics. Look what comes next:

"Whether you measure it by communication (input, output) rates, computation (memory, processing) or replication rates, technological evolutionary development generally runs at least seven orders of magnitude (10 million times) faster than genetic systems with regard to important rates of computational change."

Notice that what was termed "learning" in the Law seems now to be described in terms of "evolutionary development" and "rates of computational change." I know to expect this sort of thing, but I do want to pause to say that there may be important differences afoot here that make a difference. When some technophiles go on to say what appear to be curiously counterintuitive things about "post-biological" intelligences you would do well to notice that the groundwork for the plausibility of such claims tends to be prepared for them by the apparently innocuous but in fact no less curiously counterintuitive things they say about biological intelligence in the first place (like pretending they know what intelligence consists of, like denying that so far it has always been definitively embodied, like daydreaming about the smooth function of technologies in general, like denying the extent to which what counts as development is articulated by social formations, etc.).

"We should expect great things from our technological progeny, relatively soon in the human story."

Suddenly artifacts are metaphorically humanized through their redescription as "progeny"? Suddenly we are meant to be assured that we should "expect great things" in store? This, even though it is unclear why we shouldn't expect unprecedentedly hideous things instead, given the disjunction between human learning and technodevelopmental change indifferent to that learning? And finally we get a gratuitous reassurance that this story is the "human story," after all? This, even when so much about the "Law" might seem to go against the grain and the scale of readily intelligible "human stories" in their present formation?

If anything, I fear Smart's "Second Law" amplifies these concerns for me:

"2. Humans are selective catalysts, not controllers, of technological evolutionary development."

Initially this seemed to me a sensible, even crucial insight: namely, that no individual or social positionality, however knowledgeable or influential, unilaterally or intentionally drives or controls technodevelopment. This seems to me true, and importantly so. For me, this is so because technodevelopment isn't a program; it is a world. And human worlds are defined by the irreducible plurality of their interdependent stakeholders, just as human actions undertaken in human worlds always have unintended consequences.

But, then I saw how Smart fleshed out the "Law" in his own terms:

"Technology's development is primarily directed by both latent universal information (hidden in the laws and boundary conditions of our unique universe) and it's own emerging learning capacity."

It seems to me there are no human beings in this picture at all. What I took to be a recognition of the radical openness of human worlds and human actions seems instead the conjuration of an endless, joyless, merciless mechanism. Technodevelopment may as well be a crystallization process in some mineral lattice caught by time-lapse photography. Brute technological capacity foams spontaneously outward to embrace what a no less brute Universe makes logically available to it.

Now, I am the last one to deny that what consensus science provisionally takes to be the laws of the universe offers the best descriptions on offer of the relevant boundary conditions on what is technologically possible, and, further, that the discovery of instrumentally and predictively powerful consensus scientific descriptions and of experimentally powerful engineering implementations is an important measure of technodevelopment.

But this level of description radically underdetermines the actual articulation of the trajectory of technodevelopment, which is a moment-by-moment track of compromise formations among the stakeholders to these developments, articulated by contingent social, cultural, moral, and architectural constraints quite as much as by abstract epistemological considerations. Technodevelopment has a human face even if it is not beholden to individual intent. This is crucial to understanding technodevelopment as social struggle.

Smart goes on to concede:

"Only secondarily is its development directed by human intent. We cannot stop the developmental progression of a wide range of technological advances, but we can massively delay and dampen the evolutionary path of harmful technological applications (e.g., nuclear weapons proliferation, CBW research, pesticides and pollutants, first generation nuclear power technology), while greatly accelerating and improving the balancing and beneficial technologies (e.g., communications, computation, automation, transparency, immune systems R&D), and phasing them in in ways that improve, rather than disrupt, human political, economic, and cultural systems. There lies the essence of our individual and social choice."

Although Smart's formulation kinda sorta reluctantly backdoors social struggle in through the evocation of the various scenes following those parenthetical "e.g.'s" at the end, the fact is that the larger argumentative scene that frames them here is an absolutely antipolitical one, a figure of change, development, and order as a spontaneous mineral crystallization followed by what looks like a consignment of human choice, in its essence, to the "choice" of whether or not we will passively ride the wave of this spontaneous crystallization construed as "progress" (a political category, after all) or frustrate it through meddling.

I mentioned before that the initially implausible but not exactly astonishing evacuation of embodiment and sociality at the heart of reductive characterizations of consciousness sets the scene for the subsequent, in fact quite astonishing, claims that many technophiliacs start making about artificial superintelligence when they really speak their minds. Just so, believe me, those who uncritically accept mechanistic or spontaneous metaphors and models of technodevelopment -- whatever their conscious or explicit political commitments may be -- seem to me to become especially susceptible to what would otherwise seem altogether absurd market libertarian hooks that happen to dangle within reach of their mouths, or seem at any rate to be curiously reduced to wriggling infantile helplessness in the face of such formulations despite their frank absurdity.

Then we get to Smart's "Third Law":

"3. The first generation of any technology is often dehumanizing. The second generation is generally indifferent to humanity. The third generation, with luck, becomes net humanizing."

I would argue that this "Law" is in fact an overgeneralization from the parochial conditions that prevail under a social formation (our own) in which technodevelopmental research and development is driven almost exclusively by multinational corporate-military organizations.

The abstraction of artifacts generated through these technodevelopmental forces corresponds to the abstract inhuman distance of these funding and distributive organizations from human majorities (you might think of it as an alienation thesis applied to disruptive technodevelopmental change).

But isn't it actually easy to remember, and easy to imagine, artifacts that respond quite immediately and directly to human needs on a human scale? (Granting that human needs and humane scales hardly need to be considered "natural," but must be seen to change historically from epoch to epoch, in large part in consequence of the vicissitudes of technodevelopmental transformations themselves.)

Smart continues:

"The consequences of this law are frequently self-evident."

The highly questionable and selective history he uses to illustrate this point suggests the contrary:

"But it's wide applicability is often forgotten, from civilization (first generation was the age of monarchy, slavery, and perpetual state warfare), to industrialization (first generation was the polluted, dehumanizing, child labor utilizing factory), to automobiles (first generation uses dirty fossil fuels, and originally had few safety features), to televisions (first generation are noninteractive, and separate us as often as they socialize us), to calculators (first generation cause us to lose even mental calculation skills even when we wish to retain them, as they have no educational software abilities), to computers (first generation are expensive and have terrible interfaces and are restricted to an educated technological elite), to the internet (first generation is buggy, primitive, hacker-infested, and far too anonymous), to cell phones (first generation increase motor vehicle accidents, requiring too much human attention)."

It is difficult to know where to begin. Look how the timescales and the generality deployed from example to example wildly diverge. Civilization as such is analogized to cell phones? Aren't we still in the "first generation" of industrialization and television according to the terms of his criteria? And doesn't it matter that any such monolithic construal of their histories hence not only fails to demonstrate the "humanization" the Law itself predicts but also proceeds at a level of generality at which many of the interesting vicissitudes that would preoccupy, say, serious scholars of either "industrialization" or "television" are treated as essentially undifferentiated? Does it matter that the way in which what is taken as the "first generation" of each of these developments is said to be dehumanizing radically differs from instance to instance? How is this a "Law" at all?

Smart concludes:

"I'm sure it is a constant challenge to our designers to think deeply and minimize the inevitable dehumanization that occurs with any new technological deployment. Three steps forward, two steps back, six steps forward, two steps back —- the eternal dance of accelerating change."

The challenge here is one to "designers." But what about private funders and public granters, regulators, educators, tinkerers, and the rest? To whom and to what are they beholden, then? Or are they, too, all "designers" in this formulation? Or do they even register as real in this conjuration of the technodevelopmental terrain? What work is it doing for Smart to assume in advance that any new technological deployment will inevitably be "dehumanizing" in this way? Just what must "we" settle for if this is so?

The recognition that emerging technologies always risk "dehumanization" in some (one hopes, better defined) sense is a powerful curb on uncritical technophilia, and hence no doubt very much to be cherished. But that word "inevitable" still trips a number of alarms in my mind.

[Update: As you will discover when you click the link to the text I am reading here (and certainly you should actually read his piece rather than relying on my own gloss of it here), Smart has modified his formulations to introduce some welcome caveats as well as to eliminate the references to "inevitability." This scarcely satisfies all my concerns with the rhetoric of the piece, but it seems to me this new version foregrounds what was interesting in his argument while backgrounding the moves that made it seem facile.]

Nevertheless, let me conclude by drawing a more positive moral from my engagement with this interesting piece by Smart:

Whenever technodevelopment is driven by the abstract antidemocratizing urgencies of parochial short-term corporate-military profit-making it may indeed be best to assume that technodevelopment will be more dehumanizing than need be. That is to say, it will impose undue costs on the most vulnerable, it will proceed with an eye to short-term gains over long-term consequences, it will displace public risks and damage onto the vulnerable and onto future generations, and it will exacerbate injustice and consolidate the privileges of elites by disproportionately distributing developmental benefits and profits to them and not to others. Even in the absence of what I believe to be the necessary political struggle to render technodevelopment radically more democratic, sustainable, and fair, this technodevelopmental process is, according to Smart, already sometimes ameliorated or "humanized" through retroactive piecemeal reforms, reappropriations, and refinements of disruptive technologies.

If this is true, then it makes sense -- and this is where Nato and I found ourselves in discussing Smart's piece -- to champion a2k (Access to Knowledge) and copyfight struggles: first, to widen, democratize, and so accelerate this more modest technodevelopmental humanization, and, second, to more radically undermine the abstract corporate-military technodevelopmental regime as such, and facilitate the peer-to-peer alternative that will consign corporate-militarism to the dustbin of history.

Those of us who take up such a politics will do well, however, always to bear in the forefront of our minds the extent to which the technodevelopmental forces we take up and struggle to shape to democratic, emancipatory, and progressive ends are social and political and cultural through and through, and that metaphors of "spontaneity" always speak stealthily with the voice of constituted authorities, even when we ourselves imagine we are mounting resistance against such authority.

Social struggle is not spontaneous. Social struggle is not natural. Social struggle is not inevitably progressive.

We must strive to make these things happen. And we are on our own. That is to say, we just have each other.

1 comment:

Anonymous said...

an exquisite post! you hit the nail on the head in ascribing to Smart the dubious distinction of technophiliac (a onetime acquaintance of mine).

sure, it's a genre game - i'd actually contend that the genre is not just potentially but actually pathological - now, not just possibly so in the future. highly exotericized mind forms (if you'll permit the buddhism) create such disembodied 'self-experience' as to be adequately regarded as dissociative. sufficiently empowered dissociative mind sets can realize only harm, usually as some vainglorious casting of narcissus himself. the untethered 'self' is quite discompassionately misanthropic, obedient only to its own fear-complexes and uncatharsized nefariousness - reductively absolutizing cognitive intelligence as if it were some glory road to godhead itself. the true religiosity as irony of this genre is not lost on me.

ken wilber aptly described the cultural project before us as the "rational reconstruction of the transrational." this, an alternative to the current rational overconstruction of the rational - the personal reification of self, i.e., narcissus, in order that we might never feel death, bloated as our corpses may have already become. these technophiliacs are, if nothing else, prerationalists wisely masquerading as rationalists, presumptively placing themselves atop the heap of developmental possibility, blinded, alright, by the science they think they alone own!
-Farsam
samfar@gmail.com