In a world where every step we take is increasingly mediated by digital networks and devices, we are going to increasingly find ourselves governed by automated software regimes. Call it “algorithmic regulation” or “embedded governance” or “automated law enforcement,” these built-in systems are sure to become ubiquitous. They will be watching for stock market fraud and issuing speeding tickets. They will doubtless be quicker to act, more all-seeing and less forgiving than the human-populated bureaucracies that preceded them. Advocates of greater bureaucratic efficiency may well be happier in an algorithmically regulated future. But Adam Manley’s example raises a serious question that has a pretty obvious answer. When the network automatically delivers its ruling, who will be better positioned to contest the inevitable miscarriages of justice sure to follow? The little guy, or the well-capitalized corporation?

Leonard's qualification seems odd: "Advocates of greater bureaucratic efficiency may well be happier in an algorithmically regulated future." Why would the inequitable, non-responsive outcomes of automated governance be properly described as "greater bureaucratic efficiency" in the first place? In their non-responsiveness and inequity, algorithmic governance is bad governance, and it would seem something of a backhanded compliment to say: It's terrible government, but it is efficiently enforced. It evokes the old Woody Allen bit: They have such bad food here. Oh, I know, and such small portions!
It seems to me that it is the phony appearance of government efficiency, enabled precisely by its non-responsiveness, that peddlers of the "efficiency" of algorithmic government are really championing. This seems all the more obvious when Leonard quotes meme-hustler Tim O'Reilly promising that elite technocrats will make "embedded" governance ever more "sophisticated." If what is attractive about algorithmic governance is an "embeddedness" that renders it invisible, ubiquitous, and stubbornly unresponsive to majorities, one really has to ask: "Sophisticated" at what? "Sophisticated" for whom?
Although Leonard makes the general point that all systems fail, and that the plutocrats will be the only ones with the resources to demand accountability in the face of such failures of non-responsive algorithmic governance, his examples suggest a darker reality: plutocrats are also well-positioned to exploit this difference opportunistically, systematically abusing non-responsive algorithmic governance to their benefit at the expense of the poor, the powerless, and the precarious.
Look for another digital boom for the plutocratic pigs at the trough, this time in digitally-facilitated, robotically-repeated harassment and frivolous automated lawsuits that everyday folks lack the time or connections to fend off, even if the law is objectively on their side. We are still picking up the pieces from previous and ongoing "booms" of this kind, in which digitally-facilitated fraudulent banksters bundle bad assets into phony prime commodities, or in which digitally-facilitated locust-financiers embezzle and skim real value via global high-speed networked split-second pseudo-investment transactions, and so on.
I have no doubt in the world that techno-utopian plutocrats and their futurological shills are thrilled at such a proliferation of new opportunities for gaming systems for parochial profit-taking. They tend to rationalize this kind of cheating and looting as the demonstration, after all, of their superior intelligence and ability and risk-taking, proving their fitness for aristocratic rule. What greater joy for plutocratic looters, especially when the cries of pain and injustice provoked by their bad behavior are rendered as good as silent by layers of algorithmic red-robot-tape. Hey, you know who made the trains run on time? Such efficiency!
Among the techno-blatherers there are of course more than enough peddlers of anti-democratizing consumer complacency and enablers of anti-democratizing plutocratic parasitism to appreciate the facilitation of each of these outcomes by algorithmic governance, but I think we are discerning the ideology of artificial intelligence in play here as well. Jaron Lanier has warned that a faith in the possibility of artificial intelligence -- and, indeed, a faith that software writers and vendors today are pilgrims of a sort, coding along the road to artificial superintelligence -- has led to a prevalence of crappy software. We are given user-unfriendly, objectively wrong autocorrect spell-checkers, profoundly misleading interpretations of texts in the form of word clouds, superficial assessments of credit-worthiness via searches of impoverished records of personal buying histories, and on and on and on. Invested in the dream of the imminent arrival of superintelligent AI, ambitious coders overlook the crappiness and user-unfriendliness of their would-be intelligent software and treat it instead as an avatar or embryo of the Robot God to come. Guarding, guiding, glorifying their would-be god, they invest code with a standing, authority, and reliability equal to or even greater than that of the actually intelligent users who rummage through the ruins.
In the background of the technocratic, in fact techno-utopian, championing of the "efficiency of algorithmic governance" is the fantasy that regulatory software agents will not in fact be unresponsive to citizens and users -- mind you, this always really means "the citizens and users who count, among whom I certainly will always be one" -- the fantasy that Big Data will overthrow the Big Brother of democratically responsive governance and install, step by step, the loving grace of the Big Daddy surely on the way as they code the superintelligent robot god of their heart's desire. In practice today, and as an ideal for tomorrow, one should never mistake for democratic allies the partisans of artificial intelligence who eagerly exploit software for parochial profit-taking today while pining for the dictatorship of "friendly superintelligent AI" in The Future.