Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, May 21, 2013

AI Ideologues Cheer "Helpful" Watson

Artificial intelligence does not exist, but the ideologues of artificial intelligence continue to force the rest of the world to pay for their refusal to accept this obvious fact. As I never tire of repeating, "Computer science in its theological guise aims less at the ultimate creation of artificial intelligence than at the ubiquitous imposition of artificial imbecillence."

Writes Manoj Saxena, General Manager of IBM's Watson Solutions (what really are the problems to which Watson provides "solutions" -- believe me, you don't want to know):
Social and technological shifts are driving rapid change, altering ways in which individuals interact with one another, learn, and attend to their personal and business needs. These shifts offer the potential to strengthen the relationships between companies and their customers -- enabling more individual and directed communication and allowing organizations to cater to individual needs.
Notice that we begin with the usual facile futurological frame of "rapid" "disruptive" "accelerating" change. This article of futurological faith is now common sense; apparently it need not be argued for or even elaborated anymore. Everybody already knows that we are on a rocketship to the unimaginable sooper-footure, whatever the impediments to and stasis of actual worldly infrastructural affordances, whatever the dearth of actual paradigm-shattering research breakthroughs once we get past the press release hype, whatever the stratification in the distributions to their actual stakeholders of the actual costs, risks, and benefits of technoscientific change, whatever the real world impacts of catastrophic climate change, weapons proliferation, ramifying pandemic vectors, wealth-concentration, plummeting wellness and reported satisfaction indices, whatever, whatever, whatever.

Notice also that quite apart from the flabbergasting falseness of the accelerationalism frame, once accepted this frame creates the rhetorical opportunity to pretend any predictive claim, no matter its plausibility given the actually existing state of knowledge or available resources or political priorities, is just as plausible as any other -- after all, who knows where disruptive and accelerating change may lead from what is possible now? Once actual knowledge, resources, and political priorities are sublimed away as considerations for "foresight" deliberation what remain are the highly energetic and entertaining promptings of appetite, panic, resentment, denial. (I leave to the side the neoliberal companion ideology to futurological ideology in play here -- although this pairing is also usual -- here blithely recommending emancipatory "relationships" between consumers and corporations through brand-loyalty and logo-identification. Another thing I never tire of repeating: "Modern advertising began a century ago by deceiving us that there were substantial differences between mass-produced consumer goods according to the brands they bear, and has succeeded by now, a century later, in deceiving us that there are substantial differences between mass-produced consumers according to the brands we buy.") But let us return to our can-do will-do screw-you futurologist:

Yet, for many, today’s online customer experiences lack personalization, timeliness and trust. But what if companies could offer their customers the kind of personalized and knowledgeable assistance when they’re online or on the phone that people have come to expect from top-flight customer service delivered in person? We believe that a new generation of cognitive systems will do just that. They will provide individuals with intelligent personal digital assistants that interact with them, answer their questions, and help them make complex purchasing decisions or solve problems they’re having with products like cell phones, computers and consumer electronics devices.
The "yet" with which this formulation begins is very important, and we should dwell on it, because it marks a blink-brief and begrudging admission that online experiences in the actual world suck, after which you will notice that Saxena simply barrels on through to a "what if" that doesn't actually name any actually existing state of affairs, but which he proceeds to pretend is a more palpable reality than the actual reality testified to under that now-disavowed "yet."

Saxena's faith in a futurological world not seen is indeed just that, faith -- attested to by the literally faithful pronouncement that follows: "We believe that a new generation of cognitive systems will do just that." You would be wrong to assume that the "we" conjured here is only a reference to Saxena's colleagues in the firm, and not a broader conjuration of the community of AI-ideologues and futurologists who drive this now-prevalent corporate-military discourse. The faithly substance of that belief must remain in the forefront of the reader's attention as a litany of "predictions" about not-yet and yet "features" is then trotted out behind the advertorial "will" -- "intelligent personal digital assistants that interact with them, answer their questions, and help them make complex purchasing decisions or solve problems they’re having" and so on.

Again, it is crucial to grasp that computer programs are not "intelligent," they are not "individual," they are not "personal," they are not "knowledgeable," they do not "advise," they do not "solve problems" (though of course we may solve and create problems through our uses and misuses of them). Every claim Saxena makes premised on affirmations to the contrary is an absolute deception, and possibly also a self-deception. His just-so story continues on:
A first step in this journey happens this week, when IBM introduces the Watson Engagement Advisor. The technology underlying this service is based on IBM Watson, the computer that beat former grand-champions on the TV quiz show Jeopardy!. Our research and development staff has made Watson 75 percent smaller, 25 percent faster, and have been working hard to improve Watson’s ability to answer consumer-oriented questions. For the first time, with the engagement advisor, we’re bringing Watson to the masses.
I note in passing that the "first step in this journey" formulation is once again a completely faithly utterance, and we are being asked in fact to substitute for the contemplation of the actual qualities of the program in question and the actual problems its introduction will obviously bring for actual consumers, the contemplation instead of this program's appearance on the scene as a kind of Burning Bush, portending "The Future" of superintelligent Robot Gods ending history and solving all our problems for us... we are being diverted from the facts of Watson to the question of what Watson "represents" to AI-ideologues and other corporate-militarists who are bringing "The Future" to the masses, whether we like it or not.

Indeed, you may begin to grasp the actual stakes of this marketing appropriation of political language by comparing "the masses" to whom IBM is bringing their program to the "many" he grudgingly admitted early on have terrible, frustrating experiences online trying to access information or trying to solve their actual problems. Given the institutional sites in which Watson is presumably being introduced as the primary informational and problem-solving interface, I think the terrible actual experiences well on the way for a whole lot of precarious, mis-informed, advertisement-harassed, time-scarce consumers constitute a rich field of political stakes and substance being altogether ignored in all this profitable techno-wizbangery.

Speaking of techno-wizbangery, I cannot help but draw your attention to the fact that Watson did not "beat" a grand-champion on Jeopardy! Watson mediated a scam in which a team of programmers cheated against a grand-champion through recourse to a vast database the champion did not have access to. Neither will Watson give consumer advice in the face of their perplexities, but it will mediate the diversion of frustrated consumers down parochially profitable channels of attention and decision pre-selected by "service-providers" and the programmers working for them to achieve just these ends.
Consumers will be able to experience this new level of personalized service through the brands they already have relationships with -- their banks and investment advisors, their phone service providers, insurance companies, favorite stores and other trusted organizations. For instance, a bank might offer Watson directly to customers on Web sites and mobile devices to help give them insights regarding retirement and various types of savings instruments like 401K accounts.
Just imagine the "insightful" advertising to which these consumers are sure to be subjected as they seek answers to their urgent questions about healthcare coverage and retirement planning! What could possibly go wrong? --h/t "JimF"

2 comments:

erickingsley said...

"Neither will Watson give consumer advice in the face of their perplexities, but it will mediate the diversion of frustrated consumers down parochially profitable channels of attention and decision pre-selected by "service-providers" and the programmers working for them to achieve just these ends."

This. Exactly this is what people like Manoj Saxena mean when they say shit like "the potential to strengthen the relationships between companies and their customers". This kind of babble ALWAYS means more adverts and push-sells, just that and nothing more.

Your "Personal Digital Assistant" will "assist" you in buying more IBM [or whatever entity is paying] products. It will "help" you make decisions by pushing you to buy more stuff. Duh. All that other PR crap this guy spews is so much smoke.

jimf said...

And you thought autocorrect/autocomplete was bad!

Welcome to the Kafkaesque realms of dealing with a customer (dis)service artificial not-very-intelligence.

I hope somebody (**besides** IBM) has the foresight to record some of these. They're bound to be funny (though not, of course, for the human on the line at the time).