Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, November 25, 2012

Robot Cultists Still in the Woods Without A Compass

The New Yorker's Gary Marcus skips effortlessly through most of the steps in the utterly damning critique of the peddlers of Artificial Imbecillence (follow the link for the still pithy, unexpurgated version), but I amplify certain points -- in italics -- as I read along:
I.B.M. has just announced the world’s grandest simulation of a brain, all running on a collection of ninety-six of the world’s fastest computers. The project is code-named Compass, and its initial goal is to simulate the brain of the macaque monkey (commonly used in laboratory studies of neuroscience). In sheer scale, it’s far more ambitious than anything previously attempted, and it actually has almost ten times as many neurons as a human brain…
Although they are not actually neurons at all. This matters enormously. Literally.
The premise behind [the] approach is that... the best way to build smart machines is to build computers that work more like brains. Of course, brains aren’t better than machines at every type of thinking (no rational person would build a calculator by emulating the brain, for instance, when ordinary silicon is far more accurate), but we are still better than machines at many important tasks, including common sense, understanding natural language, and interpreting complex images. Whereas traditional computers largely work in serial (one step after another), neuromorphic systems work in parallel, and draw their inspiration as much as possible from the human brain…
Mind you, this is only according to our current, probably decisively inadequate, understanding of the human brain, and only so long as we are pretending things that aren't really alike really are (electrochemical dispositions in organismic brains, say, and wiring in electronic devices).
Ray Kurzweil, for instance… has, quite literally, bet on the neuromorphic engineers, wagering twenty thousand dollars that machines could pass the Turing Test by 2029 by using simulations built on detailed brain data (that he anticipates will be collected by nanobots)….
A multi-millionaire pop-tech circus barker hawking his latest futurological door-stop bets twenty thousand bucks he might find under a sofa cushion that imaginary computer super-intelligence vouchsafed by his prior assumption of the disanalogy of brains as computers (disanalogous because brains aren't computers) and depending further on the arrival of imaginary nanobots likely vouchsafed by the usual futurological assumption of the disanalogy of reliably programmable self-replicating nanobots and biological cells (disanalogous because cells can't do anything like what nano-cornucopiasts want nanobots to do)...? Forgive me if I refrain from applauding the audacity of the gesture.
[W]e still know too little about how individual neurons work to know how to put them together into viable networks.
Although we do know enough to notice that these aren't actually neurons.
For more than twenty-five years, scientists have known the exact wiring diagram of the three hundred and two neurons in the C. elegans roundworm, but in at least half a dozen attempts nobody has yet succeeded in building a computer simulation that can accurately capture the complexities of the simple worm’s nervous system…
Quite so, but notice that we are now describing the worm's nervous system as "wiring." Notice, too, that we are speaking in terms of intelligence "capture" through "simulation." Actually think what is implied by this metaphor: does a mirror capture the visage it reflects, does a photograph capture the soul of the one it depicts? Such rhetorical capitulation -- the figurative reframing of organismic intelligence in non-biological terms, the re-smuggling of dualism back into a presumably materialist story of consciousness through a figurative "migration" of intelligence via simulation -- actually fuels the discourse of artificial intelligence even as the resulting program serially fails (as again here), since the inevitable response to each failure is to amplify, in preparation for the next failure, the very terms these metaphors have already orchestrated.

Not only is our scientific understanding of intelligence more modest than the peddlers of artificial intelligence insist (rendering them artificially imbecillent), but our supple, rich, multidimensional everyday understanding of intelligence in actual human and historical life is brutalized through our concession of the applicability of the term to the awkward impoverished puppets the peddlers of artificial intelligence produce (rendering us all artificially imbecillent).
Until we have a deeper understanding of the brain, giant arrays of idealized neurons will tell us less than we might have hoped. Simply simulating individual neurons without knowing more about how the brain works at the circuit level is like throwing Legos in a pile and hoping that they create a castle; what we really need are directions for creating the castle, but this can only come from psychologists and neuroscientists working together closely to try to understand the kinds of circuits out of which minds are made…
Again, as far as it goes, quite so. And so again, why declare the brain has a "circuit level" at all, why declare the mind is "made" (by whom?) "of circuits"? Why encourage these people?
Moore’s Law [is] the idea that computers are rapidly increasing in power, doubling every eighteen to twenty-four months. I.B.M.’s latest success is a testament to that law; even ten years ago, a machine of Compass’s scope was almost inconceivable. But what’s not doubling every eighteen to twenty-four months is our understanding of how the brain actually works, of the computations and circuits [grrr!] that are [sic] underlie neural function. In debates about when artificial intelligence will come, many writers emphasize how cheap computation has become. But what I.B.M. shows is that you can have all the processing power in the world, but until you know how to put it all together, you still won’t have anything nearly as smart as the human brain.
Whenever I hear Moore's Law mentioned I feel it is my duty to supplement it with Lanier's less well-known because less-consoling corollary to Moore's Law: "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources."
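As a quick back-of-the-envelope sketch (mine, not Marcus's) of the compounding his "even ten years ago" remark relies on, here is the arithmetic implied by a doubling every eighteen to twenty-four months over a ten-year span:

```python
# Illustrative only: the growth factor implied by Moore's Law as Marcus
# states it -- processing power doubling every 18 to 24 months.
def growth_factor(years, months_per_doubling):
    """Factor by which capacity multiplies after `years` of steady doubling."""
    return 2 ** (years * 12 / months_per_doubling)

for months in (18, 24):
    factor = growth_factor(10, months)
    print(f"doubling every {months} months -> roughly {factor:.0f}x in 10 years")
```

Roughly a 32-fold to 100-fold increase in a decade, which is why a machine of Compass's scope looks inconceivable ten years out -- and why, as Lanier's corollary warns, none of that compounding applies to our understanding of the brain.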

I mostly approve of Marcus's elegant critique of Compass, but I must conclude with complaints. Setting aside the recourse once more to metaphors (circuits, computation, capture) that invigorate the ignorant projects he otherwise sensibly disdains, I want to say that, conceding his point that we may yet fundamentally misunderstand the phenomenon of intelligence -- even granting intelligence is material and not somehow supernatural, as I definitely do assume -- the proper debate to be having may well be not when but if "artificial intelligence" will come. Also, conceding his many points about its limits and failures, the proper thing to say about Compass is not that it isn't nearly as smart as a human brain but that "smart" is not a word that properly applies to such a device at all, else we risk losing the sense of what is to be nourished and cherished in the unique indispensable smartness of humans and other animals who share and make the sensible world together with us here and now.