Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Tuesday, August 07, 2007
A Quick Comment on "Intelligence" and Politics
This is adapted from a Comment of mine from the curiously ongoing conversation arising out of a post a few days back, Singularitarianism Makes Your Brains Fall Out:
Intelligence seems to me to be a matter of forming, grasping, and applying abstractions in ways that facilitate our various ends (instrumental, moral, esthetic, ethical, political, and so on). These ends are irreducibly plural, arise in irreducibly plural contexts, and are immensely dynamic and importantly unpredictable.
This makes the business of intelligence incomparably more complex than the things that pass for "intelligence" in much discourse on the topic.
Confronted with discussions of intelligence, and especially in technocentric versions of such discussions (technocentric means "'technology'-focussed"), I have noticed that if I substitute for what are claimed to be abstract considerations of "intelligence" in these discussions what amount instead to concrete considerations of "class advantage" or "incumbent privilege" (and the rhetoric through which these latter considerations are best expressed), well, it is really extraordinary how much clearer and just how different such discussions often suddenly become for me.
All this is certainly true where talk turns tediously to self-declared "geniuses" and gurus in marginal sub(cult)ures "charismatically" demanding attention, devotion, sometimes outright obedience and, usually, cash in exchange for their variously salvational efforts in the face of some Superlative technodevelopmental prediction or other (cybernetic immortality, cybernetic totalitarian overlords, nanoabundant paradise, nanogoo apocalypse, superhuman medical enhancement, bioengineered slave armies, and so on).
It reappears, to be sure, in much of the "serious" discourse of technocrats who discern (sometimes "reluctantly") the need for the "smart people" to solve dictatorially the complicated problems that beset everybody in an unprecedentedly complicated and quick-paced world (this "everybody" consisting presumably of mostly folks who are "less smart" than necessary to grasp these complications, these problems, or their solutions, too bad for them).
This politics of class, incumbency, aristocracy (usually in the self-appointed "meritocratic" variation favored by incumbents in nominally democratic societies) stealthily -- and possibly, for some, unconsciously -- invigorates an enormous amount of the various handwaving exercises of Superlative Technophiliacs enthusing about entitative artificial superintelligence or posthuman enhanced superintelligence.
Time and time again these discourses rely -- as they must, since they refer to non-existing or, er, "not-yet-existing," phenomena -- on figurative conjurations of "futural" ideal exemplars which are usually just absurd reductios of the various distorted and impoverished visions of what "intelligence" consists of that are affirmed by their technocentric advocates -- usually reductios of an intelligence conceived as a dull numbers-cruncher, or neoliberal market-fundamentalist "maximizer," or dot-eyed instrumentalist with no love or poetry in him, or a ruggedly individualistic atom in an asocial void, etc. etc. etc.
Against these retrofuturist rhetorics I would call everybody's attention once again to the extraordinary distributed creative expressivity and networked collaborative problem-solving intelligence of emerging peer-to-peer formations, the promising responsivenesses, responsibilities, diversities, resiliences, dynamisms of the intelligences facilitated by these formations.
Against the Superlative corporate-militarist retro-futurisms with all their Monster Movie iconography (the lip-smacking desire and dread of the hyper-individualist cyborg superman savior gangster, the hysterical fear of the mob that re-emerges in the specter of clone armies, upload armies, viral software armies, nanotech goo overwhelming the earth, and so on), we find everywhere around us emerging, democratizing, even mainstreaming, Technoprogressive alternative iconography, online education, agitation, fundraising, and organizing, critical decentralized blogospheric pushback against consolidated-broadcast media formations, people-powered politics, a burgeoning creative commons of freely accessible intellectual content, personal expression, solicitation of feedback, planetary communities and cultures.
4 comments:
Dale wrote:
> [T]he various handwaving exercises of Superlative Technophiliacs
> enthusing about entitative artificial superintelligence or
> posthuman enhanced superintelligence. . . rely. . .
> on figurative conjurations of "futural" ideal exemplars which
> are usually just absurd reductios of the various distorted and
> impoverished visions of what "intelligence" consists of that
> are affirmed by their technocentric advocates -- usually reductios
> of an intelligence conceived as a dull numbers-cruncher, or
> neoliberal market-fundamentalist "maximizer," or dot-eyed
> instrumentalist with no love or poetry in him, or a ruggedly
> individualistic atom in an asocial void, etc. etc. etc.
In an age in which Science is revered as a source of power,
money, and truth (though the customary lip-service puts the
priorities in the reverse order), it isn't surprising that
asserting a monopoly on "rationality" itself, or "intelligence",
or "science", is appealing as a rhetorical and political strategy
to amplify claims on attention and authority. As, in an earlier
age (or in contemporary residues of those times and cultures
which, alas, aren't so residual) an appeal to divine sanction
might have been similarly employed.
The irony, of course, is that the more uncompromisingly this
monopoly is asserted (even in the teeth of criticism from
what passes as the professional scientific community itself),
the less reason there is to believe such an assertion (or the
less reason there should be, for anybody who is, in fact, a thinker
of independent intelligence, rather than a true believer).
I've been trying for about 3 weeks to come up with a post/podcast on the subject of intelligence. It has been quite difficult, since the more I read about intelligence, the more I tend to see it as something like one of those paintings that looks like something coherent from a distance, but that starts looking more and more like little colored blobs the closer you get.
I've read plenty on cognitive science, developmental psychology, intelligence testing, the supposed implications of IQ, and the supposed meaning of "g" (general intelligence) -- and yet, I still think there's something oddly wrong with the entire way in which intelligence is often discussed.
(I really need to work on articulating why I think this, lest I receive nothing but responses of, "g is an established phenomenon with reams of data to back it up".)
It's difficult to describe the nature of the aforementioned "odd wrongness", but you've touched on it here in asserting that:
Intelligence seems to me to be a matter of forming, grasping, and applying abstractions in ways that facilitate our various ends (instrumental, moral, esthetic, ethical, political, and so on). These ends are irreducibly plural, arise in irreducibly plural contexts, and are immensely dynamic and importantly unpredictable.
It's the "plural" bit, I think, that many people seem to miss. Or, if they aren't missing it, they don't seem to talk about it much. Even if someone does manage to come up with a means of describing "general intelligence" in terms of it being the thing that usually enables humans to figure stuff out and innovate and build things, etc., that description will still only apply to a tremendously narrow range of possible cognitive states and modes.
Humans aren't the only life form capable of thinking and learning, and within humanity there are numerous different neurological configurations -- some of which lend themselves to low "assessed" scores on common tests, but which allow the people so configured to understand, think, relate, and create at very high levels of complexity.
Also, I am quite convinced at this point that a lot of what people use to "test" intelligence has more to do with speed than with actual processing ability; what of people who can figure out nearly anything, given enough time, but who perform poorly on tests and realtime assessments?
It doesn't sound (correct me if I'm wrong) as if you actually think anyone is going to produce a dictatorial AI that will ruthlessly "optimize" all the humans on the planet as meat-matter (or Matrixesque "power cells", perhaps), so I am guessing you are simply distressed at the superlative proclamations of AI promoters. If these folks truly cannot acknowledge the plurality of intelligence(s), or if they are truly mistaking status quo bias for insight into What Works Best, then efforts to transmit a clue to such persons ought to continue.
That said, I don't think the idea of developing an artificial intelligence is that weird (no weirder than trying to develop a space shuttle or an artificial heart, at least). My take on the matter is, if someone wants to try to develop an AI of any kind, they should just go ahead and do it. No need for superlative proclamations; surely, research grants have been applied to far more frivolous endeavors.
Anne Corwin wrote:
> Humans aren't the only life form capable of thinking and learning. . .
Indeed, and part of the confusion surrounding the definition
of "intelligence" arises from the fact that some people
assume that they're talking about something that is unique
to human beings.
Here's a more encompassing definition, from
_Darwin Machines and the Nature of Knowledge_
by Henry C. Plotkin, 1993, Harvard University Press
http://www.amazon.com/exec/obidos/ASIN/0674192818/qid=974465462/sr=1-2/107-6292930-3771726
(The author is described on the book jacket as a professor of
psychobiology at University College London):
-------------------------------------------
The uncertain futures problem concerns an organism
going through life, equipped only with
instructions given at conception (and hence
perhaps only correct at that time) on how
to survive, and having to interact with a
world that may be different from that in which
its life began. . .
How can such changes be tracked? The only
way to do it is to evolve tracking devices
whose own states can be altered and held
in that altered condition for the same period
of time as the features of the world which
they are tracking...
...Think of the world as comprising sets of
features, some unchanging and others changing
at different rates and with different degrees
of regularity... [I]f the frequency of
change is less than that set by generational
deadtime for extracting genetic instructions
from the gene pool and then returning them
to it, then the conservative component of
the primary heuristic [Darwinian natural selection
via differentially successful survival and
reproduction] will be able to 'see'
these changes and will furnish adaptations
to match them. But if the frequency of
change is faster than the frequency set by
generational deadtime, then though the primary
heuristic will be able to see the long-term
stabilities upon which these changes are
superimposed and will be able to detect the
margin within which these more rapid changes
occur, in order to track the precise values
of these changes the primary heuristic will
have to evolve devices that operate at a much
higher frequency -- at a frequency high
enough to be able to track these values.
If the high-frequency changes are unstable,...
the tracking device... need [only] command
an immediate compensatory response...
However, if these changes... [are] short-term
stabilities, then these tracking devices
must comprise a secondary heuristic that is
able to change and maintain new states
that match those features of the world that
are being tracked. Such brain mechanisms...
are what we know... as rationality or
intelligence.
-------------------------------------------
> I still think there's something oddly wrong with the
> entire way in which intelligence is often discussed.
One of the unfortunate aspects of transhumanist circles
is that so many of the folks there are **obsessed**
with "my SAT scores are higher than yours" games.
The kind of people who end up joining Mensa.
These folks are immensely invested in a narrow kind
of documented "intelligence" as a badge of status.
The narrowness can be breathtaking -- some of them
would claim that if you aren't interested in math
(many of the folks who "can't" do math simply don't
find it all that interesting, or were turned off at
a tender age by egregiously bad pedagogy) then you
aren't "intelligent" in the "most significant" sense
of the word.
There are a lot of people in the world who are plenty
damn smart but who don't obsess about IQ as a
defining aspect of their self-image. They simply have
better things to do. See, e.g., Sam Vaknin's discussion
of the "cerebral" vs. the "somatic" narcissist:
http://www.healthyplace.com/communities/personality_disorders/narcissism/journal_21.html
Dale wrote:
> . . .an intelligence conceived as a dull numbers-cruncher, or
> neoliberal market-fundamentalist "maximizer," or dot-eyed
> instrumentalist with no love or poetry in him, or a ruggedly
> individualistic atom in an asocial void, etc. etc. etc.
http://www.theregister.co.uk/2007/07/31/william_davies_web20/
Anne Corwin wrote:
> I don't think the idea of developing an artificial intelligence
> is that weird (no weirder than trying to develop a space shuttle
> or an artificial heart, at least).
I can't imagine that anybody who has ever engaged in any way with
(or has even **heard** of) the transhumanists, extropians, or
singularitarians would consider the idea of artificial intelligence
"weird".
In fact, the notion of AI is so ingrained in popular culture (and has
been for so many years -- who doesn't know about R2D2 and C3PO
these days, even if fewer remember HAL9000 or Robbie, and fewer
still have ever heard about R. Daneel Olivaw or Adam Link?) that it's
a cliche. So much so that it's hard to imagine **anybody** in the 21st century
(even people over 60, or people in third-world countries who have
seen American movies) thinking of AI as "weird" (as in Twilight Zone or X-Files weird).
Everybody understands the cinematic convention of the talking metal
man (or the talking plastic android, or the talking computer).
Klaatu barada nikto. If I only had a heart. You are the Kirk,
the creator. Just what do you think you're doing, Dave? I have such a
bad case of dust contamination. Danger, Will Robinson! There is another
system!
The frustration (mine and, I presume, Dale's) is more complicated
than that. It has to do with the narrowness and implausibility of
an ideology or quasi-religion that Jaron Lanier has labelled
"cybernetic totalism", q.v. It has to do with the putative
paradisiacal or apocalyptic consequences of AI demanding,
D*E*M*A*N*D*I*N*G immediate attention, resources, and backgrounding
of all other concerns. It has to do with AI as a "MacGuffin"
for cultish group formation and guru-ish proclamations of
certainty about where the world is headed. It has to do with the
narrow and wrong-headed perpetuation of certain quite outmoded
stereotypes, prejudices, and assumptions about what an AI would be like and
how it would work, and about the nature of "intelligence" itself --
assumptions which have deep philosophical and political roots
and implications.
> My take on the matter is, if someone wants to try to develop an
> AI of any kind, they should just go ahead and do it. No need for
> superlative proclamations. . .
Indeed.