Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Friday, September 28, 2007
Still More on Superlativity
Nick Tarleton writes: Of course the technological ability to do something does not mean it will be done. But, if I’m understanding Dale’s argument correctly, I think it fails to take into account the degree to which MNT and AGI can empower small groups to achieve Superlative goals on their own, largely independent of any “social, cultural, and political forces”.
Depending on just how “largely” you mean by “largely independent” here I probably do decisively disagree with the idea that particular radical idealized technodevelopmental outcomes are unilaterally achievable through the fervent exertions of marginal sub(cult)ures who happen to fetishize these outcomes here and now for whatever reasons.
That said, I do think Nick has more of a handle on the sort of critique I am proposing than some others seem to do. As against those Superlative Technocentrics who would accuse my critique of facile fraudulence, he would probably accuse it instead of facile obviousness. (Please input requisite smiley for those who aren’t properly attuned to ruefully ironic writing styles.)
Strictly speaking, Superlative technodevelopmental outcomes are not achievable at all in my view, since they aspire to the transcendental in my own technical usage of the term.
“Superlative” doesn’t mean for me “big changes” -- there are few who would deny that ongoing technodevelopmental social struggle is causing and coping with big, sweeping, radical change. For me, in this context, “Superlative” means investing technology with a kind of autonomy for one thing (there is a wide literature elaborating this problem, as it happens), but also a kind of sublime significance. This tends, in my view,
[1] to rely on an appeal to intuitions and iconography derived from or familiar to customary religiosity, and it is
[2] typically enlisted in the service of satisfying what are more customarily religious needs to overcome alienation, a quest for “deeper” meaning, a connection to ends more synoptic than those of parochial experience, and in ways that are
[3] prone in my view to activate irrational passions I would often associate with such religiosity as well, undercritical True Belief and groupthink, craving for authoritarian power or obedience to such, not to mention often being
[4] correlated, as religiosity so often is, to disdain of one’s body as well as disdain of the diverse aspirations and alien lifeways of one’s fellows, and so on.
(This critique of organized religiosity should not be taken as an endorsement of some of the recent critiques made by the so-called militant atheists, who seem to me -- whatever the strengths and pleasures of their discourse for a cheerful decades-long atheist like me -- (a) to mistake the perfectly reasonable esthetic or modest social role of religion in the lives of many of the variously "faithful" for a form of inevitably deranging irrationality that leads them, then, (b) to misconstrue as epistemological what is actually the political pathology of authoritarian fundamentalist formations that would opportunistically organize social discontent and moral identification in the service of tyrannical ends as well as (c) to mischaracterize as generally and dangerously irrational what are in fact promisingly secular societies, like the United States in my view, simply through a skewed interpretation of reports of religious belief by people who might mean radically different things by such reports and, in consequence, (d) to lose faith in the good sense and reliability of their fellow citizens and in the democratic processes that depend on these.)
Be that as it may, Nick suggests that I fail to take into account how Artificial General Intelligence (AGI) and Molecular NanoTechnology (MNT), so-called, "could" empower small groups to achieve Superlative goals. As I mentioned in an earlier response to Michael Anissimov, I actually do agree that discussions of the impact of relatively sudden shifts in the asymmetrical distribution of forces and capacities sometimes enabled by technodevelopmental change are certainly very important indeed. Ever more sophisticated malware and technical interventions at the nanoscale will likely yield effects of this kind many times in years to come…
However, I just don’t agree that one’s capacity to talk sensibly about such effects is much helped by
[1] highly general, rarely particularly caveated, too often more logical than pragmatic discussions of the “possible” engineering feasibility of particular idealized non-proximate (and hence profoundly uncertain) outcomes which are
[2] invested, nonetheless, with radical projected properties that activate irrational hopes and fears in people without much at all in the way of connection to the demands of the actually-existing proximately-upcoming technodevelopmental terrain we are coping with here and now and which proceed in ways that
[3] consistently and even systematically de-emphasize, denigrate, or altogether disavow the realities of the articulation of actual technodevelopmental social struggle by psychological, cultural, social factors and so on, and, hence,
[4] render the conclusions of the discourse highly suspect but too often also tend to
[5] disallow or at any rate skew democratic deliberation on technodevelopmental questions (and sometimes, I fear, not so much accidentally as because of the anti-democratic sentiments of partisans of the discourse itself) -- especially when these formulations attract popular attention or unduly influence policy-makers.
This is, I fear, what takes place too typically under the heading of discussions of “AGI” and “MNT.”
16 comments:
Dale, thanks for the response.
For me, in this context, “Superlative” means investing technology with a kind of autonomy for one thing (there is a wide literature elaborating this problem, as it happens), but also a kind of sublime significance. This tends, in my view [....]
Thanks for concisely clearing that up. All I can say is that I see only a small amount of Superlativity (by your definition) in Singularitarian discourse, and even if there were more, there would still be arguments demanding rational evaluation that you seem to be dodging. Or perhaps all Singularitarianism sounds pathologically Superlative to you (as it once did to me) simply because the possibility of near-term radical transformation through superintelligence, the extreme level of focus on that possibility, and even the word "Singularitarian" itself (which was, IIRC, coined derisively), set off your religiosity detectors. While the "religious-sounding = bull" heuristic generally works well, reality is under no obligation to always conform to it. There are no technologies that merit literally religious treatment, but this does not mean any particular material technology is unattainable or unlikely just because it is treated religiously.
I probably do decisively disagree with the idea that particular radical idealized technodevelopmental outcomes are unilaterally achievable through the fervent exertions of marginal sub(cult)ures who happen to fetishize these outcomes here and now for whatever reasons.
This is where, I think, your refusal to answer technical questions becomes important. I don't mean so much narrow numeric questions like Brian's as larger ones like: How likely it is that a small group with modest resources could develop MNT or AGI powerful enough to give them a significant advantage over the rest of the world is a largely technical question. How urgent a dilemma superintelligence is is similarly a largely technical question, yet you give the appearance of answering "not very urgent" without actually considering the technical question. Yes, your critique of Superlative discourse is "lodged at a different level", and rightly so - but the technical questions are still there regardless of what you think of the psychology of those proposing them. And the question of the best near-term course of action, whether on a societal or individual level, while not technical is heavily informed by technical questions. If superintelligence has a nontrivial probability of happening in the next few decades, and if it would be radically transformative if it did happen (both largely technical questions), it follows that we should be investing much more effort in thinking about it than the nearly zero we are now. On an individual level, that means personal attention to AI (or MNT, or existential risk, or...) issues to the exclusion of better-staffed activities (like politics) can be justified. At the very least, what you see as pathological ignorance of the social factors surrounding technological development may be (at least partially) justified.
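To make the shape of that last inference explicit, here is a minimal expected-value sketch in Python; every probability, impact, and leverage figure in it is a made-up placeholder for illustration, not an estimate anyone in this exchange has defended.

    # Illustrative only: the bare expected-value form of the argument above.
    # All numbers are arbitrary placeholders, not estimates.

    def expected_value(probability, impact, leverage):
        """Crude expected value of attending to a problem: chance it is real,
        times how much it matters, times how much one person's effort moves it."""
        return probability * impact * leverage

    # Hypothetical inputs: a low-probability, high-impact prospect versus a
    # near-certain, moderate-impact cause that is already better staffed.
    speculative = expected_value(probability=0.01, impact=1e6, leverage=0.001)
    conventional = expected_value(probability=0.95, impact=1e3, leverage=0.01)

    print(speculative, conventional)  # roughly 10 vs. 9.5 with these placeholders

Whether any such placeholder numbers are remotely right is, of course, exactly the technical question I keep pointing at.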
We both agree that whether a technology is developed and used depends on motivation as well as physical possibility, but at the point where a very useful technology can be developed by a small number of skilled people with an easily attainable level of resources (what that point is is a technical question, and like most Singularitarians I would say we're probably not too far off when it comes to MNT and AGI, and we get continually closer as awareness of those technologies rises and the body of published research grows) its development is highly likely, as is its use for a wide variety of ends as a wide variety of entities gain access to it. Heavy attention to motivations is more justifiable in the case of big technologies that require state- or huge-corporation levels of resources to develop than in the case of those that are more accessible. You also can't ignore the situation where a technology is initially developed with massive investment but then hacked by smaller agents, as is happening with computers and could easily happen with MNT. Or to put it simply: technological development may be driven less by collective "psychological, cultural, social factors" than by whoever builds or hacks into MNT/AGI first. This is worth worrying about.
I suppose this could be called a technological-determinist argument, but only on the trivial (and, to my mind, trivially true) level that if a useful technology is available to a large number of people, many of them will use it.
I actually do agree that discussions of the impact of relatively sudden shifts in the asymmetrical distribution of forces and capacities sometimes enabled by technodevelopmental change are certainly very important indeed.
It seems to me this is exactly at the core of what we Singularitarians are saying, and that the important disagreement (or one of the important disagreements) here is technical: just what sort of "relatively sudden shifts" are likely? Attacks on Singularitarian rhetoric, true or not, do not address the Singularitarian answer to that question.
However, I just don’t agree that one’s capacity to talk sensibly about such effects is much helped by
[1] highly general, rarely particularly caveated, too often more logical than pragmatic discussions of the “possible” engineering feasibility of particular idealized non-proximate (and hence profoundly uncertain) outcomes which are
Feasibility doesn't determine everything, but it is very important – how likely a technology is to be feasible is important to how worried we should be about it. Facile discussion of feasibility isn't good, but that hardly constitutes the whole terrain.
[2] invested, nonetheless, with radical projected properties that activate irrational hopes and fears in people without much at all in the way of connection to the demands of the actually-existing proximately-upcoming technodevelopmental terrain we are coping with here and now and which proceed in ways that
If a topic activates irrational hopes and fears, that's because of the irrationality of the people having that reaction, not because it's a stupid topic. Of course discourse should attempt not to induce irrational reactions, but this may be very hard with some important subjects that nevertheless tend to have that effect. And the technologies that are actually proximate may be what we Singularitarians think they are.
[3] consistently and even systematically de-emphasize, denigrate, or altogether disavow the realities of the articulation of actual technodevelopmental social struggle by psychological, cultural, social factors and so on, and, hence,
See above. The large-scale social factors that I take it you're discussing here do in fact diminish in significance when discussing technologies that can be developed with little social participation. They can't be dismissed entirely, but hardly all Singularitarians do that (although I agree the ones that do are shortsighted (which doesn't mean they're completely wrong)).
[4] render the conclusions of the discourse highly suspect but too often also tend to
This looks like one of those situations where a good heuristic breaks down.
[5] disallow or at any rate skew democratic deliberation on technodevelopmental questions (and sometimes, I fear, not so much accidentally as because of the anti-democratic sentiments of partisans of the discourse itself) -- especially when these formulations attract popular attention or unduly influence policy-makers.
Skew in what way? Toward discussion of technologies you regard as far-off and irrelevant? Again, here's where technical questions are important; and it seems safer to deliberate over a technology a little before deciding it's not worthy of immediate consideration. I can see how the idea of highly-probable near-term massive technological disruption could discourage democratic deliberation and I can see the need to avoid this, but (broken record) that doesn't change the need to consider the technical questions about that probability (we can't avoid discussing those questions entirely).
Am I moving towards understanding your critique on its own level, or am I missing something crucial?
I don't mean so much narrow numeric questions like Brian's as larger ones like: How likely it is that a small group with modest resources could develop MNT or AGI powerful enough to give them a significant advantage over the rest of the world is a largely technical question.
Hey, that's not a sentence. I meant something more like:
I don't mean so much narrow numeric questions like Brian's as larger ones like: How likely it is that a small group with modest resources could develop MNT or AGI powerful enough to give them a significant advantage over the rest of the world? That's a significant, largely technical question.
If I may butt in here.
Nick Tarleton wrote:
> Am I moving towards understanding your critique on its own level,
> or am I missing something crucial?
You're still missing something crucial. As indicated by
remarks such as:
"I see only a small amount of Superlativity. . .
in Singularitarian discourse. . ."
". . .your refusal to answer technical questions. . ."
"If superintelligence has a nontrivial probability of happening
in the next few decades. . ."
". . .like most Singularitarians I would say we're probably not
too far off when it comes to MNT and AGI. . ."
". . .MNT and AGI. . . development is highly likely. . ."
". . .the important disagreement. . . here is technical. . .
Attacks on Singularitarian rhetoric. . . do not address the Singularitarian
answer. . ."
Your insistence on taking for granted that the Singularitarians'
acceptance of the plausibility of "MNT" and "AGI" is based on those
things having been adequately demonstrated to the intellectual world
at large, **itself** betokens not just disagreement
with, but also a radical misunderstanding of, Dale's (and other
Singularity critics') fundamental argument.
These things have **not** (despite the protestations of self-proclaimed
"geniuses" within the movement) been demonstrated to the satisfaction
of the intellectual world at large. They have not! They really haven't.
If you can't see that, it's because you've gotten guru-whammied,
or succumbed to your own hopes-against-hope for immortality, or
transcendence, or whatever. **That** is Dale's crucial argument.
Attempts by outsiders (or even sympathetic bystanders) to engage
in serious technical argument with self-identified "Singularitarians"
(or to exhibit serious technical arguments made by experts in the fields)
are simply waved away or shouted down within the confines of the Superlative
enthusiasts' salons themselves (the Extropians' mailing list, SL4, etc.).
This is a symptom of pathological True Belief.
There have been other analogous, putatively "rational" and
"non-religious" movements, similarly based on True Belief.
See, e.g., Jeff Walker's _The Ayn Rand Cult_.
http://www.amazon.com/Ayn-Rand-Cult-Jeff-Walker/dp/0812693906
> even the word "Singularitarian" itself (which was, IIRC,
> coined derisively). . .
You recall incorrectly here (IIRC and IMHO, of course ;-> ).
It was coined in all seriousness.
> . . .set off your religiosity detectors. . .
Dale's religiosity detectors are in good working order.
Do we need a bibliography on cults and gurus here?
Thanks for the continued respectful engagement, Nick, and thanks James for the very helpful intervention.
Part of the problem for me is that it is very difficult to keep the discussion focused at the place that interests me most, the place that seems to me most neglected in all this tech-talk.
I agree with Jim, of course, in his assessment of the actual status of the key Superlative discourses as "technical" discourses -- the very ground on which their partisans seem most eager to redirect my arguments, ironically enough.
It should go without saying that to the monks in the monastery the scholarly practices in which the number of angels that can perch atop pinheads is debated can assume the texture and force of a technical discourse, with more and less smart participants, more and less interesting procedures, occasions for real creativity and insight, political factions and all the rest. So too with Singularitarians calculating the Robot God odds. One doesn't really have to join the robot cult to offer up the critique that tells you all you need to know about the proper status and standing of the discourse. Sure, one would probably have to drink the Kool-Aid to fully appreciate the real ingenuity and even brilliance some of the partisans of that discourse surely do exhibit. But that in itself should be a warning sign, given the extent to which Superlative discourses are pitched for the most part at a popular level while never achieving actual popularity, rather attracting the devotion of marginal sub(cult)ures of True Believers.
But quite apart from all this, the fact is that I think the actual practical force, the real-world impact of the Superlative discourses is happening at exactly the level their advocates don't want to talk about, and want for the most part to ridicule: in the cultural, political, social, psychological, rhetorical dimensions I keep hammering on about.
Sub(cult)ural futurists should have at best a negligible and accidental hand in directing the technodevelopmental struggles that might eventuate in anything like the arrival of the "technological" outcomes that preoccupy their imaginations. I say should, rather than will, because we are living now in the culmination of a counterexample to that should -- a world reaping the toxic, wasteful, dysfunctional, blood-soaked whirlwind of the never-popular market fundamentalist notions of a marginal sub(cult)ural movement of neoliberal and neoconservative incumbents.
Be that as it may, technodevelopmental social struggle is too complex, dynamic, contingent, and unpredictable to afford the Superlative Technocentrics and/or Sub(cult)ural Futurists the linear and unilateral implementation of the particular idealized outcomes with which they happen to identify here and now for whatever reasons. But in my view they can have a profound effect on that technodevelopmental struggle where it counts: not in the distant futures that matter to them, but in fact in the technodevelopmental present of ongoing and proximately upcoming technoscientific change.
The ritually reiterated images and metaphors, the customary formulations, the inculcated frames, the naturalized assumptions of Superlative Technology discourse can have a profound effect on the technodevelopmental terrain as it exists here and now in a way that is incomparably more influential than any likely impact on the futures which Superlativity imagines itself to be concerned with.
And that influence, I say again, is almost always terrible: substituting oversimplifications and linearities for actual complexities, activating irrational passions that derange critical deliberation, indulging in hype to mobilize the idiotic energies of unsustainable and joyless consumption as well as terrorizing risk discourse to mobilize the authoritarian and acquiescent energies of militarism, endorsing elitist attitudes about people's ability to have a say in the public decisions that affect them, all too often offering up explicit hymns to un(der)interrogated and naturalized notions of progress, innovation, market order as an insult added to the already abundant injury of all these "implicit" props to corporate-militarist neoliberal incumbency.
This, of course, is where I work to lodge my primary critique of Superlativity. And this is the very site that becomes most difficult to address when Superlative Technocentrics demand we engage with them always only in "technical" debates (in a sense of "technical" that never really connects to much in the way of reality, whatever the protests to the contrary about Superlativity's superior scientificity).
The force of these re-directions into "technicality" is always to keep our focus squarely fixated on the abstract far-futures they populate with their engineering mirages, and never on the present. But make no mistake: It is in the technodevelopmental terrain of the present that Superlative discourse works its real effects. And this is none too surprising, because it is in the technodevelopmental social struggle of the diverse stakeholders to ongoing and proximately emerging technoscientific change that we all do the actual work of education, agitation, organization, and analysis to provide the ongoing and growing material archive of a living, collaborative, responsible foresight, peer-to-peer. The idealizations of Superlativity may solicit identification in their marginal adherents, but they do not constitute the "foresight" they are so pleased to congratulate themselves for.
Foresight in the service of democratic, consensual, diverse, fair, sustainable, emancipatory futures must be an open, ongoing, pragmatic peer-to-peer process.
Superlative Technocentricities and Sub(cult)ural Futurisms substitute faith for foresight, priests for peers, and the pieties of neoliberal incumbency for an open democratic futurity.
Whatever the technical idiosyncrasies, whatever the fundamentalist ethnographic peculiarities, it is this last political point that is my own worry and focus here.
Nick,
'Singularitarianism' was definitely not a derisive term. The underlying ideas are not well-regarded, so it's on the 'euphemism treadmill,' exacerbated by its ending, "ism."
http://en.wikipedia.org/wiki/Singularitarianism
Dale and James,
What was the first year in which it would have been beneficial to begin public discussion and analysis of the potential of nuclear weapons? To organize a movement to try to influence the development and use of nuclear technology in beneficial ways? Which developments in physics and politics were relevant and why?
Likewise for bioweapons. Should public discussion of biowarfare have begun earlier or later than it in fact did, or was it perfectly timed? Which developments, biological and political, factor into that analysis? http://en.wikipedia.org/wiki/Biological_Weapons_Convention
Did the development of understanding and action on climate change proceed optimally? Or would it have been better if private individuals had poured resources into better measurements, more research, and analysis of the implications of climate change early on (before the science was settled, indeed in order to settle the science)? Such an effort could have provided a number of additional years in which to safely address the problem.
What technical developments in AI would justify investing substantial effort into analyzing probable timelines and safety measures? Machine translation that outperforms 90th percentile human translators? Computer vision that can outperform humans in recognizing objects? Mathematical AI that can produce human-comprehensible proofs of major open problems in mathematics? An AI medical system that can plan and initiate all aspects of a patient's treatment, including performing surgery? A simulation of a mouse brain that can control the body of a mouse (using implanted devices) and produce behavior indistinguishable from an ordinary mouse's? A robust theory of intelligence implemented in a learning system that most computer scientists agree will reach human-equivalent ability levels in all fields within 5 years? Surely you would agree that there are some such developments that would justify active effort, ones that have not yet been achieved. What are they?
I would also ask Dale what, in his view, is the highest impact place for activists to put their energies? In expectation, how much can one person increase the probability of a bright future in that field? It's incredibly improbable that jobs at Greenpeace, UNICEF, Soros' Open Society projects, as a research assistant for Naomi Klein, as a staffer at the Human Rights Campaign in D.C., as writer for the IEET, as an analyst at the Nuclear Threat Initiative, and as a campaign staffer for Barack Obama are all roughly equally beneficial. Where should people who are trying to do as much good as they can put their efforts, and how much good do you expect they can do there?
jfehlinger, I would be very interested in pointers to what you consider the most serious technical critiques of "Superlative" versions of MNT and AGI.
Michael Anissimov (I presume) wrote:
> What was the first year in which it would have been beneficial
> to begin public discussion and analysis of the potential of
> nuclear weapons?
Probably around the time Einstein started writing (or signing)
letters to Roosevelt -- 1939[*].
http://www.ppu.org.uk/learn/infodocs/people/pp-einstein3.html
[*]"Because of the danger that Hitler might be the first to have
the bomb, I signed a letter to the President which had been drafted
by Szilard. Had I known that the fear was not justified[**], I would
not have participated in opening this Pandora's box, nor would
Szilard. For my distrust of governments was not limited to Germany."
[**]Wasn't it? Only because Heisenberg dropped the ball,
apparently.
But the difference here is that weapon-scale nuclear fission was a
well-established technical possibility. The feasibility of AI has
not yet been so demonstrated -- not in the sense in which most Singularitarians
use the word.
> What technical developments in AI would justify investing substantial
> effort into analyzing probable timelines and safety measures?
> Machine translation that outperforms 90th percentile human translators?
> Computer vision that can outperform humans in recognizing objects?
> Mathematical AI that can produce human-comprehensible proofs of major
> open problems in mathematics? An AI medical system that can plan and
> initiate all aspects of a patient's treatment, including performing
> surgery?
Yeah, those'd be pretty interesting developments.
> A robust theory of intelligence implemented in a learning system
> that most computer scientists agree will reach human-equivalent ability
> levels in all fields within 5 years?
You're putting this forth as a hypothetical development that would
"justify investing substantial effort. . ."?
If you're suggesting such a "robust theory of intelligence"
actually exists, then I question your reality-testing here
(pace, Dr. Goertzel).
> I would also ask Dale what, in his view, is the highest impact
> place for activists to put their energies?
The world is so constituted, alas, that one of the best places
for activists to put their energies is precisely in cogently
and incisively debunking other (misguided) activists.
I don't know if the on-line transhumanists deserve the attentions
of a Bertrand Russell or an H. L. Mencken, but I think Dale is
doing a decent job. ;->
Steven wrote:
> jfehlinger, I would be very interested in pointers to what you consider
> the most serious technical critiques of "Superlative" versions of MNT and AGI.
I will leave "MNT" to you to research. You no doubt have heard of the
debates between K. Eric Drexler and Richard E. Smalley (the latter is
the Rice University discoverer of fullerenes -- "buckyballs" -- who died
a couple of years ago). Folks inside the >Hist movement tend to
characterize Smalley as the "loser" of this debate, though I suspect
that the mainstream scientific community would think otherwise.
I have no comment one way or the other.
As far as "AGI" is concerned -- well, as you probably know, that
term was coined by Dr. Ben Goertzel (and/or his close associates)
and means whatever he, or they, or he and they and the Singularity
Institute, say(s) it means. So let's leave aside AGI -- a term which,
AFAIK, isn't used outside of the above small circle, and talk
about "strong" artificial intelligence.
The critiques of strong AI that **I** take seriously are the
ones which focus on what's somewhat derisively known as "GOFAI" --
"Good Old-Fashioned AI" -- the idea that intelligence can be
reduced to symbol-manipulation and algorithms.
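For concreteness, here is a minimal, purely illustrative Python sketch of the picture GOFAI presupposes: knowledge as explicit symbolic facts, and intelligence as rule-governed manipulation of those symbols (a toy forward-chaining system of my own devising, standing in for no actual AI project):

    # Toy illustration of the GOFAI picture only: explicit symbolic facts plus
    # formal rules, applied mechanically until nothing new can be derived.
    # It stands in for no real system; it just exhibits the general shape.

    facts = {("socrates", "is", "human")}
    rules = [
        # If ?x is human, then ?x is mortal.
        (("?x", "is", "human"), ("?x", "is", "mortal")),
    ]

    def forward_chain(facts, rules):
        """Apply every rule to every fact until no new symbolic facts appear."""
        changed = True
        while changed:
            changed = False
            for (subj, pred, obj), (c_subj, c_pred, c_obj) in rules:
                for (f_subj, f_pred, f_obj) in list(facts):
                    matches = (pred == f_pred and obj == f_obj and
                               (subj.startswith("?") or subj == f_subj))
                    if matches:
                        binding = {subj: f_subj} if subj.startswith("?") else {}
                        derived = (binding.get(c_subj, c_subj), c_pred, c_obj)
                        if derived not in facts:
                            facts.add(derived)
                            changed = True
        return facts

    print(forward_chain(facts, rules))
    # the derived fact ('socrates', 'is', 'mortal') now appears alongside the original

Dreyfus's argument, excerpted at length below, is precisely that everyday human know-how resists being captured in this explicit fact-and-rule form, however far the rule set is elaborated.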
Hubert L. Dreyfus (_What Computers Can't Do_ [1972] and
_What Computers Still Can't Do_ [1992]) is a good place to start.
Gerald M. Edelman (_Bright Air, Brilliant Fire_, 1992) debunks
the first-generation "cognitivist" analogy of the brain
as a digital computer. As does George Lakoff (e.g., in
_Philosophy in the Flesh: The Embodied Mind and its Challenge
to Western Thought_, 1999). Jaron Lanier has something to
say about this subject as well.
Note that it is the GOFAI-ish view of intelligence implicit in
much of the Singularitarian discourse on AI that I find particularly
implausible in light of today's knowledge. Not all transhumanists
espouse this. Eugen Leitl, e.g., has always been critical of
this view (though he gets less attention because he hasn't
founded a Church). Ray Kurzweil, interestingly, also downplays
GOFAI-ish approaches to AI, and places his bets on simulations
or reverse-engineering of biological nervous systems. I find
the simulation approach less inherently implausible, though
its computational requirements are vast, and it remains an open
question whether Moore's Law (or some paradigm shift into, e.g.,
3-D optical computing) will carry us far enough to attempt
such a thing on a human, let alone a superhuman, scale.
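To make concrete what "carry us far enough" involves, here is a back-of-envelope Python sketch. The present-day hardware figure and the whole-brain-simulation estimates below are rough, contested placeholders of my own choosing, not established numbers:

    # Back-of-envelope only: how many Moore's-Law doublings would separate an
    # assumed present-day machine from assumed whole-brain-simulation needs.
    # Every figure is a disputed placeholder, not an established number.
    import math

    current_flops = 1e15           # assumed: a large 2007-era cluster, ~1 petaflop/s
    doubling_time_years = 2.0      # the classic Moore's-Law doubling period

    simulation_estimates = {
        "coarse neuron-level model": 1e16,    # assumed low-end requirement
        "detailed biophysical model": 1e22,   # assumed high-end requirement
    }

    for label, needed_flops in simulation_estimates.items():
        doublings = math.log2(needed_flops / current_flops)
        years = doublings * doubling_time_years
        print("%s: about %.0f doublings, about %.0f years" % (label, doublings, years))
    # coarse neuron-level model: about 3 doublings, about 7 years
    # detailed biophysical model: about 23 doublings, about 47 years

The point is only that the answer swings by decades depending on which estimate one assumes, which is why I call it an open question.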
Anyway, here's a long extract from my e-mail archive apropos
Dreyfus:
---------------------------------------------------------------------
Subject: Let us calculate, Sir Marvin
_What Computers Still Can't Do: A Critique of
Artificial Reason_, Hubert L. Dreyfus, MIT Press,
1992
Introduction, pp. 67-70:
Since the Greeks invented logic and geometry, the idea that
all reasoning might be reduced to some kind of calculation --
so that all arguments could be settled once and for all --
has fascinated most of the Western tradition's rigorous
thinkers. Socrates was the first to give voice to this
vision. The story of artificial intelligence might well
begin around 450 B.C. when (according to Plato) Socrates
demands of Euthyphro, a fellow Athenian who, in the name
of piety, is about to turn in his own father for murder:
"I want to know what is characteristic of piety which
makes all actions pious. . . that I may have it to turn
to, and to use as a standard whereby to judge your actions
and those of other men." Socrates is asking Euthyphro
for what modern computer theorists would call an "effective
procedure," "a set of rules which tells us, from moment
to moment, precisely how to behave."
Plato generalized this demand for moral certainty into
an epistemological demand. According to Plato, all
knowledge must be stateable in explicit definitions
which anyone could apply. If one could not state his
know-how in terms of such explicit instructions -- if his
knowing **how** could not be converted into knowing
**that** -- it was not knowledge but mere belief.
According to Plato, cooks, for example, who proceed by
taste and intuition, and poets who work from inspiration,
have no knowledge; what they do does not involve
understanding and cannot be understood. More generally,
what cannot be stated explicitly in precise instructions --
all areas of human thought which require skill, intuition
or a sense of tradition -- are relegated to some kind of
arbitrary fumbling.
But Plato was not fully a cyberneticist (although according
to Norbert Wiener he was the first to use the term), for
Plato was looking for **semantic** rather than **syntactic**
criteria. His rules presupposed that the person understood
the meanings of the constitutive terms. . . Thus Plato
admits his instructions cannot be completely formalized.
Similarly, a modern computer expert, Marvin Minsky, notes,
after tentatively presenting a Platonic notion of effective
procedure: "This attempt at definition is subject to
the criticism that the **interpretation** of the rules
is left to depend on some person or agent."
Aristotle, who differed with Plato in this as in most questions
concerning the application of theory to practice, noted
with satisfaction that intuition was necessary to apply
the Platonic rules: "Yet it is not easy to find a formula
by which we may determine how far and up to what point a man
may go wrong before he incurs blame. But this difficulty
of definition is inherent in every object of perception;
such questions of degree are bound up with circumstances
of the individual case, where our only criterion **is**
the perception."
For the Platonic project to reach fulfillment one breakthrough
is required: all appeal to intuition and judgment must be
eliminated. As Galileo discovered that one could find
a pure formalism for describing physical motion by ignoring
secondary qualities and teleological considerations, so,
one might suppose, a Galileo of human behavior might succeed
in reducing all semantic considerations (appeal to meanings)
to the techniques of syntactic (formal) manipulation.
The belief that such a total formalization of knowledge must
be possible soon came to dominate Western thought. It
already expressed a basic moral and intellectual demand, and
the success of physical science seemed to imply to sixteenth-
century philosophers, as it still seems to suggest to
thinkers such as Minsky, that the demand could be satisfied.
Hobbes was the first to make explicit the syntactic conception
of thought as calculation: "When a man **reasons**, he
does nothing else but conceive a sum total from addition of
parcels," he wrote, "for REASON . . . is nothing but
reckoning. . ."
It only remained to work out the univocal parcels of "bits"
with which this purely syntactic calculator could operate;
Leibniz, the inventor of the binary system, dedicated
himself to working out the necessary unambiguous formal
language.
Leibniz thought he had found a universal and exact system of
notation, an algebra, a symbolic language, a "universal
characteristic" by means of which "we can assign to every
object its determined characteristic number." In this way
all concepts could be analyzed into a small number of
original and undefined ideas; all knowledge could be
expressed and brought together in one deductive system.
On the basis of these numbers and the rules for their
combination all problems could be solved and all controversies
ended: "if someone would doubt my results," Leibniz
said, "I would say to him: 'Let us calculate, Sir,' and
thus by taking pen and ink, we should settle the
question.'" . . .
In one of his "grant proposals" -- his explanations of how
he could reduce all thought to the manipulation of
numbers if he had money enough and time -- Leibniz remarks:
"[T]he most important observations and turns of skill
in all sorts of trades and professions are as yet unwritten.
This fact is proved by experience when passing from
theory to practice when we desire to accomplish something.
Of course, we can also write up this practice, since it
is at bottom just another theory more complex and
particular. . ."
Chapter 6, "The Ontological Assumption", pp. 209-213
Granting for the moment that all human knowledge can be
analyzed as a list of objects and of facts about each,
Minsky's analysis raises the problem of how such a large
mass of facts is to be stored and accessed. . .
And, indeed, little progress has been made toward
solving the large data base problem. But, in spite of
his own excellent objections, Minsky characteristically
concludes: "But we had better be cautious about
this caution itself, for it exposes us to a far more
deadly temptation: to seek a fountain of pure intelligence.
I see no reason to believe that intelligence can
exist apart from a highly organized body of knowledge,
models, and processes. The habit of our culture has
always been to suppose that intelligence resides in
some separated crystalline element, call it _consciousness_,
_apprehension_, _insight_, _gestalt_, or what you
will but this is merely to confound naming the problem
with solving it. The problem-solving abilities of
a highly intelligent person lies partly in his superior
heuristics for managing his knowledge-structure and
partly in the structure itself; these are probably
somewhat inseparable. In any case, there is no reason to
suppose that you can be intelligent except through the
use of an adequate, particular, knowledge or model
structure."
. . . It is by no means obvious that in order to be
intelligent human beings have somehow solved or needed to
solve the large data base problem. The problem may itself
be an artifact created by the fact that AI workers must
operate with discrete elements. Human knowledge does
not seem to be analyzable as an explicit description
as Minsky would like to believe. . . To recognize an
object as a chair, for example, means to understand its
relation to other objects and to human beings. This
involves a whole context of human activity of which
the shape of our body, the institution of furniture, the
inevitability of fatigue, constitute only a small part.
And these factors in turn are no more isolable than is
the chair. They all may get **their** meaning in
the context of human activity of which they form a
part. . .
There is no reason, only an ontological commitment,
which makes us suppose that all the facts we can make
explicit about our situation are already unconsciously
explicit in a "model structure," or that we
could ever make our situation completely explicit
even if we tried.
Why does this assumption seem self-evident to Minsky?
Why is he so unaware of the alternative that he takes
the view that intelligence involves a "particular,
knowledge or model structure," great systematic array
of facts, as an axiom rather than as an hypothesis?
Ironically, Minsky supposes that in announcing this
axiom he is combating the tradition. "The habit of
our culture has always been to suppose that intelligence
resides in some separated crystalline element, call
it consciousness, apprehension, insight, gestalt. . ."
In fact, by supposing that the alternatives are either
a well-structured body of facts, or some disembodied
way of dealing with the facts, Minsky is so traditional
that he can't even see the fundamental assumption
that he shares with the whole of the philosophical
tradition. In assuming that what is given are facts
at all, Minsky is simply echoing a view which has been
developing since Plato and has now become so ingrained
as to **seem** self-evident.
As we have seen, the goal of the philosophical
tradition embedded in our culture is to eliminate
uncertainty: moral, intellectual, and practical.
Indeed, the demand that knowledge be expressed in
terms of rules or definitions which can be applied
without the risk of interpretation is already
present in Plato, as is the belief in simple elements
to which the rules apply. With Leibniz, the connection
between the traditional idea of knowledge and the
Minsky-like view that the world **must** be analyzable
into discrete elements becomes explicit. According
to Leibniz, in understanding we analyze concepts into
more simple elements. In order to avoid a regress
of simpler and simpler elements, then, there must
be ultimate simples in terms of which all complex
concepts can be understood. Moreover, if concepts
are to apply to the world, there must be simples
to which these elements correspond. Leibniz
envisaged "a kind of alphabet of human thoughts"
whose "characters must show, when they are used in
demonstrations, some kind of connection, grouping
and order which are also found in the objects."
The empiricist tradition, too, is dominated by
the idea of discrete elements of knowledge. For
Hume, all experience is made up of impressions:
isolable, determinate, atoms of experience.
Intellectualist and empiricist schools converge
in Russell's logical atomism, and the idea reaches
its fullest expression in Wittgenstein's _Tractatus_,
where the world is defined in terms of a set of
atomic facts which can be expressed in logically
independent propositions. This is the purest
formulation of the ontological assumption, and
the necessary precondition of all work in AI as long
as researchers continue to suppose that the world
must be represented as a structured set of descriptions
which are themselves built up from primitives.
Thus both philosophy and technology, in their appeal
to primitives, continue to posit what Plato sought:
a world in which the possibility of clarity, certainty
and control is guaranteed; a world of data structures,
decision theory, and automation.
No sooner had this certainty finally been made fully
explicit, however, than philosophers began to call it into
question. Continental phenomenologists [uh-oh, here
come those French. :-0] recognized it as the outcome
of the philosophical tradition and tried to show its
limitations. [Maurice] Merleau-Ponty calls the
assumption that all that exists can be treated as
determinate objects, the _prejuge du monde_,
"presumption of commonsense." Heidegger calls it
_rechnende Denken_ "calculating thought," and views
it as the goal of philosophy, inevitably culminating
in technology. . . In England, Wittgenstein less
prophetically and more analytically recognized the
impossibility of carrying through the ontological
analysis proposed in his _Tractatus_ and became his
own severest critic. . .
But if the ontological assumption does not square with
our experience, why does it have such power? Even if
what gave impetus to the philosophical tradition was
the demand that things be clear and simple so that
we can understand and control them, if things are not
so simple why persist in this optimism? What lends
plausibility to this dream? As we have already seen. . .
the myth is fostered by the success of modern
physics. . .
Chapter 8, "The Situation: Orderly Behavior Without
Recourse to Rules" pp. 256-257
In discussing problem solving and language translation
we have come up against the threat of a regress of rules
for determining relevance and significance. . . We
must now turn directly to a description of the situation
or context in order to give a fuller account of the
unique way human beings are "in-the-world," and the
special function this world serves in making orderly
but nonrulelike behavior possible.
To focus on this question it helps to bear in mind
the opposing position. In discussing the epistemological
assumption we saw that our philosophical tradition
has come to assume that whatever is orderly can be
formalized in terms of rules. This view has reached
its most striking and dogmatic culmination in the
conviction of AI workers that every form of intelligent
behavior can be formalized. Minsky has even
developed this dogma into a ridiculous but revealing
theory of human free will. He is convinced that all
regularities are rule governed. He therefore theorizes
that our behavior is either completely arbitrary
or it is regular and completely determined by the
rules. As he puts it: "[W]henever a regularity is
observed [in our behavior], its representation is
transferred to the deterministic rule region." Otherwise
our behavior is completely arbitrary and free.
The possibility that our behavior might be regular
but not rule governed never even enters his mind.
Dreyfus points out that when a publication anticipating
the first edition of his book came out in the late
1960s, he was taken aback by the hysterical tone of
the reactions to it:
Introduction, pp. 86-87
[T]he year following the publication of my first
investigation of work in artificial intelligence,
the RAND Corporation held a meeting of experts in
computer science to discuss, among other topics,
my report. Only an "expurgated" transcript of this
meeting has been released to the public, but
even there the tone of paranoia which pervaded the
discussion is present on almost every page. My
report is called "sinister," "dishonest,"
"hilariously funny," and an "incredible misrepresentation
of history." When, at one point, Dr. J. C. R. Licklider,
then of IBM, tried to come to the defense of my
conclusion that work should be done on man-machine
cooperation, Seymour Papert of M.I.T. responded:
"I protest vehemently against crediting Dreyfus with
any good. To state that you can associate yourself
with one of his conclusions is unprincipled. Dreyfus'
concept of coupling men with machines is based on
thorough misunderstanding of the problems and has nothing
in common with any good statement that might go by
the same words."
The causes of this panic-reaction should themselves be
investigated, but that is a job for psychology [;->],
or the sociology of knowledge. However, in anticipation
of the impending outrage I want to make absolutely clear
from the outset that what I am criticizing is the
implicit and explicit philosophical assumptions of
Simon and Minsky and their co-workers, not their
technical work. True, their philosophical prejudices
and naivete distort their own evaluation of their
results, but this in no way detracts from the
importance and value of their research on specific
techniques such as list structures, and on more
general problems. . .
An artifact could replace men in some tasks -- for
example, those involved in exploring planets --
without performing the way human beings would and
without exhibiting human flexibility. Research in
this area is not wasted or foolish, although a balanced
view of what can and cannot be expected of such an
artifact would certainly be aided by a little
philosophical perspective.
In the "Introduction to the MIT Press Edition" (pp. ix-xiii)
Dreyfus gives a summary of his work and reveals
the source of the acronym "GOFAI":
Almost half a century ago [as of 1992] computer pioneer
Alan Turing suggested that a high-speed digital
computer, programmed with rules and facts, might exhibit
intelligent behavior. Thus was born the field later
called artificial intelligence (AI). After fifty
years of effort, however, it is now clear to all but
a few diehards that this attempt to produce artificial
intelligence has failed. This failure does not mean
this sort of AI is impossible; no one has been able
to come up with a negative proof. Rather, it has
turned out that, for the time being at least, the
research program based on the assumption that human
beings produce intelligence using facts and rules
has reached a dead end, and there is no reason to
think it could ever succeed. Indeed, what John
Haugeland has called Good Old-Fashioned AI (GOFAI)
is a paradigm case of what philosophers of science
call a degenerating research program.
A degenerating research program, as defined by Imre
Lakatos, is a scientific enterprise that starts out
with great promise, offering a new approach that
leads to impressive results in a limited domain.
Almost inevitably researchers will want to try to apply
the approach more broadly, starting with problems
that are in some way similar to the original one.
As long as it succeeds, the research program expands
and attracts followers. If, however, researchers
start encountering unexpected but important phenomena
that consistently resist the new techniques, the
program will stagnate, and researchers will abandon
it as soon as a progressive alternative approach
becomes available.
We can see this very pattern in the history of GOFAI.
The work began auspiciously with Allen Newell and
Herbert Simon's work at RAND. In the late 1950's,
Newell and Simon proved that computers could do more
than calculate. They demonstrated that a computer's
strings of bits could be made to stand for anything,
including features of the real world, and that its
programs could be used as rules for relating these
features. The structure of an expression in the
computer, then, could represent a state of affairs
in the world whose features had the same structure,
and the computer could serve as a physical symbol
system storing and manipulating representations.
In this way, Newell and Simon claimed, computers
could be used to simulate important aspects of intelligence.
Thus the information-processing model of the mind
was born. . .
My work from 1965 on can be seen in retrospect as a
repeatedly revised attempt to justify my intuition,
based on my study of Martin Heidegger, Maurice
Merleau-Ponty, and the later Wittgenstein, that the
GOFAI research program would eventually fail.
My first take on the inherent difficulties of
the symbolic information-processing model of the
mind was that our sense of relevance was holistic and
required involvement in ongoing activity,
whereas symbol representations were atomistic and
totally detached from such activity. By the
time of the second edition of _What Computers Can't
Do_ in 1979, the problem of representing what I
had vaguely been referring to as the holistic
context was beginning to be perceived by AI researchers
as a serious obstacle. In my new introduction I
therefore tried to show that what they called the
commonsense-knowledge problem was not really a problem
about how to represent **knowledge**; rather, the
everyday commonsense background understanding that
allows us to experience what is currently relevant
as we deal with things and people is a kind of
**know-how**. The problem precisely was that this
know-how, along with all the interests, feelings,
motivations, and bodily capacities that go to make a
human being, would have had to be conveyed to the
computer as knowledge -- as a huge and complex belief
system -- and making our inarticulate, preconceptual
background understanding of what it is like to
be a human being explicit in a symbolic representation
seemed to me a hopeless task.
For this reason I doubted the commonsense-knowledge
problem could be solved by GOFAI techniques, but I could
not justify my suspicion that the know-how that made up
the background of common sense could not itself be
represented by data structures made up of facts and
rules. . .
When _Mind Over Machine_ came out, however, Stuart
[Dreyfus] and I faced the same objection that had been
raised against my appeal to holism in _What Computers
Can't Do_. You may have described how expertise
**feels**, our critics said, but our only way of
**explaining** the production of intelligent behavior
is by using symbolic representations, and so
that must be the underlying causal mechanism. Newell
and Simon resort to this type of defense of
symbolic AI: "The principal body of evidence for
the symbol-system hypothesis. . . is negative evidence:
the absence of specific competing hypotheses as to
how intelligent activity might be accomplished whether
by man or by machine [sounds like a defense of
Creationism!]"
In order to respond to this "what else could it be?" defense
of the physical symbol system research program, we
appealed in _Mind Over Machine_ to a somewhat vague and
implausible idea that the brain might store holograms
of situations paired with appropriate responses,
allowing it to respond to situations in the way it had
successfully responded to similar situations in the
past. The crucial idea was that in hologram matching
one had a model of similarity recognition that did not
require analysis of the similarity of two patterns
in terms of a set of common features. But the model
was not convincing. No one had found anything
resembling holograms in the brain.
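[A parenthetical illustration, not part of the excerpt: a minimal sketch, in Python and with entirely made-up feature names, of the "set of common features" style of similarity matching Dreyfus refers to just above. It is only meant to make the contrast with hologram-style matching concrete.]

    # Hypothetical, purely illustrative sketch of feature-set similarity:
    # each situation is reduced to a set of discrete features, and
    # "similarity" is just overlap between the sets.

    def feature_similarity(situation_a, situation_b):
        """Jaccard overlap between two feature sets."""
        common = situation_a & situation_b
        union = situation_a | situation_b
        return len(common) / len(union)

    past_situation = {"kitchen", "kettle_boiling", "morning", "alone"}
    new_situation  = {"kitchen", "kettle_boiling", "evening", "guests"}

    print(feature_similarity(past_situation, new_situation))  # 0.333...

    # Dreyfus' point is that deciding which features matter at all is
    # itself the hard, background-dependent part, and it appears nowhere
    # in this picture.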
Minsky gets the brunt of Dreyfus' exasperation and sarcasm.
Introduction to the Revised Edition, pp. 34-36:
In 1972, drawing on Husserl's phenomenological analysis,
I pointed out that it was a major weakness of AI that no
programs made use of expectations. Instead of
modeling intelligence as a passive receiving of
context-free facts into a structure of already stored
data, Husserl thinks of intelligence as a context-
determined, goal-directed activity -- as a **search**
for anticipated facts. For him the _noema_, or
mental representation of any type of object, provides
a context or "inner horizon" of expectations or
"predelineations" for structuring the incoming data. . .
The noema is thus a symbolic description of all the
features which can be expected with certainty in exploring
a certain type of object -- features which remain
"inviolably the same. . ." . . .
During twenty years of trying to spell out the components
of the noema of everyday objects, Husserl found that
he had to include more and more of what he called the
"outer horizon," a subject's total knowledge of the
world. . .
He sadly concluded at the age of seventy-five that he was
a "perpetual beginner" and that phenomenology was an
"infinite task" -- and even that may be too optimistic. . .
There are hints in an unpublished early draft of the
frame paper that Minsky has embarked on the same misguided
"infinite task" that eventually overwhelmed Husserl. . .
Minsky's naivete and faith are astonishing. Philosophers
from Plato to Husserl, who uncovered all these problems
and more, have carried on serious epistemological
research in this area for two thousand years without
notable success. Moreover, the list Minsky includes in
this passage deals only with natural objects, and
their positions and interactions. As Husserl saw, and
as I argue. . ., intelligent behavior also presupposes
a background of cultural practices and institutions. . .
Minsky seems oblivious to the hand-waving optimism of
his proposal that programmers rush in where philosophers
such as Heidegger fear to tread, and simply make explicit
the totality of human practices which pervade our lives
as water encompasses the life of a fish.
Dale wrote:
> Superlative Technocentricities and Sub(cult)ural Futurisms substitute
> faith for foresight, priests for peers, and the pieties of neoliberal
> incumbency for an open democratic futurity.
>
> Whatever the technical idiosyncracies, whatever the fundamentalist
> ethnographic peculiarities, it is this last political point that
> is my own worry and focus here.
Yes, of course.
Why waste energy on [gay lib, civil rights, universal healthcare,
insert least-favorite liburl hobby-horse here] when it only
distracts us from getting to the Singularity, after which
everybody (even the poofs, I gather) will have endless free
rides on the merry-go-round?
Or, as one prominent S'ian actually said on a list back
before the 2000 elections -- S'ians should vote Republican
(as the lesser of two evils), because it's the party
that'll keep the capital flowing to the folks who'll get us to the
Singularity.
And also, as I pointed out to a Singularitarian
a few years ago (to little avail):
Subject: dy/dx -> infinity
. . .I think it's important
for you to understand its implications (though I have
little hope that you will).
If the Singularity is the fulcrum determining humanity's
future, . . .
the point at which dy/dx -> infinity, the very inflection
point itself, then **ALL** morality goes out the window.
You might as well be dividing by zero.
You could justify **anything** on that basis
. . .
The more hysterical things seem, the more desperate,
the more apocalyptic, the more the discourse **and**
moral valences get distorted (a singularity indeed!)
by the weight of importance bearing down on one
[small group of people].
Comment to Nick Tarleton:
I see from a bit of Googling that you're young, and became
enamored of the transhumanists just this year, as a result of
reading Eliezer's _Staring into the Singularity_ (a tract which
I myself was greatly enthusiastic about a decade ago).
Well, FWIW, here's a link to an Orkut community that I created
several years ago to summarize my own disillusionment with the
whole Singularity mishegas. Nobody ever posted there but
me (talk about vanity publishing! ;-> ), but it remains useful
for just this purpose (i.e., giving people access to my views,
however unpopular), and at the same time Orkut shields it
from being generally Googlable, which probably cuts down on
the hate-mail I'd otherwise get.
I think you have to join Orkut to access it, but I believe
anybody can do that now, without a special invitation.
"Unbound Singularity"
http://www.orkut.com/Community.aspx?cmm=38810
"Michael Anissimov (I presume)"
No, I'm a graduate student interested in philosophical utilitarianism and effective charitable giving.
"Had I known that the fear was not justified[**], I would
not have participated in opening this Pandora's box, nor would
Szilard. For my distrust of governments was not limited to Germany."
I'm not sure the analogy is apt. Are you saying that public discussion will increase the likelihood of destructive AI more than that of safe AI?
If we expected that public discussion of nuclear weapons or AI would accelerate development more than it would lead to useful precautions, then I would agree that we should not discuss them.
For instance, if we expected institutional quality to greatly increase (global governance, with political structures conducive to much more rational policy) then it might be better to have 5 years to prepare starting 10 years from now than 12 years to prepare starting today. On the other hand, later development of AI means more time for Moore's law and the robotics industry to create a 'hardware overhang,' resources that an AI could use to rapidly increase its capabilities.
"But the difference here is that weapon-scale nuclear fission was a
well-established technical possibility. The feasibility of AI has
not yet been so demonstrated -- not in the sense in which most Singularitarians
use the word."
Could you clarify here? There were certainly massive unsolved engineering problems to be dealt with in the Manhattan Project, and no guarantee that they would be solvable or on what timescale. It seems that observing chain reactions in fissile material and intelligence in human brains both constitute existence proofs, but there are far fewer engineering steps between the former and nukes than between the latter and AI.
It seems to me that if you have uncertain engineering steps that reduce the probability of a technology being developed, then you should care less about the technology in proportion to that reduction. If 90% of new HIV drugs fail in trials, then you should not pay more for a new chemical entity than 10% of what you would pay if success were guaranteed, but you shouldn't value it at zero. We should care less about AI or catastrophic positive-feedback global warming (much worse than the consensus estimates) because these are quite uncertain possibilities, but we shouldn't ignore them entirely.
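A minimal sketch of that discounting argument, with purely illustrative numbers (nothing here is an actual estimate for HIV drugs, AI, or climate feedbacks):

    # Expected value = value if the possibility pans out, weighted by its
    # probability. Uncertainty argues for discounting, not for ignoring.

    def expected_value(value_if_real, probability):
        return value_if_real * probability

    # A drug candidate worth 100 (arbitrary units) if it survives trials,
    # with a 10% survival rate, is worth about 10 -- not 100, but not 0.
    print(expected_value(100, 0.10))   # 10.0

    # The same arithmetic applies to low-probability, high-stakes
    # possibilities such as human-level AI or runaway warming.
    print(expected_value(100, 0.01))   # 1.0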
"You're putting this forth as a hypothetical development that would
"justify investing substantial effort. . ."?"
It's a reductio. The amount of resources invested in AI over the past decades, and the state of the academic literature and existing projects, make it appear extremely unlikely that human-level AI will be developed in the next 10 years. But if we wait until AI is clearly in the very near future then some long-term preparations will be infeasible.
For instance, consider the developments in game theory, doctrine around the securing of weapons, the development of the 'hotline' and other measures to prevent nuclear war. These could have been worked on before the development of atomic weapons, and could have reduced the likelihood of disaster in the early Cold War.
"If you're suggesting such a "robust theory of intelligence"
actually exists, then I question your reality-testing here
(pace, Dr. Goertzel)."
I made no such suggestion, and don't think such a theory exists. I would assign a probability well under 1% to Novamente serving as the core of an eventual human-level AI, but can't assign zero probability (it's not logically impossible, and our uncertainty about AI means we should have big error bars in both directions on difficulty).
However, software and hardware are improving, and I can't say with high confidence that an AGI project undertaken with abundant talent and resources (e.g. a significant fraction of the world's most talented minds) won't succeed within a few decades. Companies like Google and Microsoft, private individuals such as Paul Allen, Jim Simons, Peter Thiel, or even Ray Kurzweil, and national militaries may all be capable of mustering the talent required to produce human-level AI. Ensuring that such a project has access to a body of careful thought on the risks involved, including philosophical ones (e.g. www.nickbostrom.com/fut/evolution.html), would seem to reduce the likelihood of disaster.
Thinking about issues of philosophy and decision theory that would be relevant in deciding what kind of AI motivations we should seek to instill can be done well before we have a working AI, and might take quite a long time to do properly.
Such work can be made available for all projects (including nonpublic corporate, private, and military ones), and thus seems to offer one way in which action today can reduce the risk from AI developments decades hence.
"The world is so constituted, alas, that one of the best places
for activists to put their energies is precisely in cogently
and incisively debunking other (misguided) activists."
I realize this is substantially tongue-in-cheek, but I would really appreciate an answer on the substantive question of the next best alternative. Debunking useless or distracting causes is primarily important insofar as it reallocates talent to other problems. Where can I best save lives or reduce the probability of human extinction, and how much impact could I expect there?
Oral rehydration therapy, deworming medicine, micronutrient supplementation, etc., mean that dedicating my income to averting African childhood deaths should let me prevent 50,000+ such deaths over my career by paying for marginal distribution of developed techniques. I could donate to the MIT Poverty Action Lab, to improve the efficiency of much larger quantities of aid. I could donate to lobby groups for global health and foreign aid. I could donate in the crowded global warming field, to NASA's asteroid and comet watches, or to the Nuclear Threat Initiative to work against nuclear proliferation. Some cause is *the* best way for me to save existing lives, and some cause (possibly the same) is the very best way for me to reduce the chance of human extinction. What are they?
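For concreteness, the arithmetic behind that 50,000+ figure looks roughly like this; both numbers below are placeholders assumed purely for illustration, not estimates from any particular charity:

    # Deaths averted ~= total career giving / marginal cost per death averted.
    career_donations = 5_000_000     # assumed lifetime giving, in dollars
    cost_per_death_averted = 100     # assumed marginal cost (oral rehydration etc.)

    print(career_donations // cost_per_death_averted)   # 50000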
"Subject: dy/dx -> infinity
. . .I think it's important
for you to understand its implications (though I have
little hope that you will).
If the Singularity is the fulcrum determining humanity's
future, . . .
the point at which dy/dx -> infinity, the very inflection
point itself, then **ALL** morality goes out the window.
You might as well be dividing by zero.
You could justify **anything** on that basis"
This is indeed a big concern, although it arises for most consequentialists from 3rd world poverty without any need to bring in superlative technologies. If you can save a life for even $1000 and have limited remaining earning power, a chance at embezzling $10 million and directing it to save lives poses a really severe moral dilemma. Returning the cash from lost wallets, or correcting a cashier who gives you excess change becomes a source of moral angst.
Utilitarian wrote:
> Where can I best save lives or reduce the probability of
> human extinction, and how much impact could I expect there?
A friend once quoted to me:
"To do good is virtuous, and to wish good to be done is
amiable, but to wish to do good is as vain as it is vain."
This probably isn't what you want to hear, but I think it's
a very unhealthy attitude to expect to have **any** impact
on "the probability of human extinction." Even Bill Gates,
if he's got any sense, shouldn't expect to have any impact
on the probability of human extinction.
If you're rich, toss the guilt and use the resources at your
disposal to find something you're genuinely interested in.
**Intrinsically** interested in. Even if it's collecting
hubcaps.
And -- this **isn't** meant sarcastically -- if you think you
could benefit from psychotherapy or psychopharmacology,
go for it. To tell the truth, your writing sounds a little
hypomanic to me. If you need a mood stabilizer, then by all
means get a prescription for one.
Good luck.
"This probably isn't what you want to hear, but I think it's
a very unhealthy attitude to expect to have **any** impact
on "the probability of human extinction." "
The reasoning seems similar to that for voting. I can be almost completely sure that my vote won't matter, but there remains a very, very tiny chance of a tie in which my vote will make a large difference. Depending on the state and district, the change in subjective probability may be one in hundreds of thousands or one in tens of millions (depending on prior polling and its reliability, how evenly balanced those polls were, etc.), but it's still a change in probability.
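A toy version of that calculation, with every number an assumption chosen only to show the shape of the argument:

    # Even a minuscule chance of being decisive can carry a non-trivial
    # expected value if the stakes are large enough.
    p_decisive = 1 / 10_000_000       # assumed chance of casting the tie-breaking vote
    stakes = 100_000_000_000          # assumed dollar value of the better outcome

    print(p_decisive * stakes)        # 10000.0 -- expected value, in dollars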
"If you're rich, toss the guilt and use the resources at your
disposal to find something you're genuinely interested in."
Very rich by world standards, American upper middle class next June. Doesn't this undercut the whole argument that Singularitarianism is bad because of its ineffectiveness? And seriously, if we shouldn't feel guilty about not preventing cheaply preventable deaths of children in developing countries, what's your objection to people neglecting "[gay lib, civil rights, universal healthcare,
insert least-favorite liburl hobby-horse here]"?
"And -- this **isn't** meant sarcastically -- if you think you
could benefit from psychotherapy or psychopharmacology,
go for it. To tell the truth, your writing sounds a little
hypomanic to me. If you need a mood stabilizer, then by all
means get a prescription for one."
Er...I'll take that one under advisement. Certainly I'll admit that my approach to ethics is abnormal, and may well reflect some brain abnormality (a gross anatomical one, since pretty much all mental variations are also brain variations, depending on how you describe things like glandular activity and muscle memory):
www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Util-VMPFC-TiCS07.pdf
Zell Kravinsky's marriage certainly suffered from his charitable activities, and he has been tormented by guilt about the suffering of others, but that doesn't seem to me adequate reason to say that he should not have saved all the lives and helped all the people that he did.
http://en.wikipedia.org/wiki/Zell_Kravinsky
It seems that at least part of where we differ at the moment is just a different view of aggregative ethics and improbable events, but the underlying empirical question is still of interest. If you can specify some field where it is feasible to have more impact than the steps I mentioned above, I would take it very seriously; but the argument that AI is very probably much less imminent than is suggested by the hype of people like Ben Goertzel, and less imminent than Kurzweil indicates, doesn't change anything (from the perspective of directing my action) unless it puts the expected value of promoting AI safety below some alternative. My direct questions along the lines of "well, then what's the best way to do X" are an attempt to pin down your position as precisely as possible and distinguish ethical and empirical aspects of disagreement.
Utilitarian wrote:
> [I wrote:]
>
> > [T]oss the guilt and use the resources at your
> > disposal to find something you're genuinely interested in.
>
> Doesn't this undercut the whole argument that Singularitarianism
> is bad because of its ineffectiveness?
I don't think Singularitarianism is bad solely because it's
"ineffective". I think it's bad to the extent that it has
cultish overtones, even while claiming superior "rationality".
Further, I think that the seriously irrational (to the
point of outright Narcissistic Personality Disorder) self-confidence
and arrogance of some of its most outspoken gurus **muddies**
rather than clarifies the discourse surrounding intelligence,
artificial or otherwise.
I think it's an example (one of many) of an unfortunate
tendency in human social affairs -- the propensity of the
unjustifiably self-confident to accrete hopeful followers
who are desperate to find somebody to tell them how to live.
I fear you may be one of the latter.
> [T]he argument that AI is very probably much less imminent
> than is suggested by the hype of people like Ben Goertzel,
> and less imminent than Kurzweil indicates, doesn't change anything. . .
In my view, the sort of AI that the Singularitarians claim to
be trying to find a way to make "safe" -- a symbol-juggling,
algorithmic "intelligence" like the "sophotechs" portrayed
in John C. Wright's "Golden Transcendence" SF books (popular
among the Extropians, I believe) -- will never happen **at all**.
I can understand the continuing appeal of the idea to folks
who hope to reduce the whole world to mathematical formalisms,
to Ayn Rand fans, and to people on the autistic spectrum,
but I believe it's a fantasy.
If you want to know about what makes people, or super-people,
or artificial people, **safe**, then study human psychology.
Why are there psychopaths, and other kinds of folks who
resist the hindrances of the social fabric? Could the
world get along without them? Or do we need more of them? ;->
Steven wrote:
> jfehlinger, I would be very interested in pointers to what you consider
> the most serious technical critiques of "Superlative" versions of MNT and AGI.
For some more discussion of the technical issues that, to say the least, make the success of the MNT project far from a certainty, take a look at:
http://www.softmachines.org/wordpress/?p=175
I might say that, although as a physicist I have engaged at a technical level with the proponents of the MNT vision for a few years, I'm increasingly concluding that the kind of cultural and ideological perspective that people like Dale are bringing to this discussion is perhaps more valuable and pertinent than the technical discussion. Or perhaps, to state this in a stronger way, discussions that at first seem to be technical in nature in actuality are proxies for profound ideological disagreements.