Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Wednesday, April 29, 2009
Superlative Strategery
STEP ONE: Read science fiction.
STEP TWO: Circle-Wank.
STEP THREE: Techno-heaven!
Tuesday, April 28, 2009
More Reductionism
Upgraded and adapted from the Moot:
The repudiation of reductionism in the sense I mean is not an embrace of supernaturalism but a simple reminder that one cannot derive ought from is, coupled with the reminder -- unfortunately, less widely affirmed but quite as crucial -- that oughts are nonetheless indispensable to human flourishing.
That life, intelligence, freedom are not supernatural but natural phenomena suggests that they are, indeed, susceptible, in principle, of natural analysis. But this is certainly no justification for treating our own conspicuously preliminary empirical understandings of life, intelligence, freedom -- or, better yet, essentially figurative formulations that scarcely even pretend to factuality (or consensus, whatever futurological protests to the contrary) except to their faithful -- as already adequate to these phenomena when they palpably are not adequate, just because it is not logically impossible that eventual understanding may become adequate.
Superlativity Exposed
Upgraded and adapted from the Moot:
I hate to break it to you, but these figures you like to cite as your authorities -- Kurzweil, Drexler, Moravec, even enormously likable fellows like de Grey (and don't even get me started on that atrocity exhibition Yudkowsky) -- are quite simply not taken seriously outside the small circle of superlative futurology itself, at least not for the claims you are investing with superlative-endorsing significance.
Scientists rightly and reasonably cherish outliers, they benefit from provocation, and at their best will give a serious hearing to the extraordinary so long as it aspires to scientificity -- but there is a difference between this appreciation and the actual achievement of the standard of scientific consensus, just as there is a difference between the achievement of a popular bestseller and that of passing muster as science.
Ever heard of a citation index? You claim to care about facts above all. Well, citation indexes tell a story about the relation of superlativity to scientific consensus that there is no denying if you are truly the reality-based person you want to sell yourself as.
You can't claim at once to be a paragon of science while eschewing its standards.
You simply can't.
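For what it's worth, the citation-index point is trivially checkable these days. Here is a minimal sketch of such a check, in Python, assuming the freely available OpenAlex scholarly-metadata API -- a present-day convenience named purely for illustration, with only crude name matching; any citation index would tell the same story:

```python
# A rough sketch of the citation-index check described above.
# Assumes the public OpenAlex API (https://api.openalex.org); the
# names queried are illustrative, and author disambiguation is crude.
import json
import urllib.parse
import urllib.request

def author_citation_summary(name):
    """Return rough publication and citation totals for the
    best-matching author record, or None if nothing matches."""
    url = ("https://api.openalex.org/authors?search="
           + urllib.parse.quote(name))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    if not data.get("results"):
        return None
    top = data["results"][0]  # take the top search hit
    return {
        "name": top.get("display_name"),
        "works": top.get("works_count"),
        "citations": top.get("cited_by_count"),
    }

if __name__ == "__main__":
    for name in ["Ray Kurzweil", "Hans Moravec"]:
        print(name, "->", author_citation_summary(name))
```

The point of such a check is not the raw numbers themselves but where the citations live: in popular and movement literature rather than in the peer-reviewed venues of the relevant fields.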
You keep trying to divert these discussions of the conceptual difficulties and figurative entailments of your futurological discourse into superficially "technical" discussions about superficially predictive "differences of opinion" about trumped-up technodevelopmental timelines -- but you have not earned the right to be treated as somebody having a technical or predictive discussion in these matters.
No developmental "timeline" will spit out the ponies you are looking for at the end of the rainbow. This isn't a question of "predictions."
Pining for an escape from error-proneness, weakness, or mortality isn't the same thing as debating how best to land a rocket on the Moon or cure polio.
I am a teacher of aesthetic and political theory in the academy, precisely the sort of person many superlative futurologists like to deride as a muzzy, effete, fashionably-nonsensical relativist, but I am for all that a champion of consensus science, a champion of more public funding for research and more public science education, and as a proper champion of consensus science I am the one who tells you that consensus science is no ally of Robot Cultism, no ally of yours.
The proper questions provoked by the phenomena of superlative futurology are: just what renders the aspirations to superintelligence, superlongevity, and superabundance so desirable and so plausible to those who are personally invested in superlative futurological sub(cult)ures organized by shared desire for and faith in these transcendentalizing aspirations?
Turning to these questions one no longer participates in any of the preferred topics that preoccupy the Robot Cultists themselves, who like to treat pseudo-science and superficially scientific forms as shared public rituals, the indulgence in which substantiates in the present the reality effect of their wish-fulfillment fantasies about "The Future," so-called. No, when we treat superlativity as what it is, a narrative genre and a faithful sub(cult)ure, then quickly and quite properly the discussion instead turns terminological, discursive, literary, psychological, ethnographic.
It is no wonder that so many would-be superlative futurologists, pseudo-scientists that they are, so disdain the thinking of humanities scholarship, which -- while indeed non-scientific -- is not ultimately anti-scientific as their own tends to be, and is precisely the thinking most relevant and most capable of exposing them for what they are.
Saturday, April 25, 2009
No, You're the Cultist!
Upgraded and adapted from the Moot: "Extropia" declares me to be no different from a fulminating Creationist in my assessment that the curious claims of the Robot Cultists are faith-based in their essence. He declares the very confidence of my belief to reveal me to be the True Cultist. From inside the charmed circle of True Belief, glimpses of the outside world seem to get a bit... skewed sometimes, don't they, though?
As it happens, I don't claim to be completely correct in any aspect of my life, I'm not that sort of person at all. I'm a pragmatist by conviction and temperament both, and have no truck with certainties. I do hold strong opinions and delight in testifying to them and am well pleased to own up to the consequences. That is the substance of freedom in my view.
Among these opinions of mine, I am quite confident in declaring Robot Cultism to be a constellation of faith-based initiatives connected in only the most superficial way to the secular democracy of sensible educated enlightened people. The various branches of superlative futurology and their organizational life in the various Robot Cults are marginal both to consensus science and to prevailing progressivity.
Do I need to recite those views of the Robot Cultists again for the peanut gallery, by the way? The preoccupation with "migrating" organismic intelligence into cyberspace? Thereby to "immortalize" or super-longevize it? And so to "live" on in a virtual or nanobotic-slave Heaven? All under the beneficent eye of a history-shattering superintelligent Robot God?
Deny the obvious marginality of these beliefs all you want, you simply expose yourself instantly thereby as a loon (though the beliefs themselves have gone a long way in preparing us for that possibility already). No "arguments" are necessary on this score and, indeed, to indulge them at this level is actually to concede you ground you could not earn on your own crazy terms.
Once we are clear that it is you who are making the extraordinary claims (and, to be fair, I'll cheerfully concede and celebrate that extraordinary claims have often contributed their measure to human progress and delight, especially as aesthetic matters), then we should be agreed that yours are not the terms that define these debates; it is the skeptics you need to convince, on terms intelligible to us, with evidence that passes muster on the terms of consensus science, with patient elaborations rather than impatient declarations of your self-evident superiority and certainty despite your utter marginality.
Unfortunately, I suspect you will find that once you engage in a good-faith effort to translate your project into such terms all that will be left that compels any kind of attention will consist of fairly mainstream secular progressive support of well-funded well-regulated equitably-distributed open technoscience in the service of solving shared problems.
Nobody needs to join a Robot Cult to work on actual software security, or actual healthcare, or actual materials science. The deeper psychological and social needs that are truly the ones the Robot Cult answers to -- for the overabundant majority of its True Believers -- are just as well met by a good therapist, a big bottle, a fine book of poems, some modest non-moralizing faith-practice, a good sound occasional fuck, or what have you. As they are for the rest of us.
Hell, Robot Cultists can still indulge for aesthetic and subcultural kicks in sf fandoms and futurological daydreams for all I care (I'm a big sf geek myself after all, I get the sensawunda thing) -- they just shouldn't keep pretending and trying to sell that what they're doing is science or policy or progressive politics in any sense of the word.
Once all that is well and truly cleared up, Robot Cultists are just silly people following their idiosyncratic bliss and doing nobody any harm but possibly themselves. Who cares? Let your freak flags fly, as I will mine, for all the world to see.
It is the superlative futurological derangement of public technodevelopmental deliberation, it is the anti-democratizing politics of superlative futurology, it is the deeper more prevailing anti-democratic corporate-militarist futurology the Robot Cultists symptomize in their extremity that are the real dangers and problems that interest me.
Sure, little the Robot Cultists say makes sense on its own terms, either, but that's true of lots of other people and viewpoints that I don't devote my energy to critiquing.
Monday, April 20, 2009
Hannah Arendt on AI
And now for the third and final excerpt of something like an unexpected trilogy of excerpts from Hannah Arendt today. Although this is likely to be the first many readers encounter, in consequence of the reverse-chronological arrangement of posts in a blog, I do want to stress that this is the third, and in many ways least interesting, of the trilogy, an excerpt that needs the earlier two (first here and second here) to take on its real salience as a complement to what I criticize as superlative futurology. This passage appears in The Human Condition, on pp. 171-172, and nicely ties together some of the themes from the preceding discussion.
If it were true that man is an animal rationale in the sense in which the modern age understood the term, namely, an animal species which differs from other animals in that it is endowed with superior brain power, then the newly invented electronic machines, which, sometimes to the dismay and sometimes to the confusion of their inventors, are so spectacularly more "intelligent" than human beings, would indeed be homunculi. As it is, they are, like all machines, mere substitutes and artificial improvers of human labor power, following the time-honored device of all division of labor to break down every operation into its simplest constituent motions, substituting, for instance, repeated addition for multiplication. The superior power of the machine is manifest in its speed, which is far greater than that of human brain power; because of this superior speed, the machine can dispense with multiplication, which is the pre-electronic technical device to speed up addition. All that the giant computers prove is that the modern age was wrong to believe with Hobbes that rationality, in the sense of "reckoning with consequences," is the highest and most human of man's capacities, and that the life and labor philosophers, Marx or Bergson or Nietzsche, were right to see in this type of intelligence, which they mistook for reason, a mere function of the life process itself, or, as Hume put it, a mere "slave of the passions." Obviously, this brain power and the compelling logical processes it generates are not capable of erecting a world, are as worldless as the compulsory processes of life, labor, and consumption.
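Arendt's aside about machines "substituting, for instance, repeated addition for multiplication" is no mere figure of speech; it names a real and very simple mechanical technique, which a toy sketch (in Python, purely as illustration) makes plain:

```python
def multiply(a, b):
    """Multiplication reduced, as Arendt describes, to its 'simplest
    constituent motions': b successive additions of a."""
    total = 0
    for _ in range(b):  # assumes b is a non-negative integer
        total += a
    return total

assert multiply(7, 6) == 42  # the machine never 'multiplies' at all
```

It is the machine's sheer speed, not any richer faculty, that lets it dispense with multiplication as a distinct operation, which is exactly the point the passage presses.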
Again, we have in the reference to the "worldlessness" of instrumental calculation and its effects a unique Arendtian usage. For Arendt the "world" is profoundly political in its substance, akin to the sense in which when we speak of "worldly" concerns we often mean to indicate more than just planetary or natural concerns, but public and cultural affairs more generally. On p. 52 of The Human Condition, she writes that "the term 'public' signifies the world itself." She continues
This world... is not identical with the earth or with nature... It is related, rather, to the human artifact, the fabrication of human hands, as well as to affairs which go on among those who inhabit the man-made world together [emphasis added --d]. To live together in the world means essentially that a world of things is between those who have it common, as a table is located between those who sit around it; the world, like every in-between, relates and separates men at the same time.
Among other things, it seems worthwhile to draw attention to Arendt's idiosyncratic understanding of the "world" especially since this is the world the love of which Arendt announced in her personal motto, Amor Mundi. Think of the way in which we are born into a speech, a "mother tongue," the existence of which long precedes our birth and will continue on long after our death, but which, for all that still consists entirely of our own performances of it, performances that at once sustain it in its existence but also change it (through coinages, figurative deviations, and so on).
Hannah Arendt on Common Sense
This is from The Human Condition, pp. 283-284. To be clear here, Arendt is proposing that two quite radically different modes of rationality have been described with the term "common sense," and that the prevailing one substitutes for public deliberation an instrumental calculation insensitive to, and indeed obliterative of, the fragile substance of enacted and re-enacted human freedom (for which, in any case, it also substitutes the meaningless seductions of an endlessly amplified instrumental force). I'm pushing the grammar of the first sentence quite a bit here -- though not to the cost of its actual sense, I am confident -- not because I think it is unclear or ungraceful as written (far from it), but just because I want to stress the way in which it is proposing the key distinction on which what follows depends:
[C]ommon sense... once had been the ["sense"] by which all other senses, with their intimately private sensations, were fitted into the common world, just as vision fitted man into the visible world[. But] now [it has become] an inner faculty without any world relationship. This sense now [is] called common merely because it happened to be common to all. What men [sic] now have in common is not the world but the structure of their minds, and this they cannot have in common, strictly speaking; their faculty of reasoning can only happen to be the same in everybody. The fact that, given the problem of two plus two we all will come out with the same answer, four, is henceforth the very model of common-sense reasoning.
Note that in denying that humans can have the structure of their minds identically in common, Arendt is not denying that there is obvious salient structural overlap (in fact, she takes this for granted by the paragraph's end); she is denying instead that salience is restricted to the ways in which they do overlap. Philip Rieff's encomium to Freud, that he "democratized genius by giving everybody a creative unconscious," is very much in point here. That we might indeed be neurocomputationally identical in our shared recourse to instrumental rationality provides little reason to imagine we are comparably identical in our capacity to make meaning, express value, divert literal into figurative language, unpredictably interrupt custom and calculation and so "change the subject" in the deepest sense, and so on. Arendt's denial that, "strictly speaking," we can "have in common" the "structure of [our] minds" is to a certain extent little more than the facile materialist admission that we think with different brains even when we think with them similarly, and that the material structure of these different brains surely attests to the different memories, customs, and dispositions they incarnate. Be that as it may, Arendt is freighting this distinction of instrumental as against deliberative conceptions of common sense with extraordinary significance. She continues:
Reason, in Descartes no less than in Hobbes, becomes "reckoning with consequences," the faculty of deducing and concluding, that is, of a process which man at any moment can let loose in himself. The mind of this man -- to remain in the sphere of mathematics -- no longer looks upon "two-plus-two-are-four" as an equation in which two sides balance in a self-evident harmony, but understands the equation as the expression of a process in which two and two become four in order to generate further processes of addition which eventually will lead into the infinite.
Here, I believe, we find the gesture of superlativity in perhaps an unexpected place, perhaps a foundational place, in which an initial reduction or impoverishment of reason into instrumentality is compensated by a promissory amplification of instrumentality, means without end or ends, functionally substituting force for freedom.
This faculty [ie, instrumental calculation] the modern age calls common-sense reasoning; it is the playing of the mind with itself, which comes to pass when the mind is shut off from all reality and "senses" only itself.
It is crucial to grasp that the "reality" from which this instrumentalized mind is cut off is the substantial reality of the public sphere, the world in common made and sustained by peers acting in concert. Arendt is not literally mistaking instrumentalization for a kind of comatose state, although she would likely point out that both states amount to the radical objectification of a subject no longer able legibly to act in the world on her own. Arendt is not denying that objects of calculation have an alterity that can frustrate our ends and confound our expectations, but proposing that they must first be constituted as objects, subsumed within our conventions, to be legible as frustrating or confounding in salient ways in the first place. Under the mode of instrumentalization a thing to be known must first be made by us or made-knowable by us; we can only trust what we make. This is an argument she elaborates at great length prior to the passage I have quoted here, in an extended reading of the Cartesian Doubt. As always, where Arendt is concerned there is much more to say than I have time to say here. For now, I will treat the next couple of sentences continuing from the passage above as a conclusion of this particular discussion:
Whatever difference there may be [between intelligent individuals, once their intelligence is reduced to instrumentality] is a difference of mental power, which can be tested and measured like horsepower. Here the old definition of man as an animal rationale acquires a terrible precision: deprived of the sense through which man's five animal senses are fitted into a world common to all men, human beings are indeed no more than animals who are able to reason, "to reckon with consequences."
What intrigues me about this last comment is that while it seems to decry the reduction of human beings to the status of animals (a reduction that has never much disturbed me, since I am quite happy to concede nonhuman animals their share of dignity and a stake in the collaboration of peers in sharing and making a world worth living in), it seems to mark more emphatically in fact the further reduction of human animals to mere mechanisms, an altogether more troubling move it seems to me, and one prone to all sorts of mischief.
Hannah Arendt on Futurology
Arendt is speaking here of "think tanks" like the RAND Corporation, which were gaming out genocidal and suicidal war scenarios in the epoch of Vietnam and Mutually Assured Destruction, with the most murderous and disastrous imaginable consequences, fancying themselves consummately rational in their palpable irrationality all the while. Arendt is not addressing what I describe as superlativity here, but it will be very clear that superlativity is an intelligible amplification of this thoughtlessness misconstrued as deliberation. The piece is excerpted from On Violence, from the anthology Crises of the Republic, pp. 108-110.
[T]here are, indeed, few things that are more frightening than the steadily increasing prestige of scientifically minded brain trusters in the councils of government during the last decades. The trouble is not that they are cold blooded enough to "think the unthinkable," but that they do not think. Instead of indulging in such an old-fashioned, uncomputerizable activity, they reckon with the consequences of certain hypothetically assumed constellations without, however, being able to test their hypotheses against actual occurrences. The logical flaw in these hypothetical constructions of future events is always the same: what first appears as a hypothesis -- with or without its implied alternatives, according to the level of sophistication -- turns immediately, usually after a few paragraphs, into a 'fact,' which then gives birth to a whole string of non-facts, with the result that the purely speculative character of the whole enterprise is forgotten. Needless to say, this is not science but pseudo-science[.]
This last comment is crucial, since with this judgment it becomes clear that Arendt's earlier description of the futurologists as "scientifically minded" was not an attack on science but on a kind of pseudo-science that sells itself as science. Needless to say, the "unthinkable" in this passage is mostly a matter of the actual contemplation of nuclear war (an inherently and absolutely unreasonable and unconscionable calculation), but we know from the "Prologue" to The Human Condition that it is not only the unprecedented self-destructive potential of nuclear weapons that confronts humanity with its dissolution via the thoughtless unfolding of the instrumental logic arising from technique unrestrained by public deliberation, indeed technique amplified and misconstrued as an apt substitute for the freedom of public deliberation.
In Superlativity, in my sense of the term, the "unthinkable" has connected up to the theological "unthinkable," to the Mystery of Divinity evoked by the very incoherence of the omni-predicates through which "God" is presumably apprehended as unapprehendable. The promises of personal transcendence via the technodevelopmental aspirations to superintelligence, superlongevity, and superabundance preoccupy superlative futurology, but they are pseudo-scientific in Arendt's sense of the term, while mobilizing the anti-scientific energies of the Mystery as well. Taking up the superficial coloration of scientificity while failing to pass muster according to its legitimate forms, Superlativity even more extraordinarily evokes worldly experiences like intelligence, life, and emancipation, and then evacuates them of their worldly substance as biological, social, historical phenomena, in a repudiation of the world and an embrace of supernatural reward ("The Future") that is quintessentially faithful.
Arendt's critique of futurology continues on, a bit further down the page. You will discover that I am not forcing a false association on Arendt in describing her critique as anti-"futurological," even if I do extend its terms in a number of ways I can't know she would approve of.
Events, by definition, are occurrences that interrupt routine processes and routine procedures; only in a world in which nothing of importance ever happens could the futurologists' dream come true. Predictions of the future are never anything but projections of present automatic processes and procedures, that is, of occurrences that are likely to come to pass if men [sic] do not act and if nothing unexpected happens; every action, for better or worse, and every accident necessarily destroys the whole pattern in whose frame the prediction moves and where it finds its evidence. (Proudhon's passing remark, "The fecundity of the unexpected far exceeds the statesman's prudence," is fortunately still true. It exceeds even more obviously the expert's calculations.) To call such unexpected, unpredicted, and unpredictable happenings "random events" or "the last gasps of the past," condemning them to irrelevance or the famous "dustbin of history," is the oldest trick in the trade; the trick, no doubt, helps in clearing up the theory, but at the price of removing it further and further from reality. The danger is that these theories are not only plausible, because they take their evidence from actually discernible present trends, but that, because of their inner consistency, they have a hypnotic effect; they put to sleep our common sense, which is nothing else but our mental organ for perceiving, understanding, and dealing with reality and factuality.
The force of this final point turns on Arendt's understanding of common sense, and as it happens that understanding is one that made her an early, under-appreciated critic of the traditional program of artificial intelligence. I will turn briefly to that understanding in my next post. What I would emphasize for now, though, is that Arendt is not simply claiming that futurology underestimates the complexity, dynamism, and vicissitudes of the history it claims to predict and so should take greater care to qualify its overconfident assertions (although that is indeed a recommendation most futurologists would do well to take on board). She is making the more forceful claim that futurology as a discourse is premised on the substitution of the mode of reason that is instrumental calculation for the mode of reason that is public deliberation. Since the latter is for Arendt incomparably better suited to the substance of human history -- the narrative of a diversity of peers unpredictably acting in the world -- to which futurology addresses its attention, this substitution risks not only factual and predictive errors but, more seriously still, the inculcation of an insensitivity to that substance of history and its freedom that actually manages to undermine its reality. To lose sight of differences that make a difference, like the difference between political power and instrumental force or the correlated difference between public deliberation and instrumental calculation, results, as Arendt writes later in the same piece, "in a kind of blindness to the realities they correspond to" (p. 142), and since these are political realities that must be enacted and re-enacted to maintain their reality, blindness to their salience is too likely the prelude to their loss.
The other thing to say is that it is possible, as always, to read Arendt's lucid and graceful prose with a sense of real gratification but without quite grasping the full force of her arguments, since she deploys everyday terms like "routine," "act," and "calculation" in a very specific rather than glib way, and since the force of her account ultimately derives from the ways in which these terms are embedded in the provocative constellation of distinctions she is endlessly introducing into conventional thinking while sometimes seeming simply to be thinking conventionally. This makes even long excerpting of her work a tricky business, since it is easier than usual to draw an incomplete or misleading insight from taking her writing out of its extended context. I hope I can recompense the risk of multiplying such misunderstandings through injudicious excerpting by seducing readers into reading the actual texts on their own terms through judicious excerpting.
Sunday, April 19, 2009
Superlativity Is Neither Enlightened Nor Scientific
Upgraded and adapted from a response of mine in the Moot:
There is nothing in current technique that "implies" the arrival at the superlative outcome in which you are personally invested.
What I see is humanity discovering things and applying these discoveries to the solution of shared problems (and usually creating new problems as we go along) where you seem to see a "trend," a series of stepping stones along the path to an idealized superlative outcome. This time, you are calling it "control of matter with atomic precision." What you probably really mean by this is something like the arrival of "drextech," or the "nanofactory," a robust programmable poly-purpose self-replicating room-temperature device that can transform cheap feedstock into nearly any desirable commodity with a software recipe.
I call this superlative outcome "superabundance," and this particular superlative aspiration is also familiar in a great deal of digital utopianism and virtuality discourse of the last decade, just as it suffused discourses of automation and plastic in the post-war period before that, just as it drove the alchemical project of turning lead into gold for ages before that.
The aspiration to superabundance is the infantile fantasy of a circumvention of the struggle with necessity, ananke: in psychoanalytic terms a pining for a return to the plenitude represented by the Pleasure Principle and renunciation of the exactions represented by the Reality Principle. Or, in different terms, it is an anti-political fantasy of a circumvention of the struggle to reconcile the ineradicable diversity of the aspirations of our peers with whom we share the world (where all are satisfied, no personally frustrating reconciliation is necessary).
In both of these aspects, it seems to me that this superlative aspiration is an irrationalist repudiation of the heart of what Enlightenment has typically seen as its substance -- the struggle for autonomous adulthood (as against subjection by parental, priestly, or otherwise unaccountable authorities) and for the consensualization, via general welfare and the rule of law, of the disputatious public sphere. It is worth noting that many superlative futurologists like to sell themselves as exemplars of "Enlightenment" while indulging in this infantilism, anti-politicism, and irrationalism. In a word, they're not.
It is not the available science that inspires your superlative aspirations, but science that provides the pretext and rationalization for your indulgence in what is an essentially faith-based initiative.
We are talking here and now about superabundance, and in particular superabundance in its nano-Santalogical variant, but the same sorts of moves are taking place in the other variations: in which Singularitarians, for example, indulge the wish-fulfillment fantasy of either personally achieving or at least of bearing witness to the arrival of post-biological superintelligence, the Robot God Who, if Friendly, solves all our problems for us, or Who, if Unfriendly, ends the world in an ubergoo apocalypse, in either case constituting a history-ending Singularity (hence the name of their particular variant of the Robot Cult); or in which techno-immortalists indulge the wish-fulfillment fantasy of personal immortality -- or superlongevity, or "indefinite lifespan," or whatever term is currently fashionable among them to try to sound less religious while pining after the quintessentially religious promise of eternal life.
Common to these discourses is the divestment of a familiar phenomenon (like personhood, intelligence, or life) of the actual organismic, social, and biological substance and context in which it has always hitherto been intelligible, very likely to the fatal cost of the coherence of the resulting ideas of these familiar phenomena, compensating for this divestment of substance with an investment of radically hyperbolic aspiration. According to the terms of my Superlative critique, these hyperbolic aspirations function more or less as pseudo-scientific correlates to the conventional omni-predicates of theology -- omniscience, omnipotence, omnibenevolence -- translated from the project of apprehending the supernatural divinity of God to the project of a personal transcendence into a differently super-natural demi-divinity via technoscience, characterized by superlative aspirations to superintelligence, superlongevity, and superabundance.
Now, quite apart from all that, you go on, in the usual way, earnestly to recommend to me that the cutting edge of superlative futurological discourse has abandoned this or that particular formulation, has taken up this or that "technical" variation, that I have failed to distinguish the position of Robot Cultist A from that of Robot Cultist B, and so on.
You will forgive me, but there is no need for those of us who confine our reasonable technoscientific deliberation to beliefs that are warranted by consensus science to lose ourselves in fine-grained appreciation of differences that fail to make the difference that actually makes a difference in such matters. You rattle off the handful of preferred figures who tell you what you want to hear, barnacled up in who knows what baroque jargon and Ptolemaic epicycles, as though these were widely respected, widely cited figures outside your sub(cult)ure.
But they are not.
As a very easily discovered matter of fact, they are not.
It isn't a sign of discernment but of its opposite, as it happens, that you can recite the minute differences that distinguish three disputants on the question of how many angels can dance on a pinhead, when the overabundant consensus of relevant warranted belief has become either indifferent or hostile to the notion of angels dancing on pinheads as such.
It is the extraordinary assertion of belief that demands extraordinary proofs and patient elaborations. You are invested in a whole constellation of flabbergastingly extraordinary claims -- expectations of superhumanization and near-immortalization and paradisiacal plenitude -- and yet seem to demand as the price of skeptical engagement with your discourse that critics become conversant with disputes the relevance of which depends on the prior acceptance of the whole fantastically marginal and extraordinary enterprise in which they are embedded. Meanwhile, the public life of your discourse, whatever the technical details you believe to undergird it, continues to proceed at a level of generality and hyperbole built up of metaphors, citations of myth, and activations of infantile wish-fulfillment fantasies, supported, at most, with vague conjurations of inevitable progress, triumphalist reductionism, and a handful of "existence proofs," usually from biology, that aren't actually analogous at all in their specificity to the idealized outcomes that drive superlativity, at least not at the bedeviling level of detail that concerns consensus scientists and accountable policy-makers but not so much ideologues, priests, and scam artists.
We are offered up claims built upon claims built upon claims, few of which have excited the interest or support of a consensus of scientists in the relevant fields, and fewer still of which invest these claims with the idealized outcomes that are the preoccupation of those who indulge most forcefully in superlative discourses as such.
Superlativity, in a word, is not science. It is a discourse, opportunistically taking up a highly selective set of scientific results and ideas and diverting them to the service of a host of wish-fulfillment fantasies that are very old and very familiar, dreams of invulnerability, certainty, immortality, and abundance that rail against the finitude of the human condition.
These fantasies are a distraction from, and a derangement of, those aspects of Enlightenment that would mobilize collective intelligence, expressivity, and effort toward the progressive democratization, consensualization, and diversification of public life and the practical solution of shared problems.
Progress is not transcendence, nor is enlightenment a denial of human finitude.
There is more than enough sensationalism and irrationalism distorting urgently needed sensible public deliberation on, for example, the environmental and bioethical quandaries of disruptive technoscientific change at the moment.
The Robot Cultists and their various noise machines are not helping. At all.
There is nothing in current technique that "implies" the arrival at the superlative outcome in which you are personally invested.
What I see is humanity discovering things and applying these discoveries to the solution of shared problems (and usually creating new problems as we go along) where you seem to see a "trend," a series of stepping stones along the path to an idealized superlative outcome. This time, you are calling it "control of matter with atomic precision." What you probably really mean by this is something like the arrival of "drextech," or the "nanofactory," a robust programmable poly-purpose self-replicating room-temperature device that can transform cheap feedstock into nearly any desirable commodity with a software recipe.
I call this superlative outcome "superabundance," and this particular superlative aspiration is also familiar in a great deal of digital utopianism and virtuality discourse of the last decade, just as it suffused discourses of automation and plastic in the post-war period before that, just as it drove the alchemical project of turning lead into gold for ages before that.
The aspiration to superabundance is the infantile fantasy of a circumvention of the struggle with necessity, ananke: in psychoanalytic terms a pining for a return to the plenitude represented by the Pleasure Principle and renunciation of the exactions represented by the Reality Principle. Or, in different terms, it is an anti-political fantasy of a circumvention of the struggle to reconcile the ineradicable diversity of the aspirations of our peers with whom we share the world (where all are satisfied, no personally frustrating reconciliation is necessary).
In both of these aspects, it seems to me that this superlative aspiration is an irrationalist repudiation of the heart of what Enlightenment has typically seen as its substance -- the struggle for autonomous adulthood (as against subjection by parental, priestly, or otherwise unaccountable authorities) and for the consensualization, via general welfare and the rule of law, of the disputatious public sphere. It is worth noting that many superlative futurologists like to sell themselves as exemplars of "Enlightenment" while indulging in this infantilism, anti-politicism, and irrationalism. In a word, they're not.
It is not the available science that inspires your superlative aspirations, but science that provides the pretext and rationalization for your indulgence in what is an essentially faith-based initiative.
We are talking here and now about superabundance and in particular superabundance in its nano-Santalogical variant, but the same sorts of moves are taking place in the other variations: in which Singularitarians, for example, indulge the wish-fulfillment fantasy of either personally achieving or at least of bearing witness to the arrival of post-biological superintelligence, the Robot God Who, if Friendly, solves all our problems for us, or Who, if Unfriendly, ends the world in an ubergoo apocalypse, in either case constituting a history-ending Singularity (hence the name of their particular variant of the Robot Cult); or in which techno-immortalists indulge the wish-fulfillment fantasy of personal immortality -- or superlongevity, or "indefinite lifespan," or whatever term is currently fashionable among them to try to sound less religious while pining after the quintessentially religious promise of eternal life.
Common to these discourses is the divestment of a familiar phenomenon (like personhood, intelligence, or life) of the actual organismic, social, and biological substance and context in which it has always hitherto been intelligible, very likely to the fatal cost of the coherence of the resulting ideas of these familiar phenomena, but then providing a compensation for this divestment of substance with an investment of radically hyperbolic aspiration. According to the terms of my Superlative critique, these hyperbolic aspirations function more or less as pseudo-scientific correlates to the conventional omni-predicates of theology -- omniscience, omnipotence, omnibenevolence -- translated from the project to apprehend the supernatural divinity of God to the project of a personal transcendence into a differently super-natural demi-divinity via technoscience, characterized by superlative aspirations to superintelligence, superlongevity, and superabundance.
Now, quite apart from all that, you go on, in the usual way, earnestly to recommend to me that the cutting edge of superlative futurological discourse has abandoned this or that particular formulation, has taken up this or that "technical" variation, that I have failed to distinguish the position of Robot Cultist A from that of Robot Cultist B, and so on.
You will forgive me, but there is no need for those of us who confine our reasonable technoscientific deliberation to beliefs that are warranted by consensus science to lose ourselves in fine-grained appreciation of differences that fail to make the difference that actually makes a difference in such matters. You rattle off the handful of preferred figures who tell you what you want to hear, barnacled up in who knows what baroque jargon and ptolemaic epicycles, as though these are widely respected, widely cited figures outside your sub(cult)ure.
But they are not.
As a very easily discovered matter of fact, they are not.
It isn't a sign of discernment but of its opposite, as it happens, that you can recite the minute differences that distinguish three disputants on the question of how many angels can dance on a pin-head, when the overabundant consensus of relevant warranted belief has become either indifferent or hostile to the notion of angels dancing on pinheads as such.
It is the extraordinary assertion of belief that demands extraordinary proofs and patient elaborations. You are invested in a whole constellation of flabbergastingly extraordinary claims -- expectations of superhumanization and near-immortalization and paradisiacal plenitude -- and yet seem to demand as the price of skeptical engagement with your discourse that critics become conversant with disputes the relevance of which depends on the prior acceptance of the whole fantastically marginal and extraordinary enterprise in which they are embedded. Meanwhile, the public life of your discourse, whatever the technical details you believe to undergird it, continues to proceed at a level of generality and hyperbole built up of metaphors, citations of myth, activations of infantile wish-fulfillment fantasies, and supported, at most, with vague conjurations of inevitable progress, triumphalist reductionism, and a handful of "existence proofs," usually from biology, that aren't actually analogous at all in their specificity to the idealized outcomes that drive superlativity, at least not at the bedeviling level of detail that concerns consensus scientists and accountable policy-makers but not so much ideologues, priests, and scam artists.
We are offered up claims built upon claims built upon claims, few of which have excited the interest or support of a consensus of scientists in the relevant fields, and fewer still of those scientists invest these claims with the idealized outcomes that are the preoccupation of those who indulge most forcefully in superlative discourses as such.
Superlativity, in a word, is not science. It is a discourse, opportunistically taking up a highly selective set of scientific results and ideas and diverting them to the service of a host of wish-fulfillment fantasies that are very old and very familiar, dreams of invulnerability, certainty, immortality, and abundance that rail against the finitude of the human condition.
Such discourses are a distraction from, and a derangement of, those aspects of Enlightenment that would mobilize collective intelligence, expressivity, and effort to the progressive democratization, consensualization, and diversification of public life and the practical solution of shared problems.
Progress is not transcendence, nor is enlightenment a denial of human finitude.
There is more than enough sensationalism and irrationalism distorting urgently needed sensible public deliberation on, for example, the environmental and bioethical quandaries of disruptive technoscientific change at the moment.
The Robot Cultists and their various noise machines are not helping. At all.
Wednesday, April 15, 2009
Let's Talk About Cultishness
On the one hand, I find the organizational forms of superlative futurology so ridiculous that I often judge that they demand nothing but ridicule in return. But, on the other hand, I think that the discourses of superlative futurology represent a symptom and reductio of prevailing neoliberal developmental discourse that repays our more serious scrutiny, and I also think that the hyperbolic rhetoric arising out of the sub(cult)ures of superlative futurology is congenial to sensationalist mass media and contributes in ways we should take seriously to the derangement of sensible deliberation on technodevelopmental questions at an important historical moment of disruptive change. So, I regard superlativity as ridiculous but I take it seriously, too. In the moments in which I am impressed most by its ridiculousness I find myself referring to organized sub(cult)ural formations of superlative futurology as "The Robot Cult" and its representatives as "Robot Cultists." How apt is that charge when all is said and done, and just how glib am I being in making it? Let's talk about that a little bit, shall we?
I will take another comment by "Hjalte" that I've upgraded and adapted from the Moot, this time one in which she takes umbrage at some of the insinuations arising from the charge of Robot Cultism, as the occasion for some scattered speculations on the relations of superlative futurology, its organized forms, the sub(cult)ures associated with these, and finally the derisive designation of Robot Cultism itself.
"Hjalte" protests: It is not like I worship the man as if he was the guru in some sort of robot cult. I said particularly: not that he is the first to come up with such ideas. And those other people I refer to is not (just) the rest of the incrowd at SIAI. It is people like Sam Harris and Daniel Dennett, and likely countless other philosophers and neuroscientists of whom I have not heard. (maybe even some of the old Greek philosophers as well, they had moments of good insight). Also I don’t say that anyone possess full knowledge of these issues, though the state of the art may be a little above ”various reactions going on in the brain”.
The "man" in question is would-be guru Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence, which one might describe, together with would-be guru Ray Kurzweil's Singularity U, as something like Robot Cult Ground Zero. Sam Harris, needless to say, isn't a neuroscientist. Daniel Dennet is a philosopher. I liked his book Elbow Room very much, and like him I am a champion of Darwin and enjoy some of the things he writes in championing Darwin himself. It's nice that you like some of the Greeks, as well. Me too. I must say that there is a strange mushy amalgam of bestselling popular science authors and "the new atheism" polemicists with a broad family resemblance, rather than an explicit program exactly, holding them together, mostly involving a rather pointless and hysterical assertion in my view of technical triumphalism through reductionism. I always find myself wishing secularists would go back to reading James and Dewey rather than all this facile reductionism misconstrued as respect for science. This is a brutal oversimplification, but it seems to me, roughly speaking, that in mis-identifying fundamentalism with the humanities, they tend to advocate a reductionism that re-writes science itself in the image of a priestly authoritarianism with too much in common with the very fundamentalisms they claim to disdain (and rightly so).
Anyway, it's easy to see why you would connect the Robot Cultists you cherish to this popular science assembly (some of the authors in which I personally find more or less appealing in their proper precincts), and probably in a loose sort of way with the Edge.org folks (I tend to gravitate, predictably enough, more toward the more progressive and capacious Seed Scienceblogs set myself). This amalgam of insistent scientism -- again, I'm painting with too broad a brush, but you take my point, I'm hoping -- is more or less what the American Ayn Rand enthusiasts of the 60s (also something of a cult, mind you) mutated into, by way of the L5 Society, by the time of the irrational exuberance of the 90s. Wired's libertechian "digirati" and Extropian transhumanism were very much a part of that moment -- to their everlasting embarrassment, one would think. Vinge, Kurzweil, and Yudkowsky either originated together with it or arose out of it (of these three, only Vinge is a figure of lasting significance in my opinion). This fandom-cum-sub(cult)ure hasn't really changed all that much in broad outline over the years, apart from occasional terminological refurbishments and fumigations in the name of organizational PR, since Ed Regis offered his arch ethnography Great Mambo Chicken way back. Brian Alexander's Rapture, written years and years later, is most extraordinary in my view for the lack of change in the futurological cast of characters he discovers, the claims they make, the (lack of) influence they exert in their marginality, and so on.
Be all that as it may, let's have something of a reality check here, shall we?
If you are concerned about software and network security issues (and there are plenty of good reasons to be), you certainly need not join a Robot Cult to work on them, you need not think of yourself as a "member" of a "movement" that publishes more online manifestos than actually cited scientific papers.
Why would efforts to address software and network security issues impel one into a marginal sub(cult)ure in which one finds a personal identity radically at odds with most of one's peers and what is taken to be a perspective on and place within a highly idiosyncratic version of human history freighted with the tonalities of transcendence and apocalypse?
I don't doubt you when you say that you do not literally worship would-be Robot Cult gurus like Yudkowsky or Kurzweil or Max More or whoever (depending on the particular flavor of superlativity you most invest in personally), but the fact remains that these figures are incredibly marginal to scientific consensus, and you locate yourself very insistently outside that mainstream yourself when theirs are the terms you take up to understand what is possible and important and problematic in the fields of your greatest interest.
The fact that this self-marginalization is typically coupled among superlative futurologists with a defensive assertion that you in fact represent a vanguard championing a super-scientificity, even while you actively disdain consensus-scientificity, suggests that there are other things afoot in the identity you have assumed, whatever your reasons, than simply a desire to solve software and network security problems.
There are, after all, thousands upon thousands of serious, credentialized, published professionals and students working to solve such problems who have never heard of any of the people you take most seriously and who, upon hearing of them, would laugh their asses off. This possibly should matter to you.
Transhumanism, singularitarianism, techno-immortalism, extropianism, and all the rest might seem to differ a bit from classic cult formations in that they do tolerate and even celebrate dissenting views on the questions that preoccupy their attention. What one notices, however, is that the constellation of problems at issue for them is highly marginal and idiosyncratic yet remains unusually stable, and that the disputatious positions assumed in respect to these issues are fairly stable as well.
The "party line" for the Robot Cult is not so much a matter of memorizing a Creed and observing Commandments, but of taking seriously as nobody else on earth does (sometimes by going through the ritual motions of dispute itself) a set of idealized outcomes -- outcomes that would just happen to confer personal "transcendence" on those who are preoccupied with them, namely, superintelligence, superlongevity, and superabundance -- and fixating on a set of "technical" problems (not accepted as priorities in the consensus scientific fields on which these "technical" vocabularies parasitically depend) standing in the way of the realization of those idealized outcomes and the promise of that transcendence.
It is not so much a hard party-line that is policed by the Robot Cult, but a circumscription of debate onto an idiosyncratic set of marginal problems and marginal "technical" vocabularies in the service of superlative transcendentalizing aspirations rather than conventional progressive technodevelopmental aspirations.
This marginality is compensated by the fraught pleasures of a highly defensive sub(cult)ural identification, the sense of being a vanguard rather than an ignoramus or a crank, the sense of gaining a highly simplified explanatory narrative and a location within it as against the ignorance and confusion that likely preceded the conversion experience (or, to be more generous about it, for some, the assumption of the futurological enthusiasm that impelled them into this particular fandom), not to mention the offering up of a tantalizing glimpse and promise of superlative aspirations, however conceptually confused, however technically implausible.
For some, superlativity functions as a straightforward faith-based initiative, and mobilizes the conventional authoritarian organizational circuit of True Believers and would-be Priestly Authorities, while for others it is a self-marginalizing sub(cult)ural enthusiasm more like a fandom. The fandom may be less psychologically damaging and less fundamentalist and less prone to authoritarianism (or not), but it nurtures and mobilizes the worst extremes in organized superlative futurology all the same.
The True Believers and the Fans will all refer just the same to "the movement" and to themselves as "transhumanists" or "singularitarians" or what have you, imagining themselves different sorts of people in consequence of their identification with that movement and with the Movement of History in which it is imagined uniquely to participate along a path to transcendence or apocalypse.
Beyond all that, as I said, superlative futurology also continues to provide an illuminating symptom of and clarifyingly extreme variation on prevailing neoliberal developmental discourse as such, which is saturated with reductionisms, determinisms, utopianisms, eugenicisms, and libertopianisms very much like the ones that find their extreme correlates in superlative futurology. It is as both symptom and reductio of neoliberal developmentalism that superlative futurology probably best repays our considered attention.
On their own, the Robot Cultists are a rather clownish collection, even if one should also pay close attention to the ways in which sensationalist media take up their facile and deranging framings of technodevelopmental quandaries to the cost of sense at the worst possible historical moment. One should likewise remain vigilant about the organizational life of superlative futurology, since even absurd marginal groups of boys with toys who say useful things to incumbent interests while fancying themselves the smartest people in the room and Holders of the Keys of History can do enormous damage if they connect to good funding sources, however palpably idiotic their actual views (as witness Nazis and Neocons and all the usual suspects in this dumb dreary disastrous vein).
Robotic Reductionism
Upgraded and adapted from the Moot, "Hjalte" wrote (as nicely excerpted by "AnneC"):
The heretic thought that consciousness is nothing but the way the various reactions going on in the brain feels when that brain happens to be yours ... [surely] sounds like hollowing out of human experience of the worst kind imaginable.
I am a materialist on matters of mind, I have been a cheerful atheist for a quarter century, I am a champion of consensus science, not a scientist by any means but hardly uninformed about technoscience questions, and my politics are those of secular progressive consensual democracy.
You are simply straightforwardly not understanding my point. I don't agree that it is particularly heretical or harrowing to attribute consciousness to neurochemistry in an organismic brain.
Indeed, that statement shouldn't be the least bit of a surprise to you since one of my repeated accusations against the so-called Singularitarians, dead-enders as most of them are in the old school Program of Strong AI (and this has even been a repeated accusation of mine quite literally in the very thread to which you are contributing and which I would imagine, then, that you have taken the time to read), is that despite their own materialism they tend to treat the actual substantial form of materialization hitherto associated with intelligence as comparatively negligible, fancying that complex software and the complex behaviors it can provoke can be properly denominated as "intelligence" without arising out of anything like the dynamisms or exhibiting anything like the dynamisms of the actually-existing organismic intelligences from which they are appropriating the term.
Despite these failures, their discourse is nonetheless saturated with the paraphernalia of intelligence as it is actually incarnated in the world. Discussions of artificial intelligence inevitably lead into discussions of intentions, values, optimizations, smartness, personhood, rights, friendliness and so on, none with any good justification.
To a certain extent these figurative borrowings from one domain to another to prop up our understanding of new phenomena and new problems are inevitable and useful. The term in rhetoric to describe this figure is catachresis, in case you're interested (I teach this stuff to my university students), which describes derangements of literal usage to describe phenomena to which they didn't originally apply and also the coinage of new terms to accomplish this (these coinages, after all, typically involve borrowings from other languages and so on).
Usually these borrowings are functionally proto-theoretical: their plausibility builds on the sense, right or wrong, of the analogical or associational propriety of the traffic between the old and the new domain over which the borrowing is taking place. Also, the trace of the older associations of the term reverberates into the new usages, yielding rich ramifying associations that continue to exert their force on the ways new usages play out in the world.
It would seem to me that the attribution of "intelligence" to computers and, subsequently, the reduction of intelligence to computation has been an enormously compelling catachresis that has palpably confused far more than it has illuminated and in fact has yielded a poisonous harvest of incomprehension where matters of testifying to the experiences of critical and abstract and empathetic and passionate and imaginative thinking, understanding, and judgment are concerned, the testifying on which distinctively human forms of agency and meaning actually depend for their abiding intelligibility, force, and flourishing.
It is not materialism as such that has hollowed out the human understanding of our own freedom and agency and meaningfulness, it is an instrumentalization of reason that was never compelled by materialism and which has made its advocates ever more insensitive to and dismissive of the difference between persons and robots, the difference between the exercise of freedom in the presence of one's peers and the exertion of instrumental force translating means into ends. Instrumentality, of course, cannot provide the ends at which its efficiencies should be aimed, and so freedom rewritten in the image of its imperialism is exactly what one would expect of a robot, blind, meaningless, brute-force mistaken as emancipation rather than the radical impoverishment it would be. This is not a problem of materialism, it is a problem of reductionism.
Sunday, April 12, 2009
Pretending To Be Biologists
Why do so many computer scientists throng the ranks of superlative futurology?
I'll grant -- as many do not -- that at least some of "computer science" really is a science, rather than always only just engineering practices or art practices (although an enormous amount of "computer science" is indeed better described that way), say, a science struggling to understand principles underlying material forms of information processing or something like that.
As I said, I'll grant that at least some computer scientists really are scientists. Of course, many of the Robot Cultists don't even have degrees in computer science, however loosely construed, and indeed many of them, I daresay probably most of them, are little more than serfs robotically coding in the veal fattening pens that pimple corporate America but sell themselves in techno-utopian chatrooms (and possibly also, sadly, to themselves) as the equivalent of plasma physicists.
It makes a certain sense at least that one would find code-jockey handwavers of the "cybernetic totalist" school of non-thought among the singularitarian Robot God branch of superlative futurology and among the "Mind Uploading" enthusiasts. But why do computer coders throng the nanofactory crowd, and the cryonics crowd, and the negligible senescence longevity medicine crowd as well -- communities of futurological faithfulness preoccupied with scientific questions about which they endlessly declare their consummate superiority to scientific consensus but which depend most of all on biology?
Saturday, April 11, 2009
Actions Are Not Behaviors
Here's more debate with the steadfast Singularitarians, again adapted from exchanges playing out at Accelerating Future, this time zeroing in on some assumptions that get to the heart of some of these disputes in my view. Also, I daresay the differences getting aired here are not confined always only to the superlative futurologists, who represent in this a symptom and expressive extremity of this sort of rhetoric, but point to prevailing assumptions that problematically suffuse the techno-determinism and reductionism of a great deal of corporate-militarist developmental discourse more generally.
I said, at some point or other, that "I think the words 'smart' 'intelligent' [and] 'act' shouldn’t be used literally to describe mechanical behavior."
In response to which I received from "Roko": So, the thinking process isn’t limited to only biological organisms, but the thinking process isn’t a mechanical computing either? It’s something we can’t imitate with any kind of a machine? Do I understand you correctly?
In response to this same statement of mine "Thom" responded: [Y]ou are directly contradicting literally the whole field of artificial intelligence. From the wikipedia article on AI:
Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as “the study and design of intelligent agents,” where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.
I do indeed regard "the thinking process" as one limited to biological organisms, as a factual, empirical matter in the present, except in science fiction and futurological handwaving, in which it has exactly the same substantial existence as do the ghosts and wizards and wands in Harry Potter books. The forms that actually-existing intelligence takes in the world should surely matter to people who claim to care about science as much as Robot Cultists claim to do. Among many other things, I find the term "thought" to encompass, in my view, more personal experiences and worldly phenomena than just reckoning with consequences. Part of the way I try to get at this difference is to stipulate a pretty conventional distinction, at least in technical philosophical debates turning on the present issues, between the "acts" of subjects (as relatively free actors) and the "behaviors" of objects (as mere mechanical playthings): I don't claim that this distinction always accords with common usage, but I think it often does, and at any rate it helps us get at a difference that makes a difference to us in moral, aesthetic, ethical, and political matters.
Is it right to say of this move of mine that in it I am "directly contradicting literally the whole field of artificial intelligence"?
I would distinguish, in a rough and tumble sort of way, the actual software-coding practices, useful general principles, and testable results of that field, on the one hand, from the rhetorical practices through which narrative and figurative frames are mobilized to educate, inspire, and make sense of practices, principles, and results of that field, on the other hand. The notion of "artificial intelligence" as it presently plays out in the world depends for its force on conceptual confusions and ill-digested metaphors as far as I can see.
You will notice that shade-aversive plants satisfy the definition of intelligence cited above quite as well as any machine does. Further, it is rightly a matter of some controversy whether we should impute "success" to the behavior of a system that has no personal stake in its accomplishment. And, again, I have used the word "behavior" because I want to distinguish in this context acting as a political term from behavior as a more conventionally causal one. Strictly speaking, it is not a denigration of the useful results arising out of computer science (even when it decides it wants to call itself "artificial intelligence" without reason or sense) to point out that there are practical, conceptual, ethical, and other problems afoot in the ways in which computer science sometimes goes on to make sense of what it is doing and where it is going.
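To make the plant point concrete, here is a minimal sketch of my own -- hypothetical Python, every name in it invented for the occasion -- of a "system that perceives its environment and takes actions which maximize its chances of success" in precisely the cited textbook sense:

def light_level(position):
    # The "environment": brightness peaks at position 10.
    return -abs(position - 10)

def shade_averse_system(position=0, steps=50):
    # Perceive the local light, step toward more of it. Nothing else.
    for _ in range(steps):
        here = light_level(position)
        # "Take the action which maximizes its chances of success":
        if light_level(position + 1) > here:
            position += 1
        elif light_level(position - 1) > here:
            position -= 1
    return position

print(shade_averse_system())  # settles at the brightest spot, 10

By the cited definition this little loop is an "intelligent agent." So is a thermostat. So, indeed, is the shade-averse plant. That is a measure of the definition, not of the machine.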
A Fresh Argument
"Thomas," who accused me of being a Vitalist in an earlier turn of my waltz with the false, now accuses me of being a Vitalist:
From all this crap I only see, that Dale is a kind of vitalist.
He has no fresh arguments for his cause.
From all this crap I only see that Singularitarians are reductionists. They have no fresh arguments for their cause.
I'm not positing any kind of mysterious supernatural force in the way that the vitalists did. I'm pointing out that for so-called materialists you sure have an odd way of discounting the non-negligible material incarnation of actually-existing intelligences (on the basis of which alone you can have formed the notion of an emulable intelligence in the first place, given that they are the only actual game in town), and an odd way of discounting many dimensions associated with the actual exhibition of intelligence in the world (which include more than just cognitive reckoning with consequences but also sensitivity, imagination, empathy, emotionality, expressivity, savvy, instinct, improvisation, and a conscience that cannot be reduced only to calculation).
You can discount this objection as woo-woo mysticism if you like, but it looks to me like pointing out errors, one materialist to another. I daresay that dead-enders in the always only endlessly failed predictive powerhouse of the Strong AI Program might wonder what they keep on missing all these years to account for the interminable flummoxing of their certainties.
It's hard to resist the overwhelming sense that at least some Robot Cultists are willing so to substitute a vision of mere amplified calculation for actual intelligence because that reduction enables them to tell a more plausible story that would connect current technoscientific knowledge with the technodevelopmental accomplishment of the outcomes with which they identify so fervently as a community: Namely, first, the arrival of the superintelligent post-biological Robot God who either is Friendly enough to solve all their problems or Unfriendly enough to end the world altogether, as well as, second, the arrival of a "mind-uploading technique" through which their mortal vulnerable error-prone bodily selves can "migrate" into an imperishable digitality in which superintelligence, superlongevity, and superabundance is finally theirs.
Needless to say, whatever the actual sensible programming and science onto which they are glomming in crafting these formulations, this discourse is not itself a scientific one at all, but a discourse connecting selective experience to moral, aesthetic, and political hopes and calling upon older mythic archetypes and theological discourses. This is why it is sometimes handy to have a rhetorician in the house, amidst the blooming, buzzing confusions of fixated coders.
UPDATE: "Roko" jumps in, responding to "Thomas's" attribution to me of "vitalism," suggesting instead:
He seems to be using the word “intelligence” in non-standard way.
But I'm simply using it in a way that captures the way you use it and experience it in your actual life. There is more to intelligence than reckoning with consequences.
Hereupon "Roko" soldiers on, hoping craftily to corner me into the reductionist cul-de-sac in spite of myself, apparently:
you concede that the human mind is a planner-reasoner-problem-solver-speaker-learner,
Among other things.
albeit a more effective one than any currently existing computer program in most domains
"Effectiveness" isn't all intelligence is up to. It also is up to "meaningfulness." And some of the ways in which it finds its way to "effectiveness" connect up to the ways in which it finds its way to "meaningfulness."
Whatever Wikipedia says at the moment on the subject, however fervently you might deny the salience or substance of the dimensions of intelligence to which I refer, I can no more deny them myself than I could deny the pressure that deforms the surface of my fingertips and the slick contact with surface that meets each strike of the keyboard out of which this reply is forming on the screen before my eyes right here, right now.
and that the property of being a planner-reasoner-problem-solver-speaker-learner is a purely algorithmic property which is substrate independent
I don’t think we know that at all yet, and I very much doubt it in any case. Setting aside abstruse philosophical quandaries I might have with such an effort at abstraction and reduction, let's turn instead to considerations I suspect you'll take more seriously (that isn't a compliment): To what extent was the substance of what is entailed at least in part by what you mean by “planner” “problem solver” and so on incarnated in the squishy organismic brain through evolutionary processes having to do with vicissitudes in the environmental idiosyncrasies threatening the survival or enabling the flourishing of the organisms to which we are indebted for the intelligent brains we actually now have? Intelligence isn't math, it's a squishy sloppy-wet mess, like a kiss.
That aside, I definitely won’t have you reduce the word “reasoner” to number crunching (however "pure" in your revealing terms). Reason is a far more capacious word in my book. For heaven’s sake, humanity is the Aristotelian rational, that is also to say, political animal!
Singularitarian Stick-To-It-Iveness Will Get the Robot Cultists to Robo-Heaven By Brute Force If Necessary!
Under the heading, "Those who can, create, those who can’t, criticize," a new interlocutor, calling themselves "DevNull," enters the scene to throw some darts at the effete elitist who dares perturb the precincts of hard he-man sooper-science with his muzzy emotionalist relativizing rhetoric:
To me Dale seems to say: There is none now, therefore there shall never be any. That’s no argument.
Angels, demons, fairies, genies, ghosts, golems, perpetual motion machines, immortality elixirs, love potions, squares circled.
It’s difficult yes, but not impossible. Even super super super difficult doesn’t mean impossible. What Einstein did was not super super super difficult but actually impossible to most people. But not to him.
Oh, look. Another Robot Cultist who fancies himself the next Einstein and the Wright Brothers all rolled into one.
Unfortunately for our singularitarian superlative futurologists, their terms are not “difficult” or even “super difficult” -- this isn’t a problem of having a “can-do” attitude -- the terms are actually deeply confused in a way that provokes deep and dangerous confusions.
The actually-existing intelligence from which the Robot Cultists have superficially formed the idea of engineering its like is actually non-negligibly materialized in organismic brains and also socialized in the substance of its exhibition and far more multidimensional in its ways and means than are accommodated in the facile formulations Singularitarians accept as representing its “accomplishment.”
The Robot Cultists can’t achieve it on their terms because in achieving what they are actually seeking they would discover they confronted something altogether otherwise than an intelligence deserving of the term, because they don’t understand what they are even looking for in flabbergastingly basic ways. This is not, by the way, a "prediction" of the future, this is not a competing "prophetic utterance" and "The Future," this is an effort to expose an error in a superlative futurological discourse.
What’s clearly impossible to Dale (as it seems), may be rather obvious to some super super super smart scientist. I’ve existed in a world where technologies I now use daily didn’t exist. And tomorrow, I shall exist in a world where new technologies exist. One of them might be AGI. It’s just a machine (like us) with capabilities we call “intelligence”. Intelligence is the most powerful force in the universe. So what? Big deal.
You’re right, "DevNull." Just a little bit 'o stick-to-it-iveness and you’ll have that perpetual motion machine licked! Who cares what some whiny pomo Berzerkeley humanities types say. Accentuate the positive, sooper-scientists, you’ll get that circle squared, pull yourselves up by the bootstraps, extreme to max, dood!
Thursday, April 09, 2009
Understanding Superlative Futurology
"Superlativity" as I use the term very specifically in my critique isn't a synonym for "really big epochal technodevelopmental changes." Like most technoscientifically literate people, I expect those, too, assuming we don't destroy ourselves any time soon instead with our waste or with our weapons. Instead, Superlativity in my sense of the term names the effort to reductively redefine emancipation in primarily instrumental terms and then expansively reorient the project of that emancipation to the pursuit of personal "transcendence" through hyperbolic misconstruals of technoscientific possibility.
This personal transcendence is typically conceived in terms that evoke the customary omni-predicates of theology, transfiguring them into super-predicates that the futurological faithful personally identify with, but proselytize in the form of "predictions" of imaginary technodevelopmental outcomes. Nevertheless, superlativity in my view is a literary genre more than a research program. It relies for its force and intelligibility on the citation of other, specifically theological/wish-fulfillment/transcendentalizing discourses, more than it does on proper technoscience when all is said and done. It is a way of framing a constellation of descriptions mistaken for facts, and embedding them into a narrative that solicits personal identification, which then forms the basis for moralizing forms of sub(cult)ural advocacy.
The three super-predicates, recall, are superintelligence, superlongevity, and superabundance, and they correlate to the three theological omni-predicates -- omniscience, omnipotence, and omnibenevolence. But like the avowed articles of faith of the omni-predicates with which they are correlated, these super-predicates are ultimately incapable of functioning as factual assertions at all, they are self-consuming quasi-factual placeholders for the brute assertion of faith itself. Indeed, superlative aspirations are conceptually confused to the point of illegibility, and their advocacy amounts to what is essentially a faith-based initiative.
Neither culture nor subcultures deliver deification -- technoscience will never purchase omni-predication, omnipotence, omniscience, omnibenevolence -- and never will some robotic deployment of superlative technique deliver the secularized analogues to the damaging daydream of a deity the Robot Cultists are indulging in, superintelligence, superlongevity, and the circumvention via superabundance of the impasse of stakeholder politics in a world shared with a diversity of peers.
All culture is prosthetic, and all prostheses are culture. Technoscience is simply the collective prosthetic elaboration of agency. And agency, in turn, is our effort to achieve and maintain social legibility, accomplish ends, testify to and make sense of our lives, peer to peer, in a diverse abiding material discursive world that both enables and frustrates us in this. Freedom is our word for that collective experience. Freedom cannot properly be reduced to capacitation, the muscular amplification that is instrumentality, any more than it is properly fancied as an accomplishment of solitary sufficiency: our encumbrance is a condition of our collective elaboration of freedom, peer to peer, and what we want of our technique and what we make of it is likewise conditioned fundamentally by its play in an absolutely unpredictable interminable interdependent diversity of peers and their works.
There is no "overcoming" to be had of these limits, however many present limits and customs we overturn, inasmuch as finitude as such is literally the constitutive condition of the very experience of freedom we cherish. The superlative futurologists would idiotically obliterate freedom in their clumsy wrongheaded infantile wish-fulfillment fantasy of a toypile so high it reaches Heaven, of an endlessly amplified instrumental power that transcends freedom and delivers superlative variations on an omnipredicated godhead.
Each of the super-predicates of superlative discourse amounts to a personal investment in a stealthy article of faith proffered up as endlessly-deferred scientific "predictions." Confronted with such superlative utterances it is entirely beside the point to indulge in what appear to be "technical" disputes about the validity of the scientific claims that are hyperbolized into rationales for superlative articles of faith or to debate technodevelopmental timelines for superlative "outcomes." To indulge superlative futurologists in these preferred arguments is as little scientific as debating the number of angels who can dance on a pin-head with a monk or poring over Nostradamus with some disasterbatory enthusiast to "determine" the exact date the world will end.
The phenomenological payoff for the True Believer, so long as these conversations play out in real time, is to confer onto their imaginary object of faith a substantial reality that the object itself cannot otherwise attain. It is better for everyone not to indulge this sort of irrationality at all, and certainly not to confuse this sort of thing with actual science or actual policy discourse to the cost of the indispensable work these enterprises actually do. Or, at any rate, one should understand this sort of thing as an essentially idiosyncratic aesthetic or moral matter on the part of its enthusiasts and treat it (even celebrate it as one always can appreciate kooky marginal fandoms) as one would comparable enthusiasms in their proper precinct.
Robot Cultists Decry My Pseudo-Science
"Thomas" sums up my skepticism on the Robot God priesthood and would-be Mind-Upload Immortalists in a word: "Vitalism."
Vitalism? Really?
It's like you want to pretend that noticing the actually salient facts connected to the materialization of consciousness in brains is tantamount to believing in phlogiston in this day and age.
I am a materialist in matters of mind and that means, among other things, that I don't discount the logical possibility that something enough akin to intelligence to deserve the description might be materialized on a different substrate. But logical possibility gives us no reasons to find ponies where there aren't any, and there is nothing in computer science or in actual computers on offer to justify analogies between them and intelligence, consciousness, and so on. That's all just bad poetry and media sensationalism and futurological salesmanship.
When Robot Cultists start fulminating about smarter-than-human AI the first thing I notice is that they tend to have reduced human intelligence to something like a glandular calculator before going on to glorify calculators into imminent Robot Gods. I disapprove both of the reductionist impoverishment of human intelligence on which this vision depends and of the faith-based unqualified deranging handwaving idealizations and hyperbolizations that follow thereafter.
The implications of the embodied materialization of human intelligence are even more devastating to superlative futurological wish-fulfillment fantasies of techno-immortalization via "uploading" into the cyberspatial sprawl, inasmuch as the metaphor of "migration" (yes, that's all it is, a metaphor) from brain-embodied mind to digital-materialized mind is by no means a sure thing if we accept, as materialists would seem to do, that the actual materialization is non-negligible after all.
UPDATE: Of course, "Roko" jumped right into the fray at this point.
In response to this comment of mine from the above -- “I don’t discount the logical possibility that something enough akin to intelligence to deserve the description might be materialized on a different substrate” --
Roko pants: "So let me get this straight: you think it is possible to build a computer that would deserve the name “intelligent”. From this I presume that you think it is possible to build a computer that is intelligent and smarter than any human -- as in, it can do any mental task as well as any human can, and it can do certain mental tasks that humans cannot do. Am I correct here?"
Of course, I said nothing about building a computer. You all can see that, right? You're all right here literally reading the same words Roko did. They're all right here in front of us. I get it that this is all Roko cares about since he thinks he gets to find his pony if only computers get treated as people. But computers are actually-existing things in the world, and they aren’t smart and they aren’t showing any signs of getting smart. Roko is hearing what he wants to hear here -- both in my response and, apparently, also from his desktop (you don't understand, Jerry, he loves me, he loves me).
It should go without saying, but being a materialist about mind doesn’t give me or you or Roko permission to pretend to find a pony where there isn’t one. I can’t have the conversation Roko seems to want to have about whether he is “correct” or “incorrect” to draw his conclusion from what I have said, since all that has happened as far as I can see is that he has leaped off the deep end into spastic handwaving about computers being intelligent and smarter and acting this or that way, just because I pointed out that I don’t attribute mind to some mysterious supernatural force.
Despite all that, I also think, sensibly enough, that the words “smart,” “intelligent,” and “act” can’t be used literally to describe mechanical behavior, that these are only metaphors when so applied, and indeed metaphors that clearly seem utterly to have bewitched and derailed Roko (and his “community,” as he puts it) to the cost of sense.
I mean, who knows what beasts or aliens or what have you we might come upon who might be intelligent or not, however differently incarnated or materialized? But when we are talking about code and computers and intelligence we are talking about furniture that actually exists, and to attribute the traits we associate with intelligence to the things we call computers or the behaviors of software is to play fast and loose with language in ways that suggest either confusion or deception or both as far as I can see.
Is Superlativity Worthy of Consideration?
Upgraded and adapted from the Moot, brave "Anonymous" opines:
You're right Dale, few will listen to you about superlativity, now or ever.
Well, it's a philosophical critique and only a minority of people engage in such philosophical critique. By the way, this isn't an elitist condemnation of the majority -- people can be thoughtful without being philosophical, strictly speaking, after all.
Meaning, few will start saying "yes Dale, you're right, transhumanist discourse is wrong or silly or harmful." To the contrary, a lot of serious, bright, and thoughtful people will likely continue to see transhumanist discourse as having value no matter how many times you repeat your critique.
Ah, poor little Robot Cultist. Nearly everybody who comes into contact with the transhumanists decides they are silly and wrong and dismisses them on the spot as kooks, and quite rightly so. No doubt some people will continue to be drawn to superlativity, for reasons like the ones I mention at the end of the post to which brave "Anonymous" is responding: namely, "in order to sell their scam[s] or cope with their fear of death and contingency or indulge their social or bodily alienation or lose themselves in wish-fulfillment fantasies inspired by science fiction or try to gain some sense of purchase, however delusive, on their precarious inhabitation of a dangerously unstable corporate-militarized world[.]"
Only a vanishingly small minority of people are transhumanist-identified or avowed singularitarians and so on. Thoughtfulness is not exactly the quality these people share in my view.
Most people don't take the superlative futurologists and Robot Cultists seriously enough in the first place to understand why I devote the time I do to critiquing them. I don't think many grasp that superlative futurology is a symptom and clarifying extreme expression of corporate-militarist developmental discourse more generally, and that such futurology, in turn, is the quintessential ideological expression of neoliberalism.
I do think it is regrettable that I have not managed to attract more attention from like-minded critics of corporate-militarism, but I must say that not convincing a few dumb boys who fetishize their toys to give up their Robot Cult is hardly any kind of abiding regret of mine where the superlativity critique is concerned.
Tuesday, April 07, 2009
The Robot Cultists Have All the Facts on Their Side
"Roko" serves up the usual premature dismissal:
If you have a fact-based argument as to why smarter than human AI is not possible then please tell me.
Just what assumptions and frames are embedded in your notion of "smarter" here, and are the implications of those assumptions matters of fact? Are differences arising from these assumptions open to adjudication on the basis of what you consider to be facts?
People who have trouble distinguishing science fiction from science should be less cocksure that they always have the facts on their side, and that their skeptics are always ignorant or irrational.
Is "smartness" a matter of instrumental or formulaic calculation, are sensitivity, imagination, improvisation, criticism, expressivity dimensions contained in your notion of "smarter than human AI"?
Does it matter or not to your visions of post-biological smartness that intelligence has always only been materialized in brains? Does it matter that performances of intelligence are always social, and that in some construals collaboration is already a form of greater-than-personal intelligence?
If not, why not? At what point is the trait you claim to be so palpably possible sufficiently remote from the actual phenomena denoted by the term "intelligence" that you might properly be compelled (by the demands of sense, I mean) to find some other word for what you are talking about?
What are the stakes of your attribution of "possibility" to the "arrival" of this smartness, whatever you happen to mean by it? Is it logical possibility? Is it theoretical possibility, however well-substantiated or not, however remote or not? Is it proximate practical possibility capable of attracting investment capital or demanding immediate regulation?
Do these distinctions figure at all in your determination of whether or not this question of engineering "smarter than human AI" is worthy of serious consideration?
If not, why not? Wouldn't these sorts of distinctions figure in most practical considerations of the kind you seem to think you are engaging in?
If you want to sell what looks to me like a faith-based initiative concerning the arrival of post-biological "superintelligence" you'll discover that skeptics you want to persuade don't have to meet your terms, you have to meet ours. It's the extraordinary claim that demands the extraordinary substantiation.
Your personal challenge to me is finally irrelevant, of course, since the challenge of scientific consensus is the one that confronts your claim and so far you have failed to attract that consensus. You may be able to find a cul-de-sac in which your claim passes muster for a marginal minority (that's the whole point of joining a Robot Cult, presumably), and you are surely able to best me or at any rate bamboozle me in some exchange on some technical matter I have neither the training nor the temperament to address the proper significance of, but all that is neither here nor there.
I pose my own challenges to you on the terms I am fit for, and those terms are relevant even if they are not the only relevant ones in a question like this, and even if you choose to demote them as not "fact based" and hence, apparently, unworthy of consideration. You'll discover that you live in a world with sufficiently many people in it who differ with you on the question of which concerns are the ones worthy of consideration that dismissals only ensure that you are dismissed. That, too, after all, is a fact.
Monday, April 06, 2009
Drextech As Superlative, Now With "Existence Proofs"!
Insofar as all of this recent drextech talk hereabouts is connecting up specifically with my Superlativity critique, it is important to note that where "nanotechnology" is concerned I tend to focus on claims that technodevelopment will arrive at a superabundance through which we will presumably circumvent the impasse of stakeholder politics in a world shared by an ineradicable diversity of peers. The aspect that I focus on is the discourse of superabundance as an anti-political wish-fulfillment fantasy.
This "anti-political" discourse, as it happens, tends to function concretely in the service of elitist-incumbent-authoritarian political ends, whatever the professed politics of its adherents.
But like the other super-predicates of superlative discourse, superabundance amounts to a personal investment in a stealthy article of faith proffered up as endlessly-deferred scientific "predictions." The three super-predicates, recall, are superintelligence, superlongevity, and superabundance, and they correlate to the three theological omni-predicates -- omniscience, omnipotence, and omnibenevolence. But like the avowed articles of faith of the omni-predicates with which they are correlated, these super-predicates are ultimately incapable of functioning as factual assertions at all, they are self-consuming quasi-factual placeholders for the brute assertion of faith itself.
Confronted with superlative utterances it is entirely beside the point to indulge in what appear to be technical disputes about the validity of the scientific claims that are hyperbolized into rationales for superlative articles of faith or to debate technodevelopmental timelines for superlative "outcomes." To indulge superlative futurologists in these preferred arguments is as little scientific as debating the number of angels who can dance on a pin-head with a monk or poring over Nostradamus with some disasterbatory enthusiast to "determine" the exact date the world will end. The phenomenological payoff for the True Believer, so long as these conversations play out in real time, is to confer onto their imaginary object of faith a substantial reality that the object itself cannot otherwise attain. It is better for everyone not to indulge this sort of irrationality at all, and certainly not to confuse this sort of thing with actual science or actual policy discourse to the cost of the indispensable work these enterprises actually do. Or, at any rate, one should understand this sort of thing as an essentially idiosyncratic aesthetic or moral matter on the part of its enthusiasts and treat it (even celebrate it) as one would comparable enthusiasms in their proper precinct.
Anyway, to the extent that the drextech business takes up the faith-based discourse of superabundance it is susceptible to the superlativity critique, but while drextech is exemplary of Superlativity in this way, it is far from essential to it. It seems to me that late-century digital-utopianism and hype about immersive virtualities, as well as quite a lot of mid-century automation discourse, also indulged conspicuously in superlative discourse as the anti-political gesture of an aspiration toward the technodevelopmental "accomplishment" of superabundance. I regard Roland Barthes' reading of "Plastic" in his Mythologies as yet another example of the same.
I think all this is related to but not exactly the same sort of thing that I was talking about when last week I discussed the relation of marginal ideas and warranted consensus in the scientific mode of reasonable belief ascription, and similarly related to but not exactly the same thing Richard Jones is pointing to when he delineates the ways in which the claims arising out of drextech fandoms are marginal to consensus science, are overconfident and uncaveated in ways that are uncharacteristic of consensus science, rely on deeply questionable uninterrogated assumptions that would trouble most consensus scientists, and so on. He's far more qualified to address technical questions at a level of detail that my own different training, not to mention different temperament, ill-suits me for, but I am qualified to know to stick to consensus where I am not qualified to falsify or substantiate candidate assertions for scientific belief, and to recognize marginality by the available criteria on hand and treat it accordingly. It seems to me that my modesty is better suited to proper scientificity than the immodesty of the superlative "champions of science" who would deride my muzzy-headed lie-brul elite fashionable nonsensicality.
It does seem to me there is a common tendency in superlative discourse for its adherents to hang their hats on "existence proofs" the generality of which is scarcely substantial enough to bear the weight of confidence in particular outcomes and particular technodevelopmental trajectories that tend to get connected to them.
It is hard to see how biological realities actually substantiate the confidence drextechians seem to have in the technodevelopmental accomplishment of highly-controlled programmable general-purpose self-replicating room-temperature molecular manufacturing when that actually isn't what we find in biology after all. Similarly, it is hard to see how the realities of organismic intelligence actually substantiate the confidence AI-enthusiasts have in the technodevelopmental accomplishment of differently materialized intelligences, let alone bear the weight of confidence "uploaders" have that consciousness can migrate across substrates without loss.
One hears techno-immortalists declaring that because intelligence is material rather than supernatural this means that presently biological intelligence can migrate into imperishable digital networks in principle, when it seems at least as likely that the materiality of consciousness renders the concrete materiality of its incarnation non-negligible and hence less susceptible to "migrations." Indeed, I expect that the "plausibility" of such scenarios of migration (the frame itself is a metaphorical sleight of hand, after all, rather than any kind of lab result) derives from the deep-seated familiarity of mystical framings of an insubstantial ensouled consciousness "imprisoned" by the material body more than from anything else, however vociferously these superlative futurologists disdain religious faith otherwise.
In a nutshell, I think superlative faith-based initiatives find comfort in these "existence proofs" because they need to find them there, not because there is any reasonable support to be found in them. Skeptics are quite familiar with the faithful who find "proof" of God, and so of the immortality of their souls, in the sublimity of a buzzing blooming sun-drenched natural world without God or immortal souls anywhere on hand. It is a mercy at any rate that the faithful don't try to sell us on the notion that it is in such moments of inspired free-association that they are behaving as model scientists, which is what the galling flabbergasting spectacle of Superlativity too often seems to amount to.
There can be an appealing poetry in esoteric mysticism and in the aesthetic practices of personal perfection in which the faithful indulge -- so long as they don't become hectoring and puritanical I find it as easy to sympathize with their lifeways as I do other creative people, poets, punks, perverts, party-animals and so on. If the superlative futurologists just realized that they are another sf fandom and stopped messing with urgently needed public technodevelopmental deliberation in a time of disruptive technoscientific change, they could be perfectly charming and harmless as well, as better befits their preoccupations.
But to the extent that superlative discourse fancies itself a world-historical movement, whomping up a sub(cult)ural constituency, advocating an ideology from the vantage of an organizational archipelago impacting actual public discourse and actual policy, then I'm afraid it becomes crucial to identify its essentially faithful constitution and critique its mobilization and ramification in the world as such.
That Superlativity is not properly scientific is important to grasp and expose, especially given the tendency of superlative futurologists to sell themselves as a kind of scientific avant-garde, but it matters more to understand not just that Superlativity fails to pass muster as scientific but that it is essentially a faith-based initiative indulging in moralizing politics from a sub(cult)ural vantage and hence constitutes a threat to secular democratic multiculture in much the same way that perniciously politicized fundamentalist faiths always do.
This "anti-political" discourse, as it happens, tends to function concretely in the service of elitist-incumbent-authoritarian political ends, whatever the professed politics of its adherents.
But like the other super-predicates of superlative discourse, superabundance amounts to a personal investment in a stealthy article of faith proffered up as endlessly-deferred scientific "predictions." The three super-predicates, recall, are superintelligence, superlongevity, and superabundance, and they correlate to the three theological omni-predicates -- omniscience, omnipotence, and omnibenevolence. But like the avowed articles of faith of the omni-predicates with which they are correlated, these super-predicates are ultimately incapable of functioning as factual assertions at all, they are self-consuming quasi-factual placeholders for the brute assertion of faith itself.
Confronted with superlative utterances it is entirely beside the point to indulge in what appear to be technical disputes about the validity of the scientific claims that are hyperbolized into rationales for superlative articles of faith or to debate technodevelopmental timelines for superlative "outcomes." To indulge superlative futurologists in these preferred arguments is as little scientific as debating the number of angels who can dance on a pin-head with a monk or pouring over Nostradamus with some disasterbatory enthusiast to "determine" the exact date the world will end. The phenomenological payoff for the True Believer, so long as these conversations play out in real time, is to confer onto their imaginary object of faith a substantial reality that the object itself cannot otherwise attain. It is better for everyone not to indulge this sort of irrationality at all, certainly not to confuse this sort of thing with actual science or actual policy discourse to the cost of the indispensable work these enterprises actually do, or, at any rate, one should understand this sort of thing as an essentially idiosyncratic aesthetic or moral matter on the part of its enthusiasts and treat it (even celebrate it) as one would comparable enthusiasms in their proper precinct.
Anyway, to the extent that the drextech business takes up the faith-based discourse of superabundance it is susceptible to the superlativity critique, but while drextech is exemplary of Superlativity in this way, it is far from essential to it. It seems to me that late-century digital-utopianism and hype about immersive virtualies, as well as quite a lot of mid-century automation discourse also indulged conspicuously in superlative discourse as the anti-political gesture of an aspiration toward the technodevelopmental "accomplishment" of superabundance. I regard Roland Barthes' reading of "Plastic" in his Mythologies as yet another example of the same.
I think all this is related to but not exactly the same sort of thing that I was talking about when last week I discussed the relation of marginal ideas and warranted consensus in the scientific mode of reasonable belief ascription, and similarly related to but not exactly the same thing Richard Jones is pointing to when he delineates the ways in which the claims arising out of drextech fandoms are marginal to consensus science, are overconfident and uncaveated in ways that are uncharacteristic of consensus science, and rely on deeply questionable uninterrogated assumptions that would trouble most consensus scientists, and so on. He's far more qualified to address technical questions at a level of detail my own different training not to mention different temperament ill-suits me for, but I am qualified to know to stick to consensus where I am not qualified to falsify or substantiate candidate assertions for scientific belief and to recognize marginality by the available criteria on hand and treat it accordingly. It seems to me that my modesty is better suited to proper scientificity than the immodesty of the superlative "champions of science" who would deride my muzzy-headed lie-brul elite fashionable nonsensicality.
It does seem to me there is a common tendency in superlative discourse for its adherents to hang their hats on "existence proofs" the generality of which is scarcely substantial enough to bear the weight of confidence in particular outcomes and particular technodevelopmental trajectories that tend to get connected to them.
It is hard to see how biological realities actually substantiate the confidence drextechians seem to have in the technodevelopmental accomplishment of highly-controlled programmable general-purpose self-replicating room-temperature molecular manufacturing when that actually isn't what we find in biology after all. Similarly, it is hard to see how the realities of organismic intelligence actually substantiate the confidence AI-enthusiasts have in the technodevelopmental accomplishment of differently materialized intelligences, let alone bear the weight of confidence "uploaders" have that consciousness can migrate across substrates without loss.
One hears techno-immortalists declaring that because intelligence is material rather than supernatural this means that presently biological intelligence can migrate into imperishable digital networks in principle, when it seems at least as likely that the materiality of consciousness renders the concrete materiality of its incarnation non-negligible and hence less susceptible to "migrations." Indeed, I expect that the "plausibility" of such scenarios of migration (the frame itself is a metaphorical sleight of hand, after all, rather than any kind of lab result) derives more from the deep seated familiarity of mystical framings of insubstantial ensouled consciousness "imprisoned" by the material body more than anything else, however vociferously these superlative futurologists disdain religious faith otherwise.
In a nutshell, I think superlative faith-based initiatives find comfort in these "existence proofs" because they need to find them there, not because there is any reasonable support to be found in them. Skeptics are quite familiar with the faithful who find "proof" of God, and so of the immortality of their souls, in the sublimity of a buzzing blooming sun-drenched natural world without God or immortal souls anywhere on hand. It is a mercy at any rate that the faithful don't try to sell us on the notion that it is in such moments of inspired free-association that they are behaving as model scientists, which is what the galling flabbergasting spectacle of Superlativity too often seems to amount to.
There can be an appealing poetry in esoteric mysticism and in the aesthetic practices of personal perfection in which the faithful indulge -- so long as they don't become hectoring and puritanical I find it as easy to sympathize with their lifeways as I do other creative people, poets, punks, perverts, party-animals and so on. If the superlative futurologists just realized that they are another sf fandom and stopped messing with urgently needed public technodevelopmental deliberation in a time of disruptive technoscientific change, they could be perfectly charming and harmless as well, as better befits their preoccupations.
But to the extent that superlative discourse fancies itself a world-historical movement, whomping up a sub(cult)ural constituency, advocating an ideology from the vantage of an organizational archipelago impacting actual public discourse and actual policy, then I'm afraid it becomes crucial to identify its essentially faithful constitution and critique its mobilization and ramification in the world as such.
That Superlativity is not properly scientific is important to grasp and expose, especially given the tendency of superlatives to sell themselves as a kind of scientific avant-garde, but it matters more to understand not just that Superlativity fails to pass muster as scientific but that it is essentially a faith-based initiative indulging in moralizing politics from a sub(cult)ural vantage and hence constitutes a threat to secular democratic multiculture in much the same ways that perniciously politicized fundamentalist faiths always are.
Wherein I Am Ripped to Pieces
From an exchange with a person called "Roko" (their comments are italicized):
It seems that Dale’s primary argument is a factual one not an axiological one: he is arguing that all this technology stuff is hyperbolae, that it ain’t gonna happen.
My primary argument is that superlative aspirations are conceptually confused to the point of illegibility, and that their advocacy amounts to a faith-based initiative. One would expect in consequence that, despite their protestations to the contrary that they are consummate scientists, superlative futurists would have little empirical evidence to show that they are taken seriously by actual scientists (citations in scholarly journals; singularitarians, techno-immortalists, and nano-cornucopiasts working in a competing diversity of academic labs with real grants; and so on). And precisely this is the case.
Marginality matters where one wants to claim the mantle of consensus science for one’s advocacy, as it does not necessarily matter where one wants to defend, say, the validity of an unpopular aesthetic judgment or political position.
Such an argument needs to be prosecuted in a more rigorous way than Dale is capable of.
This is laugh-out-loud funny to me. My whole point is that you people don’t even grasp the genre of argument you are making, let alone the criteria of warrant properly associated with it. You don’t grasp that your perpetual-motion-machine and square-the-circle pamphlets don’t constitute science at all in their essential claims (that is to say, the claims that are their unique contribution, as against the scientific and policy claims they nibble at the edges of, the claims nobody needs to join a Robot Cult to make contact with). Superlative claims seem to me to be essentially theological, aesthetic, and moral(istic), and those are the terms in which I seek to understand them and critique them.
I challenge Dale to post to Less Wrong an argument whose conclusion is that the probability of smarter than human AI within the next 100 years is less than 0.1%. He will be ripped to pieces.
I challenge you all to stop writing checks you can’t cash and just go burrow off to your secret genius labs in the asteroid belt and code your superintelligent postbiological Robot God or your drextechian genie in a bottle or your sooper immortal cyborg shell, whereupon I’ll genuflect to your quasi-deified super-predicated post-self all the livelong day in the most edifying fashion imaginable, if you like. The same goes to fundamentalists who can gloat from their perches on heavenly clouds as I roast after death in some hell-nook for my atheism or the gay thing or whatever.
Richard Jones Is Talking Sense
Buried deep in the Moot still unspooling from a post last week is this rather bolstering comment from Richard Jones which deserves to be disinterred:
Sorry to come to this late, but since my name has been mentioned maybe it's still worth my clarifying a few things. My engagement with enthusiasts for Drexlerian nanotech began somewhere around 2003-4. I'd begun developing the critique which took shape in my book Soft Machines (which is in essence a refutation of arguments from biology to justify Drexlerian, "Nanosystems" style nanotechnology, of the kind that Martin cites above) in the late 90's, with a couple of low-profile publications. As this engagement went on, at a technical level, I became more and more puzzled by some odd features of the spokesmen for Drexler -- the strength of their will to believe in the face of sceptical arguments from mainstream nanoscience, the fact that belief in "Drextech" didn't come alone, but that it formed part of what an anthropologist might call a "belief package", together with a strong conviction that radical life extension would soon be possible, that we'd soon have superhuman artificial intelligence, and that this would lead to a "singularity". It also struck me as odd, in a time when nanoscience was truly global, that these people seemed to have a background that was rather culturally, ideologically and geographically specific. In this context I found Dale's analysis and critique extremely helpful and revealing. As I've written before, I've grown to realise that the technical issues that I'm qualified to write about aren't really at the heart of this business, and that Dale's perspective from rhetoric and cultural studies is really valuable.
On the matter of tone, there's certainly a difference in personal style in the way Dale and I conduct our critiques. I don't choose to express myself the way Dale does; but this doesn't mean I don't enjoy his writing or think he's not correct in many essentials. My natural tone tends to (sometimes ironic) detachment, but this also doesn't mean I don't sometimes envy Dale for his passion and engagement.
Over at Michael Anissimov's Blog, another variation on this conversation has been ongoing. To my repeated charges that superlative claims amount to faith-based initiatives, and altogether fail to pass muster as scientific (all the while Robot Cults sputter about what incomparable, indispensable champions of science they are, battling menacing effete relativists in the humanities like yours truly), Michael dismissively replied, from his planet:
Many scientists are becoming convinced that this stuff [the technodevelopmental "accomplishment" of superintelligence, superlongevity, superabundance] is a big deal.
To this I responded, roughly (I've expanded and clarified my point a bit now that I'm on my own real estate, follow the link for the exact words):
I regard this utterance as the most flabbergasting imaginable nonsense. “Many” as compared to how overabundantly many who don’t?
Do you want to get into citation indexes, Michael? To the extent that citations are mounting a bit (they had nowhere to go but up, after all), how many citations outside a small circle that keeps citing each other do these citers garner themselves? How often do these citations occur outside the in-group as throwaways in intros and conclusions -- connected to statements roughly to the effect of, "some nutters go so far as to say superlative blah blah utility fog blah blah mind uploading, cite Moravec, cite Kurzweil, etc., whereas in this paper I stick to rather more conventional assumptions and modest expectations"? That is to say, how often do these citations amount to little more than logrolling among superlative futurologists themselves, or function as extremities against which more mainstream scientists and policymakers situate themselves as more mainstream?
The whole point of the archipelago of Robot Cult think tanks like IEET and ImmInst and Singularity U and WTA (er, I mean, Humanity Plus!), as far as I can see, is to cough up a hairball of apparent respectability for these superlative formulations for media outlets to hook onto, to help your membership organizations get more bottoms into the pews and more eyeballs onto the webpages -- what you call "getting out the message" -- very much in the way neocons whomp up faux respectability for their scams via Heritage and AEI.
Richard Jones appeared in the midst of that conversation as well, with this enormously helpful comment:
I can perhaps amplify Dale’s comments above. The paper “A Design for a Primitive Nanofactory” [this paper was cited by Michael as evidence that superlativity was not as marginal as I claim it to be] was published in a transhumanist journal which is not indexed by the major scientific databases. According to “Web of Science”, which tracks citations in the mainstream science journals that are in its database, the paper has been cited three times since 2003, by two papers authored by Phoenix himself and another by Drexler and Allis.
Plenty of mainstream scientists are excited by the many possibilities that our increasing control of matter on the nanoscale opens up, and many of them are developing increasingly sophisticated nanoscale machines and devices. But very few of them, in my experience, give any credence to the superlative claims of "nanofactories" and "superabundance". "Superlative discourse", as Dale calls it, pollutes the discussions we should be having about the many potential or possible impacts of developing technoscience, talking instead, as AnneC says, in terms of Utter Certainty about particular (ideologically favoured) outcomes on well-defined timescales that to mainstream science seem fantastic.
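Jones's Web of Science tally makes the in-group arithmetic I was gesturing at above easy to check. Here is a minimal sketch in Python of that arithmetic -- the three citing papers are the ones Jones reports, but classifying all three sets of citing authors as in-group is my own gloss for illustration, not anything the index itself records:

```python
# Minimal sketch of the in-group citation arithmetic for the
# "A Design for a Primitive Nanofactory" paper. The three citing papers
# are those Jones reports from Web of Science; the in_group flags are an
# illustrative assumption, not data from the index.
citing_papers = [
    {"authors": "Phoenix", "in_group": True},            # self-citation
    {"authors": "Phoenix", "in_group": True},            # self-citation
    {"authors": "Drexler and Allis", "in_group": True},  # fellow travelers
]

in_group = sum(1 for paper in citing_papers if paper["in_group"])
total = len(citing_papers)

# Prints: 3/3 citations from inside the circle (100%)
print(f"{in_group}/{total} citations from inside the circle "
      f"({100 * in_group / total:.0f}%)")
```

On those numbers, every single citation the paper has garnered in the mainstream indexes comes from the circle itself -- exactly the logrolling pattern at issue.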