Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, April 14, 2008

What Do Reality-Based Progressive Technocentrisms Owe Faith-Based Superlative Technocentrisms?

Mitchell objects in the Moot to the terms on which I critique Superlative technocentricity:
Dale: "I think progressive technoscience politics should focus on" (lots of reality-based stuff).

That's an entirely reasonable agenda. I think even an ideological opponent who objected to your framing of the issues would have to agree that you are at least talking about real things and real options from start to finish.

However, one can agree that all that is real, and worth somebody's attention, perhaps even the majority of "mainstream" attention, and still expect God-in-a-box stuff a very short time afterwards, historically speaking. That expectation derives from a few simple premises and I have not seen anything here to refute them. So there must be an effort to meet that challenge.

If you say so, Mitchell.

It seems to me all that I have to show is that what superlativity really amounts to is indeed, as you say, the expectation of "God in a box," and then sensible people will already know what to do with it.

You point out that I haven't "refuted" the "simple" premises that organize superlative outlooks. But what you, like other transhumanists and singularitarians and techno-immortalists, seem to demand is that even your critics concede the basic rationality of your project and then proceed to haggle with you over what you yourselves imagine to be its terms at a "technical" level.

In my view, there is nothing actually happening in superlative discourses in the first place that is worthy of consideration on its own terms except its rhetorical content, its solicitation of identification, investment, and aspiration.

Superlativity piggybacks on technical discourses that have real content, nibbling at their edges, but its actual contribution is to invest that content with super-predicated aspirations and sub(cult)ural significance.

That is what interests me, that is where I lodge most of my critique, that is the location in culture at which both my analyses and critiques are pitched. You can dismiss or decry or disavow the relevance of criticism in this mode, but you must realize that when you do so it is you who are shunting aside whole areas of inquiry and understanding and at costs that are real even if you decide that they are negligible.

If you are firmly transhumanist-identified I don't expect you to be particularly convinced by my critique, nor do I expect you to consider your True Beliefs "refuted" by what I say. I am probably not playing a "language game" that shares enough of the rules on the basis of which you would accept arguments as "refutations" in the first place.

But when a superlative technocentric goes on to try to dictate the terms on which any effort to challenge their worldview is offered, I must say that is a bit funny. This is like a member of some organized religion refusing to accept as a legible challenge to their faith anything but inter-sectarian doctrinal squabbles that, whatever their contentiousness, all share the same essential articles of faith.

I don't engage in the "technical" debates superlative technocentrics seem to regard as the only legitimate concerns they will entertain as worthy of consideration, for a few reasons:

First, they are not my own areas of expertise. Of course, most transhumanists and other superlatives are not experts in these areas either, their self-image as knightly champions of Science battling the forces of Endarkenment notwithstanding, hence their devotion to so many views and predictions that fail to square with actual scientific consensus.

Second, I disagree that anything interesting or actually unique to superlative discourse is happening at this so-called "technical" level in any case. As it happens, as a rhetorician and critical theorist it seems to me I am qualified both by temperament and training to offer up an engagement with superlative discourse at precisely the level and location at which it is actually operating.

From my perspective, to be blunt, superlativity is just made up bullshit. It hijacks real-world concerns occasioned by ongoing disruptive technoscientific change and emerging technoscientific capacities. It deranges their terms through a super-predicated discourse that essentially mimes while instrumentalizing theological omni-predicates. In consequence, it activates irrational passions where sensible technodevelopmental deliberation is most needed. As it happens, it is also incredibly vulnerable to appropriation to anti-democratizing ends, endorsing technocratic elitism, scientistic reductionism, normative eugenicism, and the terms of neoliberal development discourse in the service of incumbent interests.

I get it that transhumanists would rather talk about the "serious" and "technical" reasons why some True Believers think the Robot God will arrive on the scene in twenty years versus fifty years, or the "serious" and "technical" reasons why some True Believers think nanoscale technique will create a utility fog utopia versus a nanofactory utopia, or the "serious" and "technical" reasons why some True Believers think they will be techno-immortalized through the gradual adoption of robot bodies versus uploading their minds into digital networks, and so on. That's why they are transhumanists, presumably.

It seems to me that all I have to do is point out to most people that these are indeed the "serious" and "technical" views transhumanists are debating, indicate what argumentative and sub(cult)ural work this hyperbolic discourse is doing for transhumanists, delineate what larger rhetorical and conceptual lineages transhumanist discourses are lodged in, and show how our thinking about actually ongoing technodevelopment through the lens of this discourse (whether supporting it as superlatives do or opposing it as bioconservatives do) deranges our perceptions, expectations, and priorities here and now.

Some transhumanists and other superlatives seem to think it is the authors of made up bullshit who get to set the terms on the basis of which anybody can go on to call bullshit on their made up bullshit. Alas, for you, this is not the case.

22 comments:

jimf said...

> Some transhumanists and other superlatives seem to think
> it is the authors of made up bullshit who get to set the
> terms on the basis of which anybody can go on to call
> bullshit on their made up bullshit.

"Scientists" predict pie in the sky by and by.
http://lists.extropy.org/pipermail/extropy-chat/2008-April/042711.html

Anonymous said...

Of course, most transhumanists and other superlatives are not experts in these areas either, their self-image as knightly champions of Science battling the forces of Endarkenment notwithstanding, hence their devotion to so many views and predictions that fail to square with actual scientific consensus.

You still need to get technical arguments from somewhere, because superlativity is completely fine if all its technical claims are true.

Michael Anissimov said...

Note how you maintain an even tone in this post, presenting your critique without getting excessively nasty.

Of course, as peco says and I've said before, if you don't offer technical counterarguments to superlative claims, most geeks won't be bothered to care about your critique.

Dale Carrico said...

What about my "made up bullshit" line? Hey, we've had relatively cordial debates and we've also traded barbs. Both the naughty and the nice are tangos it takes two to take to. I'm a rhetorician and superlativity is a rhetoric, so I'm content to focus where my training suits me and the phenomenon most needs critique in any case. But I'll take your kind advice about what my writing needs to concern itself with if I want to attract more geeks to my readership under serious advisement. If nothing else, I should be able to scare up a pie chart. Mmmmm... pie.

Dale Carrico said...

You still need to get technical arguments from somewhere, because superlativity is completely fine if all its technical claims are true.

As it happens, rhetoric, critical theory, and discourse analysis do make their share of "technical arguments," even if they don't pull percentages out of their ass or offer up decade-by-decade charts with red arrows streaking hyperbolically upward to indicate accelerating techno-tastic trends rocketing ever heavenward and so on. If the monks' technical claims about angels on pinheads are true, Jeebus's resurrection in 1666 is also certain. Not everybody has to be interested in that sort of thing to say something worth saying. But, peco, honestly, if you are really so unsatisfied with my perspective, there is nobody, nobody begging you to stay.

Anonymous said...

pull percentages out of their ass or offer up decade by decade charts

You could just point this out when it happens. There are plenty of "technical arguments" that aren't like this at all. Singularitarians use this a lot, though, so I don't think their arguments are good.

But, peco, honestly, if you are really so unsatisfied with my perspective, honestly, there is nobody, nobody begging you to stay.

I wouldn't stay if I didn't agree with many of your posts (and with parts of most of them).

What about my "made up bullshit" line?

I don't think that's excessively nasty. It's nasty, sure, but not excessive--"made up bullshit" is something that actually happens, and you only say it once. Calling transhumanists Robot Cultists (caps!) over and over again isn't needed.

Mitchell said...

Here is a very simple premise: that incrementally more and more human intellectual capabilities will be reproduced in hardware whose basic elements are thousands of times faster than those of the human brain.

Similarly, if you suppose that all today's major illnesses are cured, leaving only accident, suicide, and murder as causes of death, human life expectancy goes to something like a thousand years.

My point is that these are not made-up numbers. In particular, the bit about transistors being thousands of times faster than neurons is a fact about the present. And it's just sixty years since the first transistor was made. They can be put together in unlimited numbers; they don't have to respect the volume constraints of a human skull. They can guide robots and factories, process language, deduce and experiment. Given all that, it should be obvious that the potential now exists for the human race to be thoroughly supplanted by what amounts to a new kingdom of life. And there should be no need for "technoprogressives" to deny this.

Dale Carrico said...

Here is a very simple premise: that incrementally more and more human intellectual capabilities will be reproduced in hardware whose basic elements are thousands of times faster than those of the human brain.

What am I supposed to say to that? Will they indeed? Which capacities? Important ones? Why should we think so? What if we don't understand them as much as you think we do? What if things break down instead? In what timeframes? Why should we think about this instead of the countless other things serious people should obviously be thinking about at the moment?

You say this is a simple premise.

So is a Ponzi scheme. So is a profession of faith. So is a declaration of war. So what?

[I]f you suppose that all today's major illnesses are cured, leaving only accident, suicide, and murder as causes of death, human life expectancy goes to something like a thousand years.

Why would I "suppose" that? They aren't and show no signs of it. But, hey, go right ahead. How do you know new conditions wouldn't crop up even if we were to "suppose" this implausible outcome? Why are you so sure that curing the conditions we think of as diseases today would leave only accidents, suicides, and murders as causes of death? How much time should I have to devote to such topics anyway at a time when people are starving to death or dying of easily treatable neglected diseases?

My point is that these are not made-up numbers.

Almost every single thing to which you are attributing significance here is indeed absolutely made up. We're not living in a science fiction novel. Get a grip!

In particular, the bit about transistors being thousands of times faster than neurons is a fact about the present.

Newsflash! Brains aren't composed of transistors.

And, anyway, what's with the endless boys and their toys fascination with Faster! Bigger! Stronger!

You guys whine about my psychologizing you and caricaturing you and then start handwaving this way? Honestly.

They can guide robots and factories, process language, deduce and experiment.

If you say so. It's not like they are really doing this particularly, but okay.

Given all that, it should be obvious...

"Given" what? What is "given" now? Oh, never mind.

Yes, given that non-reality is to be treated as reality it is obvious that the non-real things you want are inevitable. QED.

the potential now exists for the human race to be thoroughly supplanted by what amounts to a new kingdom of life.

Look how excited he is!

Hulk smash measly hu-mahns! Rarh!

I guess I will agree though that given resource descent (Peak Oil, water pollution, topsoil depletion, greenhouse gases, etc.) there is a non-negligible potential for non-humans to supplant humans for earthly mastery, roaches or lemurs or yeast or who knows what. I can't say that I am particularly pleased at the prospect, personally. Or, yeah, maybe that whole Robot Body sooper race thing will pan out. Yessiree, bob.

jimf said...

Mitchell wrote:

> [T]he bit about transistors being thousands of times faster
> than neurons is a fact about the present.

Faster at **what**, exactly? My car is faster than either one
if you line them all up on the highway.

You're **assuming** (**much** more blithely than a real
neuroscientist would) that a transistor and a neuron are
basically the same kind of thing. I'm afraid the jury
remains out on that one.

And even if it turns out to be true that something made out
of transistors (whether a digital computer or not) can be
"the same kind of thing" as a neuron, nobody knows how
many transistors it would take. Or whether we'll be able
to make 'em small enough, energy-efficient enough, etc.,
for the substitution to be worth making.

Maybe they will. Maybe they won't. Maybe there'll
be something else besides transistors that will make
the experiment more practicable.

But counting on any of those things at this stage of the
game is basically exhibiting True Belief in the acceleration-of-technology
stairway to the Singularity. Don't tell me that Ray Kurzweil
is the Hari Seldon of our time. I'm not buying it.

Anne Corwin said...

mitchell, personally I think a "new kingdom" of electronic life would be super neat! I don't know about the "supplanting" part, but mutually respectful coexistence would be pretty swell, IMO. I would gladly welcome our new robot neighbors.

But...I'm an electrical engineer. And while I've certainly seen transistor-based circuits do some pretty amazing things, I still say it's pretty hand-wavey to suggest that talk of transistor circuits "supplanting" humans (or other animals) is anything other than daydreaming. As someone who has been working in the electronics industry for the past five years, and who worked for a while as a software engineer prior to that, I know full well that the "level of technology" is NOT the major bottleneck when it comes to stuff being developed.

In actual industry, you have a lot of meetings, a lot of paperwork, and a lot of people arguing over what resources should go where.

Seriously. It drives me nuts. The Dilbert comic is actually a documentary, as far as I'm concerned.

And despite the vast and copious use of ever-faster processors and ever-larger hard drives, I have not seen anything remotely resembling a case in which major decisions about who does what, which things actually get developed, etc., are "turned over" to computers.

Honestly, I am pretty open-minded about what MIGHT exist at some point in the distant future. I don't underestimate innovation or serendipity at all. But neither do I have "faith" in such things to bring about particular outcomes "inevitably". Plenty of people in the 50s thought moon bases, universal superabundance due to nuclear energy, and silver jumpsuits for every girl and boy would be passé by now. And they're not: we don't have those things, and we never did. And with the exception of universal nuclear superabundance, the reasons for not having those things have everything to do with culture and what people do (or don't) value, and who happens to have the most power at any given time, and precious little to do with technology.

Same as for why we still have people starving in the world -- surely nobody with any sense believes that this is due to a plain old lack of matter sufficient to take the form of enough food to feed everyone alive.

So, while it's all well and good to be mainly interested in the hard science and technology stuff, and while it is perfectly okay if you just find politics and logistics boring (personally I *hate* logistics...and don't even get me started on paperwork!), it doesn't make sense to try and engage the world on a rhetorical level while claiming you're doing it on a technical level.

And unless I'm missing something in all this, THAT is one of the major points I see being made over and over again on this blog.

Dale Carrico said...

Calling transhumanists Robot Cultists (caps!) over and over again isn't needed.

I beg to differ. Calling transhumanists Robot Cultists is urgently needed. Also, it's fun.

Dale Carrico said...

[I]t doesn't make sense to try and engage the world on a rhetorical level while claiming you're doing it on a technical level.

And unless I'm missing something in all this, THAT is one of the major points I see being made over and over again on this blog.


Ding! ding! ding! ding! ding! We have a winner, ladies and gentlemen!

Anne Corwin said...

Oh, and when I said that I have "not seen anything remotely resembling a case in which major decisions about who does what, which things actually get developed, etc., are 'turned over' to computers," I also meant to say that I see no reason why anyone should want to transfer important stakeholder-affecting decision-making processes over to massive transistor farms.

And while I have been known to complain about what I see as inefficient bureaucratic nonsense, I do not see the end to this nonsense coming in the form of any number of transistors -- and I still do not expect or want decisions to someday all become instantaneous. There is no way for any algorithm to substitute for getting the actual inputs of the persons affected by particular decisions.

Mitchell said...

Dale:

What am I supposed to say to that? Will they indeed? Which capacities?

I'll bundle a bunch of them together, and say: The capacity to form a representation of the world and/or parts thereof, to extrapolate possible future states, to choose between them on the basis of a system of preferences, and to design and carry out actions intended to bring about preferred future states.

It's the possession of those capacities which gives human beings a lot of their power. But they are also possessed, in miniature, by any game-playing computer program, and that same framework offers a way to think about general-purpose artificial intelligence.

Important ones? Why should we think so? What if we don't understand them as much as you think we do? What if things break down instead? In what timeframes?

A few decades. We're running out of fundamental problems to solve; it's becoming a matter of combining things we already have in a crude form, and refining them and adjusting them to each other. Agile robotics, dense packing of processors, sensorimotor algorithms, optimization and decision theory... It is a huge engineering challenge, but it does not require magic at any stage.

Why should we think about this instead of the countless other things serious people should obviously be thinking about at the moment?

It's not an either/or situation. But you should think about it because it's real and because you care about the future.

Why would I "suppose" that [all today's major illnesses are cured]? They aren't and show no signs of it. But, hey, go right ahead. How do you know new conditions wouldn't crop up even if we were to "suppose" this implausible outcome? Why are you so sure that curing the conditions we think of diseases today would leave only accidents, suicides, and murders as causes of death?

The point of this calculation is not to rule out new causes of death. It is to model the implications of curing the old causes of death, in the most straightforward way possible: by seeing what happens to life expectancy when you set those death rates to zero, and keep everything else as it is. The assumptions may certainly be amended, but the numbers are not pulled out of air.
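The calculation Mitchell describes can be sketched in a few lines. Under the (crude) assumption of a constant annual death rate, lifetimes are geometrically distributed and expected lifespan is just the reciprocal of the rate. The 0.001-per-year figure below is an assumed round number for residual accident/suicide/murder mortality, chosen only to illustrate the arithmetic, not sourced data.

```python
# A sketch of the back-of-envelope model described above: zero out
# disease mortality and keep only a flat "external" hazard (accident,
# suicide, murder). The rate used is an assumed round figure for
# illustration, not a sourced statistic.

def expected_lifespan(annual_hazard: float) -> float:
    """Mean lifetime in years under a constant yearly death probability.

    With a constant hazard h, lifetime is geometrically distributed,
    so the mean is simply 1 / h.
    """
    if not 0.0 < annual_hazard <= 1.0:
        raise ValueError("hazard must be a probability in (0, 1]")
    return 1.0 / annual_hazard

# With only external causes left (assumed ~0.1% per year):
print(expected_lifespan(0.001))  # -> 1000.0 years
```

The same reciprocal also shows where the model is crude: real hazards rise steeply with age, so holding them constant flattens exactly the feature that makes actuarial life tables necessary, which is part of what is contested in the exchange above.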

How much time should I have to devote to such topics anyway at a time when people are starving to death or dying of easily treatable neglected diseases?

Good news, the Robot God tells me by tachyon transmission that just 5% of your time is all that is required.

Dale Carrico said...

It's the possession of those capacities which gives human beings a lot of their power.

I don't know that I agree that the metaphors of "representation" or "choices from preference sets" at the heart of your understanding of what human intelligence is about are really as apt as all that. You do realize that these are metaphors, right? You are aware of rich traditions of pragmatic and post-Nietzschean philosophy that have called these metaphors into question?

( -- don't sniff about my elitism, now, Maxine, Mitchell seems interested in these issues, I'm not implying that everybody is or should be -- )

But they are also possessed, in miniature, by any game-playing computer program,

I consider this a flabbergasting assumption on your part if you mean it.

We're running out of fundamental problems to solve;

Famous last words, right?

it's becoming a matter of combining things we already have in a crude form,

I think you guys are in for some big surprises. Since they're the usual surprises that befuddle AI triumphalists they really shouldn't still be surprises, but whatcha gonna do.

you should think about it [superlative aspirations] because it's real

You keep using that word. I don't think it means what you think it does.

and because you care about the future

I care about open futures, but I distrust enormously what passes for "the future" from the perspective of parochial pockets of people in the present. (ugh, that's too much alliteration even for me!)

seeing what happens to life expectancy when you set those death rates to zero, and keep everything else as it is. The assumptions may certainly be amended, but the numbers are not pulled out of air

Uh, sure, fine. Let's just cross that bridge if we come to it, I guess. I'm not holding my breath.

Good news, the Robot God tells me by tachyon transmission that just 5% of your time is all that is required.

I'm glad to see people seem a bit better humored today than yesterday, at any rate!

Mitchell said...

You do realize that these are metaphors, right? You are aware of rich traditions of pragmatic and post-Nietzschean philosophy that have called these metaphors into question?

I don't agree with existing philosophy of mind, certainly. I think a real science of consciousness will look more like Penrose plus Husserl than Moravec plus Minsky. But Moravec plus Minsky - the cybernetic theory of mind, materialized in hardware - will eventually give us artificial agencies equalling or exceeding human competency in almost any area you care to name, because that cybernetic feedback schema, already existing even in the humble chess computer, is enough to do the job, though the details would fill a library. I share that expectation with mainstream transhumanists, even while I disagree with their ontology of mind.

I think I've said my piece: technoprogressivism, the critique of superlativity as sensibility, and the critique of superlativity as technological futurism can be dissociated. My personal judgements are (in order) accept some of it, accept some of it, reject most of it.

Dale Carrico said...

But Moravec plus Minsky - the cybernetic theory of mind, materialized in hardware - will eventually give us artificial agencies equalling or exceeding human competency in almost any area you care to name

Well, that's certainly kickin it old school.

I think I've said my piece: technoprogressivism, the critique of superlativity as sensibility, and the critique of superlativity as technological futurism can be dissociated.

Fair enough. For my part, though, one cannot distinguish the superlative assumptions and frames that imbue with significance what are taken to be the plausibly "technical" claims characteristic of superlative formations (what I think you mean to bracket off as "technological futurism" in this case) from those that articulate the other sub(cult)ural idiosyncrasies of superlativity (among them, what I think you mean here by "sensibility").

jimf said...

> Moravec plus Minsky - the cybernetic theory of mind,
> materialized in hardware - will eventually give us
> artificial agencies equalling or exceeding human
> competency in almost any area you care to name, because
> that cybernetic feedback schema, already existing even
> in the humble chess computer, is enough to do the job. . .

Back to the 50s.

Anonymous said...

Calling transhumanists Robot Cultists is urgently needed. Also, it's fun.

It is, but doing it 10 times in a single post is not. It is fun, but it is still excessive.

Michael Anissimov said...

Calling transhumanists Robot Cultists is urgently needed. Also, it's fun.

Yes, name calling is fun. In a 3rd grader sort of way.

Dale Carrico said...

Oooooooooh!

jimf said...

Michael Anissimov wrote:

> Yes, name calling is fun. In a 3rd grader sort of way.

and Dale replied:

> Oooooooooh!

Oh, well, it's clearly time for some more of one of
my favorite 3rd-graders, Bertrand Russell.

This sophisticated bit of name calling was published after
Russell had a falling out with novelist D. H. Lawrence.

"How to Become a Man of Genius"
(28 December 1932)

---------------------------
"If there are among my readers any young men or women who
aspire to become leaders of thought in their generation, I
hope they will avoid certain errors into which I fell in
youth for want of good advice. When I wished to form an
opinion upon a subject, I used to study it, weigh the
arguments on different sides, and attempt to reach a
balanced conclusion. I have since discovered that this
is not the way to do things. A man of genius knows
it all without the need of study; his opinions are
pontifical and depend for their persuasiveness upon
literary style rather than argument. It is necessary
to be one-sided, since this facilitates the vehemence
that is considered a proof of strength. It is essential
to appeal to prejudices and passions of which men
have begun to feel ashamed and to do this in the name
of some new ineffable ethic. It is well to decry the
slow and pettifogging minds which require evidence
in order to reach conclusions. Above all, whatever is
most ancient should be dished up as the very latest
thing.

There is no novelty in this recipe for genius; it
was practised by Carlyle in the time of our grandfathers,
and by Nietzsche in the time of our fathers, and it has
been practised in our own time by D. H. Lawrence. Lawrence
is considered by his disciples to have enunciated all
sorts of new wisdom about the relations of men and women;
in actual fact he has gone back to advocating the domination
of the male which one associates with the cave dwellers.
Woman exists, in his philosophy, only as something soft
and fat to rest the hero when he returns from his labours.
Civilised societies have been learning to see something more
than this in women; Lawrence will have nothing of civilisation.
He scours the world for what is ancient and dark and loves
the traces of Aztec cruelty in Mexico. Young men, who had
been learning to behave, naturally read him with delight and
go round practising cave-man stuff so far as the usages of
polite society will permit.

One of the most important elements of success in becoming
a man of genius is to learn the art of denunciation. You
must always denounce in such a way that your reader thinks
that it is the other fellow who is being denounced and not
himself; in that case he will be impressed by your noble
scorn, whereas if he thinks that it is himself that you
are denouncing, he will consider that you are guilty of
ill-bred peevishness. Carlyle remarked: ``The population
of England is twenty millions, mostly fools.'' Everybody
who read this considered himself one of the exceptions,
and therefore enjoyed the remark. You must not denounce
well-defined classes, such as persons with more than a
certain income, inhabitants of a certain area, or believers
in some definite creed; for if you do this, some readers
will know that your invective is directed against them.
You must denounce persons whose emotions are atrophied,
persons to whom only plodding study can reveal the truth,
for we all know that these are other people, and we
shall therefore view with sympathy your powerful diagnosis
of the evils of the age.

Ignore fact and reason, live entirely in the world of
your own fantastic and myth-producing passions; do this
whole-heartedly and with conviction, and you will become
one of the prophets of your age."

-- Bertrand Russell