Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, March 24, 2008

Confusing Fancies for Facts

Michael Anissimov makes a familiar techno-utopian claim, and with the completely unearned cocksure swagger that is equally familiar from techno-utopians when they are bluffing in this way:
Appeal or no appeal, human-level AI will eventually be created if it is technologically possible. Can you name a reason why it wouldn't be?

Of course, it is the extraordinary claim that demands the extraordinary evidence.

It is always an incredible mistake for reasonable people to start trading "reasons" with techno-utopians on their own terms, as the transhumanists are always trying to induce critics to do in the name of having what they call a "technical" discussion. This is because to do so is always to relinquish actual reality and enter the topsy-turvy virtual reality transhumanists inhabit, in which it is somehow "extraordinary" to deny that a Superintelligent Robot God is coming to End History, that human beings are going to be robotically or digitally immortalized, and that nanoscale robots are going to create a superabundance that trumps the impasse of diverse stakeholder politics.

In the actual world, it is of course the transhumanists, the singularitarians, and the other techno-utopians who have to name the reasons why any of these beliefs of theirs make any kind of sense at all. And it is their job to make these reasons actually compelling.

Reasons that fail to account for the actually embodied nature of human consciousness, reasons that fail to account for the actual vulnerabilities of metabolism in demanding environments, reasons that fail to account for the actual impasse of diverse aspiration in a finite shared world that structurally tends to yield urgent conflicts between incumbent minorities and dynamic majorities are not likely to be reasons that are compelling to those of us who are not already True Believers like they are. If the transhumanists want to be, or at any rate to appear, reasonable, I fear that it is they who have the explaining to do. And they certainly shouldn't expect me to make this easy for them. Nobody, not even the transhumanists themselves, would ultimately benefit from such a free ride, however unhappy it makes them to confront informed skepticism and disdain.

Something I wrote quite a few years ago, interestingly enough in response to the very same Michael Anissimov with whom I am sparring now, speaks to this quandary very directly:
“Permitted in principle by the laws of physics” is a larger set of propositions than “stuff that can be plausibly engineered” is a larger set of propositions than “stuff people actually want” is a larger set of propositions than “stuff people are willing to pay for” is a larger set of propositions than “things people still want in the longer-term that they wanted enough to pay for in the shorter-term.”

Glib corporate-futurists and other hype-notized technophiliacs are of course notoriously quick to pronounce outcomes “imminent” and “inevitable” (genetically-engineered immortality! nanotech abundance! uploading consciousness! superintelligent AI! bigger penises!), just because a survey of science at the moment implies to them that an outcome they especially desire or dread is “permitted in principle by the laws of physics.” But nested within that set like concentric rings on a tree-trunk are ever more restricted and more plausible sets, of which the target set at the center is the set of things people tend to still want enough over the longer-term that they are satisfied to pay (or have paid) for them.

I think it is a good exercise, and sometimes a good penance, for technocentrics to take special care around their use of the word "inevitable" to describe outcomes that are radically different from states of affairs that obtain today.

My suspicion is that this is a word technophiles actually use more to signal the usual attitude of the faithful; namely, "I'm not interested in arguing with you anymore." Too often, “inevitable” is a word that signals an inability to chart an intelligible sequence of developmental stages that could plausibly delineate a path from where we are to whatever Superlative State is imagined to be likely and attractive. And by plausible, I mean both technically and politically plausible.

Part of what is interesting about this passage in the context of the larger discussion of which it was a part is that I seem to remember that Michael claimed to find it reasonable in spirit, if not to the letter, and made lots of reassuring reasonable noises to that effect at the time.

And yet, here he is again, making the usual techno-utopian mistake, with the usual techno-utopian certainty: "human-level AI will eventually be created if it is technologically possible." From here, no doubt, he believes (he has said it elsewhere if not here and now) that the logical inevitability of physically possible human-level AI indicates the equally logical inevitability of superhuman-level AI, which in turn indicates the equally logical inevitability of a history-shattering "Singularity" in which a Robot God metes out apocalyptic rewards and punishments to worthies and unworthies according to whether it is "Friendly" or not.

Needless to say, what looks like logical inevitability to even very bright well-meaning True Believers can all too easily equal batshit craziness if one's foundational assumptions or underlying motivations go too far awry too soon.

Let that be a lesson to us all.

16 comments:

jimf said...

What's wrong with this quiz?
http://transsurvivalist.blogspot.com/2008/03/makes-me-look-pretty-hard-core.html

(Hint: Could it have anything to do with the fact that
all the questions are phrased something like 'Pigs can fly.
Do you think people should be allowed to train as pig pilots?')

BTW, my score was exactly the same as Mark Plus's.

Nick Tarleton said...

a history-shattering "Singularity" in which a Robot God metes out apocalyptic rewards and punishments to worthies and unworthies according to whether it is "Friendly" or not.

I know of nobody who believes the "rewards and punishments to worthies and unworthies" part, or wants it to happen. On behalf of the Association for the Humane Treatment of Straw Men, I must strongly object.

Dale Carrico said...

I know of nobody who believes the "rewards and punishments to worthies and unworthies" part, or wants it to happen.

It's true, that's not the way they put it in the glossy brochures. (Yes, I know, there probably aren't literally glossy brochures, either, but, see, you know exactly what I mean.) But a Friendly Robot God rapturing up the techno-leets is what Singularitarianism finally amounts to, in my humble opinion. And parodying PR spin isn't exactly the same thing as the Straw Man fallacy. It's, you know, disagreement disrespectfully put.

Anonymous said...

That excerpt from 2006 was much more interesting than your current writings on this topic.

Your current extrapolations and fabrications are wrong.

Anonymous said...

It is always an incredible mistake for reasonable people to start trading "reasons" with techno-utopians on their own terms, as the transhumanists are always trying to induce critics to do in the name of having what they call a "technical" discussion. This is because to do so is always to relinquish actual reality and enter the topsy-turvy virtual reality transhumanists inhabit, in which it is somehow "extraordinary" to deny that a Superintelligent Robot God is coming to End History, that human beings are going to be robotically or digitally immortalized, and that nanoscale robots are going to create a superabundance that trumps the impasse of diverse stakeholder politics.

In the actual world, it is of course the transhumanists, the singularitarians, and the other techno-utopians who have to name the reasons why any of these beliefs of theirs make any kind of sense at all. And it is their job to make these reasons actually compelling.

Reasons that fail to account for the actually embodied nature of human consciousness, reasons that fail to account for the actual vulnerabilities of metabolism in demanding environments, reasons that fail to account for the actual impasse of diverse aspiration in a finite shared world that structurally tends to induce urgent conflicts between incumbent minorities and dynamic majorities are not likely to be reasons that are compelling to those of us who are not already True Believers like they are. If the transhumanists want to be, or at any rate to appear, reasonable, I fear that it is they who have the explaining to do. And they certainly shouldn't expect me to make this easy for them. Nobody, not even the transhumanists themselves, would ultimately benefit from such a free ride, however unhappy it makes them to confront informed skepticism and disdain.


Everything (not really) you say about transhumanism would be wrong if the things transhumanists said were true. If transhumanists are making bad arguments, you can just point that out (and get other people to point that out, too, if they keep making the argument).

You keep saying that "consciousness is embodied," which makes sense, but that doesn't make a Moravec transfer (or any other form of mind uploading) any less likely--uploaded minds are embodied in computers.

“Permitted in principle by the laws of physics” is a larger set of propositions than “stuff that can be plausibly engineered” is a larger set of propositions than “stuff people actually want” is a larger set of propositions than “stuff people are willing to pay for” is a larger set of propositions than “things people still want in the longer-term that they wanted enough to pay for in the shorter-term.”

Glib corporate-futurists and other hype-notized technophiliacs are of course notoriously quick to pronounce outcomes “imminent” and “inevitable” (genetically-engineered immortality! nanotech abundance! uploading consciousness! superintelligent AI! bigger penises!), just because a survey of science at the moment implies to them that an outcome they especially desire or dread is “permitted in principle by the laws of physics.” But nested within that set like concentric rings on a tree-trunk are ever more restricted and more plausible sets, of which the target set at the center is the set of things people tend to still want enough over the longer-term that they are satisfied to pay (or have paid) for them.

I think it is a good exercise, and sometimes a good penance, for technocentrics to take special care around their use of the word "inevitable" to describe outcomes that are radically different from states of affairs that obtain today.

My suspicion is that this is a word technophiles actually use more to signal the usual attitude of the faithful; namely, "I'm not interested in arguing with you anymore." Too often, “inevitable” is a word that signals an inability to chart an intelligible sequence of developmental stages that could plausibly delineate a path from where we are to whatever Superlative State is imagined to be likely and attractive. And by plausible, I mean both technically and politically plausible.


disagreement disrespectfully put

That was disagreement disrespectfully put without being insulting/annoying/etc.

Dale Carrico said...

That excerpt from 2006 was much more interesting than your current writings on this topic.

I take much greater care now to disallow transhumanists the pretense that anything I say can be used to make them seem more sensible, more respectable, or more progressive. You will probably like me much less from here on out.

Anonymous said...

Pity. Beyond the excerpt, I carefully read the entire old post and found it eloquent and engaging, genuinely thought provoking. Seems that changed over time into the current poo flinging and odd conspiracy theories about wannabe supervillains. Oh well, poo flinging can be entertaining, once one knows that's what's on the program.

Dale Carrico said...

Pity. Beyond the excerpt, I carefully read the entire old post and found it eloquent and engaging, genuinely thought provoking. Seems that changed over time into the current poo flinging and odd conspiracy theories about wannabe supervillains.

Wow, a transhumanist Concern Troll. Who knew? How cute.

Anonymous said...

Sigh. Had no desire to be a "concern troll". I'll just go away.

I did go check out Anissimov's site, which I hadn't seen in quite some time, and found it unbelievably clueless even on technical points (I had assumed he would have some technical training or skill, but if so it is not on display). That in itself does not say anything about the possibility of radical technological change, but it does make the debate between you two fundamentally uninteresting -- idiot vs jerk.

Good luck to both sides in this mutually beneficial PR war. With no desire to proselytize any particular position, I made a mistake in participating in a political discussion area. Lesson learned.

jimf said...

Anonymous wrote:

> I'll just go away.

Oh, don't do **that**. Seriously, you've posted some extremely
valuable stuff lately.

Anonymous said...

Hi, Dale.

From Seed Magazine: Out of the Blue: Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?

Notwithstanding its framing, I think it's neat article.

Anonymous said...

And I have high standards for high articles. As you can see, I won't even use certain indefinite ones.

jimf said...

http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php?page=6
------------------------------------------------------------
> In fact, the model is so successful that its biggest restrictions are
> now technological. "We have already shown that the model can scale up,"
> Markram says. "What is holding us back now are the computers." The
> numbers speak for themselves. Markram estimates that in order to
> accurately simulate the trillion synapses in the human brain, you'd
> need to be able to process about 500 petabytes of data (peta being a
> million billion, or 10 to the fifteenth power). That's about 200 times
> more information than is stored on all of Google's servers. (Given current
> technology, a machine capable of such power would be the size of several
> football fields.) Energy consumption is another huge problem. The
> human brain requires about 25 watts of electricity to operate. Markram
> estimates that simulating the brain on a supercomputer with existing
> microchips would generate an annual electrical bill of about $3 billion.
> But if computing speeds continue to develop at their current exponential
> pace, and energy efficiency improves, Markram believes that he'll be
> able to model a complete human brain on a single machine in ten years or less.

Yes, well, that's fine. Seriously, it's kewl.

However, I can't help hearing an echo of the late Arthur C. Clarke's
_Profiles of the Future_ (which I bought in paperback in 1963) in
which he says (this is from memory; the quotes here are strictly paraphrastic)
"It was said in the days of vacuum-tube computers that a machine
equivalent to a human brain would be the size of the Empire State
Building and would require Niagara Falls to cool it. With solid-state
circuitry, I don't see why a human-equivalent computer can't eventually
fit in the volume of a matchbox." That was in the days when the
IBM 7094 was the state of the art.

I wouldn't bet anything too valuable on that "ten years or less".

Also, bear in mind that a digital simulation of a biological
system, in all its splendid messiness, is **not** what the
Overcoming Bias/Bayes is the Ways/SL4/Accelerating Future
folks are hankering after -- they're still hooked on the hoary-old
GOFAI notion of some sort of theorem-proving calculator that can be
knocked out with fewer transistors yet will still be Superintelligent
and guaranteed Friendly (TM). **That** I'm not holding onto any hope for.

Michael Anissimov said...

What am I so clueless about? What technical points? :\

Michael Anissimov said...

yet will still be Superintelligent
and guaranteed Friendly (TM)


No one is making any guarantees here.

jimf said...

> No one is making any guarantees here.

Oh? I thought that was the current raison d'être of SIAI --
that since the development of AI **without** a literal guarantee
of Friendliness is a criminally irresponsible undertaking,
therefore SIAI, being the only outfit with the brains to
be able to figure out how to engineer such a guarantee,
was in business to 1) figure out how to guarantee
the, uh, guarantee and 2) get the product on the market first.

Thereby performing an end run around the Defense
Department, IBM, Bill Gates, Osama bin Laden, Larry
Ellison, and other such irresponsible powers-that-be.