Appeal or no appeal, human-level AI will eventually be created if it is technologically possible. Can you name a reason why it wouldn't be?
Of course, it is the extraordinary claim that demands the extraordinary evidence.
It is always a serious mistake for reasonable people to start trading "reasons" with techno-utopians on their own terms, as transhumanists are always trying to induce critics to do in the name of having what they call a "technical" discussion. To do so is to relinquish actual reality and enter the topsy-turvy virtual reality transhumanists inhabit, in which it is somehow "extraordinary" to deny that a Superintelligent Robot God is coming to End History, that human beings are going to be robotically or digitally immortalized, and that nanoscale robots are going to create a superabundance that overcomes the impasse of diverse stakeholder politics.
In the actual world, it is of course the transhumanists, the singularitarians, and the other techno-utopians who have to name the reasons why any of these beliefs of theirs make any kind of sense at all. And it is their job to make these reasons actually compelling.
Reasons that fail to account for the actually embodied nature of human consciousness, reasons that fail to account for the actual vulnerabilities of metabolism in demanding environments, reasons that fail to account for the actual impasse of diverse aspiration in a finite shared world that structurally tends to yield urgent conflicts between incumbent minorities and dynamic majorities are not likely to be compelling to those of us who are not already True Believers as they are. If the transhumanists want to be, or at any rate to appear, reasonable, I fear that it is they who have the explaining to do. And they certainly shouldn't expect me to make this easy for them. Nobody, not even the transhumanists themselves, would ultimately benefit from such a free ride, however unhappy it makes them to confront informed skepticism and disdain.
Something I wrote quite a few years ago, interestingly enough in response to the very same Michael Anissimov with whom I am sparring now, speaks to this quandary very directly:
“Permitted in principle by the laws of physics” is a larger set of propositions than “stuff that can be plausibly engineered” is a larger set of propositions than “stuff people actually want” is a larger set of propositions than “stuff people are willing to pay for” is a larger set of propositions than “things people still want in the longer-term that they wanted enough to pay for in the shorter-term.”
Glib corporate-futurists and other hype-notized technophiliacs are of course notoriously quick to pronounce outcomes “imminent” and “inevitable” (genetically-engineered immortality! nanotech abundance! uploading consciousness! superintelligent AI! bigger penises!), just because a survey of science at the moment implies to them that an outcome they especially desire or dread is “permitted in principle by the laws of physics.” But nested within that set like concentric rings on a tree-trunk are ever more restricted and more plausible sets, of which the target set at the center is the set of things people tend to still want enough over the longer term that they are satisfied to pay (or have paid) for them.
I think it is a good exercise, and sometimes a good penance, for technocentrics to take special care around their use of the word "inevitable" to describe outcomes that are radically different from states of affairs that obtain today.
My suspicion is that this is a word technophiles actually use more to signal the usual attitude of the faithful; namely, "I'm not interested in arguing with you anymore." Too often, “inevitable” is a word that signals an inability to chart an intelligible sequence of developmental stages that could plausibly delineate a path from where we are to whatever Superlative State is imagined to be likely and attractive. And by plausible, I mean both technically and politically plausible.
Part of what is interesting about this passage, in the context of the larger discussion of which it was a part, is that I seem to remember Michael claiming to find it reasonable in spirit, if not to the letter, and making lots of reassuring, reasonable noises to that effect at the time.
And yet, here he is again, making the usual techno-utopian mistake, with the usual techno-utopian certainty: "human-level AI will eventually be created if it is technologically possible." From here, no doubt, he believes (he has said it elsewhere if not here and now) that the logical inevitability of physically possible human-level AI indicates the equally logical inevitability of superhuman-level AI, which in turn indicates the equally logical inevitability of a history-shattering "Singularity" in which a Robot God metes out apocalyptic rewards and punishments to worthies and unworthies according to whether it is "Friendly" or not.
Needless to say, what looks like logical inevitability to even very bright well-meaning True Believers can all too easily equal batshit craziness if one's foundational assumptions or underlying motivations go too far awry too soon.
Let that be a lesson to us all.