Dale: I don’t think there is any way of talking about this sort of thing that Roko would regard as “fact-based,” “rigorous,” or not “name-calling.”
Roko: How do you know? You haven’t even tried yet!
Dale: You have literally just proved my point.
Roko: Ok, so I’ll ask again. Dale, please present me with an argument whose conclusion is “smarter than human AI is impossible” or “smarter than human AI is extremely unlikely to be developed within the next 100 years”.
I’ll give you an example of the sort of thing I am looking for:
1. The human brain is extremely complex
2. Software is really hard to write
3. Any smarter than human AI must be at least as complex as the human brain
4. Therefore, no-one will write software that clever in the next 100 years
5. Therefore, smarter than human AI is extremely unlikely to be developed within the next 100 years.
Dale: I have no doubt that this is what you are looking for, inasmuch as there is no version of this formulation that does not confirm the very prejudices at issue. The primary weight in your formulation is borne by the metaphorical usage of the word “smarter” in both 3 and 5 to describe what you take to be on offer when computer programs grow more complex. That’s not a "fact," that’s a figure. The argument relies on framing, not evidence. I disagree that there is anything in the facts to justify the analogy, and I believe all the other business about timelines and so on actually functions to distract True Believers from noticing what thin ice you are on when you try to extrapolate from this sort of development to a grand narrative eventuating in a post-biological superintelligence or the Robot God.
If I may anticipate your next objection, this concern of mine is not at all equivalent to declaring consciousness supernatural, but just my insisting that the actual materialization of human consciousness is non-negligible in ways superlative futurologists seem ill-disposed to take into account, a neglect that empowers most of their glib and deranging talk on this subject. A pile of gravel may be as complex as a skyscraper, but one is hardly indulging in mysticism to note that they are different, nor to express skepticism that even a growing gravel pile is sure to crystallize into a skyscraper in the fullness of time.
Roko: As a matter of fact, there is a way of talking that I would regard as fact-based rather than ad hominem… It’s very simple: write a comment whose last line is “therefore smarter than human AI is not possible or is extremely unlikely in the next 100 years”, and write a justification of that statement above it. Finally, delete from your comment any personal attacks on people who believe the negation of your conclusion.
So, I ask you for the fifth(?) time: will you present a logical argument supporting the conclusion that smarter than human AI is not possible or is extremely unlikely in the next 100 years?
Dale: Endlessly failing, and loudly, to get my point that we disagree on premises and at the level of definitions isn’t exactly the stirring demonstration of your superior rationality and scientificity that you seem to think it is.
Roko: What are the prejudices at issue? The use of the word 'intelligence' to describe certain computer programs?
Dale: In a word, yes. Given what computers are and given what intelligence is, it's actually problematic to glibly associate them. When in the 50s people referred to room-sized computers as "electronic brains," it was, you know, a metaphor. I think the metaphor was not an illuminating one.
Roko: I am not claiming that all complex computer programs are smart.
Dale: "Are"? Well, that's a mercy. After all, such a claim would be obviously crazy.
Roko: I am claiming that there is a significant chance that some human team of software engineers and AI researchers might create a particular computer program that has a greater intellectual ability than any human.
Dale: Despite the fact that the locution "there is a significant chance that" sounds superficially sciency (the stock in trade of Robot Cultism), there are, of course, no actual empirical instances from which you can possibly be determining these "odds." Your utterance is essentially an expression of faith, unless you mean it, you know, figuratively, as a kind of bad poetry or something.
Roko: You claim that smarter than human intelligence is a nonsensical concept, right? But reality doesn’t care about your confusion!
Dale: Moments like these are my favorites!
First of all, as an atheist I don't attribute consciousness to "reality" any more than I do to software, and so I don't think reality "cares" about what either of us is saying. But more to the point, it cracks me up that you are so cocksure that you have "reality" on your side in the first place.
Given that literally no actually-existing computer exhibits intelligence and given that literally every exhibition of intelligence in the world has always been organismically embodied, you'll forgive me if I don't concede your faithful utterances to the contrary of that reality the force of "reality."
Indeed, the only reality I can associate with your performance is the long line of AI-ideologues speaking with exactly your certainty about the imminent arrival of AI, every year on the year for literally decades, and never once being anything but endlessly absolutely wrong about everything.
Roko: I am (and so are the rest of the singularitarian community) erring on the side of caution: we are like insurance against Superintelligent AI actually making sense and posing a real threat. It might not… but it just might.
Dale: There is no "it" in the terms on which your discourse depends. Your frames are confused and hence confusing. The singularitarian "community" (note the glancing admission there that singularitarianism isn't an argument but a sub(cult)ure) is erring, but not in anything like a useful way. There are many problems, among them security threats, associated with complex software and dynamic coding, and there are many intelligent people working on these problems. Nobody needs a fandom of boys-with-toys who cannot distinguish science from science fiction to arrive on the scene and save us from ourselves. To my mind, the larger, more proximate threat by far is not so much the imminent arrival of the unfriendly Robot God, but the deranging discourse of singularitarianism itself, which sensationalizes and hyperbolizes technical questions in ways that make sensible deliberation far less easy, investing code with the utterly irrelevant cadences of hysterical apocalypse and wish-fulfillment fantasies of personal transcendence.
In a surprise move, the one and only Giulio Prisco made a brief appearance on the scene at this point to contribute the following comment:
Yes Dale, facts have this unpleasant habit of getting in the way of the serious business of hair-splitting and mental masturbation. I am sure that when you meet a superintelligent AI at the pub, assuming you ever visit such mundane places, you will argue his non-existence with him based on the same nebulous, empty pseudo-arguments that you use now. Perhaps he will disappear in a puff after your “proof” of his own non-existence, but somehow I doubt it.
Dale: I cannot distinguish this from the rantings of a lunatic. Yes, "facts," yes, I'll pull up a barstool and have a beer with the Robot God, and that'll show me. Mm-kay...
Prisco had more to say (follow the link if you like) but I will leave aside here the inevitable turn to the Wright Brothers that ensued, since this is a move I have lampooned incessantly here already. The long and the short of it: Giulio Prisco, you are not the Wright Brothers. Giulio Prisco, you are not Einstein. Giulio Prisco, you are somebody's crazy uncle building a perpetual motion machine in the garage out of milk bottles and popsicle sticks.
But Roko was apparently inspired by Giulio's comment to demand of me "an argument as to why superintelligent AI can’t exist! That’s what I’ve been asking him for, for the last 50 comments!"
Dale: I wonder if there are any logicians in the house who can name the fallacy "Roko" is indulging in through this gambit? Anybody who has had the misfortune of trying to have a conversation with a frothing True Believer in God or UFOs or the Hollow Earth or fairies or Nessie will know what I am talking about.
Roko: If you think I am a member of a “Cult”, then you are acting in an extremely unethical way if you don’t respond to my requests for a clear explanation of what exactly is wrong with the Cult’s doctrine.
Dale: Bwa-ha-ha-ha-ha! If ridiculing the ridiculous is wrong I don't wanna be right! What critique of a cult will count as "clear" to the cultist that doesn't concede the organizing assumptions of the cult? C'mon, "Roko," surely you can do better than this! We've arrived at diminishing returns, I fear (Giulio Prisco's arrival on the scene is a sure-fire signal of that, if nothing else). I've been as clear as your provocation warrants, and I'm satisfied my point is made. I leave it to the peanut gallery to make their own assessments from here on out. Best to all.