"Roko" serves up the usual premature dismissal:
If you have a fact-based argument as to why smarter than human AI is not possible then please tell me.
Just what assumptions and frames are embedded in your notion of "smarter" here, and are the implications of those assumptions matters of fact? Are differences arising from these assumptions open to adjudication on the basis of what you consider to be facts?
People who have trouble distinguishing science fiction from science should be less cocksure that they always have the facts on their side, and that their skeptics are always ignorant or irrational.
Is "smartness" a matter of instrumental or formulaic calculation? Are sensitivity, imagination, improvisation, criticism, and expressivity dimensions contained in your notion of "smarter than human AI"?
Does it matter or not to your visions of post-biological smartness that intelligence has only ever been materialized in brains? Does it matter that performances of intelligence are always social, and that on some construals collaboration is already a form of greater-than-personal intelligence?
If not, why not? At what point is the trait you claim to be so palpably possible sufficiently remote from the actual phenomena denoted by the term "intelligence" that you might properly be compelled (by the demands of sense, I mean) to find some other word for what you are talking about?
What are the stakes of your attribution of "possibility" to the "arrival" of this smartness, whatever you happen to mean by it? Is it logical possibility? Is it theoretical possibility, however well-substantiated or not, however remote or not? Is it proximate practical possibility capable of attracting investment capital or demanding immediate regulation?
Do these distinctions figure at all in your determination of whether or not this question of engineering "smarter than human AI" is worthy of serious consideration?
If not, why not? Wouldn't these sorts of distinctions figure in most practical considerations of the kind you seem to think you are engaging in?
If you want to sell what looks to me like a faith-based initiative concerning the arrival of post-biological "superintelligence," you'll discover that the skeptics you want to persuade don't have to meet your terms; you have to meet ours. It's the extraordinary claim that demands the extraordinary substantiation.
Your personal challenge to me is finally irrelevant, of course, since the challenge of scientific consensus is the one that confronts your claim, and so far you have failed to attract that consensus. You may be able to find a cul-de-sac in which your claim passes muster for a marginal minority (that's the whole point of joining a Robot Cult, presumably), and you are surely able to best me, or at any rate bamboozle me, in some exchange on some technical matter whose proper significance I have neither the training nor the temperament to address, but all that is neither here nor there.
I pose my own challenges to you on the terms I am fit for, and those terms are relevant even if they are not the only relevant ones in a question like this, and even if you choose to demote them as not "fact-based" and hence, apparently, unworthy of consideration. You'll discover that you live in a world with sufficiently many people who differ with you on the question of which concerns are worthy of consideration that dismissals only ensure that you are dismissed. That, too, after all, is a fact.