Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, August 21, 2014

Scattered Speculations on a Twitterscrum with Robot Cultists Andre@ and Rachel Haywire

I have always found it curious the way, in the give and take of a realtime argument, techno-transcendentalists cheerleading about mind-uploading info-soul immortalization schemes and about coding history-ending superintelligent Robot Gods and the like, cannot help but crow whenever we arrive at the inevitable point in the argument when I point out that I am not a scientist. For a fairly representative example of the phenomenon, from a twitter spat I found myself in with a Robot Cultist last night, observe:
I am, of course, a philosopher and rhetorician by training, and never ever pretend otherwise in the least. Since so much of the force of futurological discourses depends on their recourse to metaphors, hyperbolizations, reframing commonplaces as novelties, naturalizations of contested terms, distractions from rather than solutions to conspicuous problems, consoling subcultural signaling and appeals to identity, and so on, it has always seemed to me that my training provides a useful critical perspective rather than a disqualification. The cackling delight with which my status as a non-scientist gets trotted out by the futurological faithful who declare me incapable of engaging in the relevant "technical" specifications endorsing their triumphalism is all the more bizarre given how many of them are no more practicing scientists than I am. Time and time again, a query for their science degree, current lab, or published papers goes unanswered or reveals I am in the presence of yet another coder fancying themselves an honorary biologist, plasma physicist, civil engineer, and political economist as a result. Last night's interlocutor was just such a coder. From my Futurological Brickbats: LXX. I know enough to know I don't know enough to be a scientific authority, while futurologists know enough to know that most people don't know enough to know the difference when they pretend to be scientific authorities.

Beyond this deceptive and also probably self-deceptive gambit, I also have to say that there is something that feels to me not only pseudo-scientific but actively anti-scientific in the Wall of Words partisans of cryonics and uploading and drextechian nano-abundance and the rest like to fling up in the name of "the technical discussion" to silence criticisms of conceptual and otherwise rhetorical sleights of hand on which their rationalizations tend ultimately to depend. Confronted with a critic who exposes the fairly conspicuous religiosity of their fervent assertions about the techno-transcendental arrival at immortality as info-spirit-selves in Holodeck Heaven under the ministrations of a post-biological post-parental superintelligent Robot God and with omni-competent nano or femto matter-mulching Anything-for-Nothing machines at their every whim's disposal, these faith-based futurologists like to retreat as quickly as possible to the prosaic. Cryonicists start lecturing you about the harmless revival of the drowned and of organs cryopreserved for transatlantic treks to surgery, nano-cornucopiasts handwave about the productive factory floor of the molecule, SENS longevists blather on about the new car smell of a century old roadster repaired and maintained by a loving hobbyist, AI-deadenders keep winning Chess and Jeopardy with glorified abacuses with database access, and on and on and on.

Of course, quite a lot of the science and technique these futurologists are drawing on argumentatively is perfectly well warranted as far as that goes. As a matter of fact, my impression is that most of the science the priestly experts of the Robot Cult archipelago lean on amounts to fairly undergraduate tech talk, sound as far as it goes but never particularly advanced. And their preferences in the matter of the "advanced" tend to incline more in the direction of the Aquarian, I find; their cutting edge looks to be rather, er, cosmic.

Let us delve deeper into an aria offered up by my interlocutor last night. First, read through the twitter scroll, and then my reading and response will follow. (I am fairly confident, by the way, that "Andre@" would regard this very sequence as their strongest, most triumphant portion of the debate. This selection is not offered up in an effort to ridicule through expurgated editorial shenanigans on my part, and I do hope none of the directly interested parties would perceive otherwise. The tweets are clickable, and fuller reconstructions of what was a much longer and ramifying twitterscrum should be possible for the diligent): Got that? You will notice that my "strong claim" is the suggestion that, given all the questions we have about the relations of brain processes to the phenomena we describe as "intelligence" and "mind," modesty may be more warranted than declarations of certainty that software minds indistinguishable from human minds are obviously possible and that immortalizing uploads of info-selves are no less obviously on the horizon. I am someone who celebrates science as much as the next geek, but I do think our discoveries raise more and more questions rather than providing rationalizations for faith in wish-fulfillment fantasies. Notice, I am explicitly materialist in these exchanges in a way that leads me to think it probably actually matters that what we mean by minds in the real world has always been specifically materialized in biological brains and social formations and to think we should qualify, to say the least, expectations that non-biological non-social materializations will be "indistinguishable" from human minds or even intelligibly described as "minds" at all. I am not the one blathering on about superintelligent AIs, info-souls, cyberangel avatars and so on. But presumably I am the one indulging in "bullshit argument by assertion"? Presumably I am the one "desperately grasping at magic pixie dust"?

I am far from denying the warranted assertions my interlocutor breathlessly exhaled in the Wall of Words made to loom before me last night, tweet by tweet, block by block. Indeed, most of the science scribbled on the Wall is well-worn enough that for all I know it was being read off the promotional descriptions on the back of a set of Cosmos blu-rays (which I own myself, by the way, despiser of science that I am). As I have said, futurologists tend to retreat in such moments to fairly undergraduate science in performing their technical preening acts. The rhetorician in me cannot help but notice that the argumentative force of the tirade does indeed derive in important part from the illustrative scenery painting of figures -- "supervene" in the first one, "fix[ation]" in the next, "computab[ility]" in the next, "extrem[ity]" in the next, and so on. The definition of materialist in the first post is idiosyncratic in the extreme, and hardly dispositive. Brazening it out nonetheless is something a rhetorician can appreciate as commonplace, needless to say. However warranted the string of observations following, there is nothing in what we are well warranted to believe we know in them to warrant the further declarations that "behavior... *is fixed by known physics* -- there is *nothing* [emphasis in the original, but I would add it if it weren't there --d] mysterious or unknown about the behavior" or that our knowledge as it is renders assertions about mind-uploads "perfectly [emphasis added --d] justified" or that "[t]he unknowns in physics are all [emphasis added --d] under extreme conditions" (famous last words) or that "[t]he only [emphasis added --d] thing that matters under the conditions that occur in the brain is ordinary" as we conceive it, and so on. The criteria on the basis of which we select as warranted the beliefs that would yield prediction and control are always defeasible and never provide grounds for the unqualified superlatives of "only" "all" "nothing" "perfect" that freight the discourse of the faithful far more than the scientific.

One of the reasons that vanishingly few actually qualified, actually practicing scientists in the actually relevant fields associated with the confident super-predicated assertions of futurologists will have anything whatsoever to do with these superlative futurologists is that their robocultic tech talk is too rudimentary to be of much interest to scientists, while the spirited projections where all the robocultic action is are far too wild and woolly and unwarranted for them to take seriously. Contrary to the insistence of cryonicists and mind-uploaders who decry the corpse-coddling "deathism" and "sheeple" timidity of those who dare not Challenge! Death! (those who, you know, recognize the fact that all humans are mortal and that death denialism may yield an irrational death in life but will not render the spellcaster immortal in fact), the reason biologists and gerontologists and lab techs administering diagnostic brain scans aren't in the futurological megachurch pews is that there simply is a whole hell of a lot of distance between where we are and where we would have to be to begin even to contemplate modest variations on superlative futurological aspirations.

Again, of course it is true that there are enormously interesting problems and possibilities for better sensors and materials in biochemistry; and of course it is true that there are ferocious hopes and fraught hurdles for better therapies in brain diagnostic media and organ cryopreservation and gene therapies; and of course it is true that planetary digitally networked data framing, surveillance, marketing, and finance introduce extraordinary dangers of error and attack and crucial demands for accountability and user-friendliness for software designers, and so on. Although Robot Cultists retreat to this register to ground their wish-fulfillment fantasies in something like an everyday "reality effect," it is crucial to recognize that no futurologist qua futurologist has ever made a problem-solving contribution at this level of technicality (it could happen accidentally or incidentally, I suppose).

The substance of futurology consists in its reframing of such problems and accomplishments as stepping stones along a path to super-predicated capacities providing personal transcendence. This, in turn, is simply a reductio ad absurdum or amplification into the cadences of outright religiosity of the already prevalent deceptions and hyperbole of advertizing norms and forms as well as the ideology promulgated by self-esteem pop psychology for the consumer masses and management seminars for the actual and aspirational venture capital/"creative" class minority. Age Defying Skin Kreme! Find Your Inner Winner! Grasping the nature and consequences of these formations depends far less than you might expect on technical debates over the scientific claims on which Robot Cultists pin their hopes (especially since futurologists will tend to retreat to the warranted in such debates, disavowing the hyperbolizations which really substantiate their distinctive claims, making these discussions exactly as relevant and decisive as technical debates among monks over angels cavorting on pinheads) and benefits far more than you might expect from the expertise of literary and cultural critics and ethnographers who are more familiar with the actual dynamisms playing out in futurological discourses and sub(cult)ures.

It is not a scientific but an altogether rhetorical production to try to create the efficacious impression that it is not the one who affirms the warranted in a qualified and contextual way who supports the scientific but instead the one who leaps from the warranted into the superlative who so supports it. To declare modesty assertive, and the refusal of wish-fulfillment a belief in magic requires something of a bravura rhetorical operation, reminding one not least of the dynamics of the Big Lie. Needless to say, it is the one who makes the extraordinary claim who is required to provide extraordinary evidence in support of it. But beyond this, it is not the one who indulges in the superlative rather than the warranted who gets to determine what claims actually are the extraordinary ones and what evidence is extraordinary enough to support them. It is not for Robot Cultists to tell me that their marginal and unqualified assertions are the ordinary ones and that the burden of proof for the support of qualified, contextualized, modest warranted assertibility falls on me because mine is the extraordinary position, that my skepticism of their magic is the magical thinking. Cultists ALWAYS seem to think their articles of faith are commonsensical and undeniable. This sort of facile abuse is hardly unprecedented.

Quite a few Robot Cultists are crowing (although some are doing so in an ironic way meant to cover all their bases in case the verdict changes) about how I "lost" the battle with my robocultic interlocutor last night. I cannot say I know exactly what "winning" or "losing" such an exchange would actually mean. Certainly nothing particularly unexpected happened for someone who has engaged in too many exchanges of this sort over the years to count them. The debate such as it was seemed to me interestingly representative, and worthy, as you see, of a closer reading. In such matters I suppose that winning and whining can become rather hard to distinguish sometimes.

13 comments:

jimf said...

> I also have to say that there is something that feels to me
> not only pseudo-scientific but actively anti-scientific in the
> Wall of Words with which partisans of cryonics and uploading
> and drextechian nano-abundance and the rest like to fling up
> in the name of "the technical discussion". . .

http://rationalwiki.org/wiki/Transhumanism#The_woo_of_transhumanism
-----------------
The woo of transhumanism

Scientific criticisms

Sadly, a lot of the underpinnings of transhumanism are based
on a sort of blind-men-at-the-elephant thinking—people assuming
that because it can be imagined, it must be possible. Transhumanism
is particularly associated with figures in computer science,
which is a field that is in some ways more math and art than a
true experimental science; as a result, a great many transhumanists
tend to conflate technological advancement with scientific advancement;
though these two things are intimately related, they are separate
things. In fact, though transhumanists strenuously deny it, a
great number of their arguments are strongly faith-based — they
assume because there are no known barriers to their pet development,
that it's inevitably going to happen. Seldom is the issue of
unknowns — known or otherwise — factored into the predictions. . .

The example of the singularity is instructive. . .
[S]ingularitarians hit the wall when confronted with the realities
of brain development research — though a true AI may in fact be possible,
there simply is not enough known about the brain to understand
its functions to the degree necessary to create a workable emulation,
meaning a prediction of such a creation is meaningless at best,
dishonest at worst. . .

No science necessary

Worst of all, some transhumanists outright ignore what people
in the fields they're interested in tell them; a few AI boosters,
for example, believe that neurobiology is an outdated science
because AI researchers can do it themselves anyway. They seem
to have taken the analogy used to introduce the computational
theory of mind, "the mind (or brain) is like a computer."
Of course, the mind/brain is not a computer in the usual sense.
Debates with such people can take on the wearying feel of a
debate with a creationist or climate change denialist, as such
people will stick to their positions no matter what. Indeed,
many critics are simply dismissed as Luddites or woolly-headed
romantics who oppose scientific and technological progress.
=====

jimf said...

> The only thing that matters under the conditions that
> occur in the brain is ordinary electromagnetic interactions.
> The next leading contribution would be the weak force.
> People actually do [model] that [on computers] - google
> electroweak quantum chemistry. It leads to corrections
> *twenty decimal places out* in binding energies for
> L vs. D enantiomers. That's perfectly computable too,
> but if the working of the brain depended on anything that
> small, the noise at 298 K would be incompatible with functioning.
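
(A quick back-of-the-envelope sketch of the scale being claimed there -- taking
Boltzmann's constant and a ballpark covalent bond energy of a few eV as the only
inputs, the bond energy being my own assumption -- shows just how far below the
thermal noise floor a "twenty decimal places out" correction sits:)

---------------
# python: kT at 298 K vs. a binding-energy correction twenty decimal places out
K_B = 1.380649e-23          # Boltzmann constant, J/K
EV = 1.602176634e-19        # one electron-volt in joules

kT_eV = K_B * 298.0 / EV    # thermal energy at 298 K, ~0.026 eV

bond_eV = 4.0                      # ballpark covalent bond energy (assumption)
correction_eV = bond_eV * 1e-20    # a correction "twenty decimal places out"

print(f"kT at 298 K : {kT_eV:.3e} eV")
print(f"correction  : {correction_eV:.3e} eV")
print(f"ratio       : {kT_eV / correction_eV:.1e}")
# Thermal noise exceeds the correction by roughly eighteen orders of magnitude,
# which is the point being made: effects that small are irrelevant to a warm brain.
====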

And yet, it's astonishing how big an abacus is required
for (usefully) digitally modelling even systems which,
I imagine everybody would concede, are
rather simpler than brains (human, rat, squid, honeybee,
or nematode worm). Like, say, automobile engines:

http://news.nationalgeographic.com/news/energy/2012/04/120430-titan-supercomputing-for-energy-efficiency/
---------------
[As of 2012, t]he problem of improving upon the 150-year-old internal
combustion engine is so complex that the scientists who work on
it are eager for a major development in the supercomputing world
to occur later this year. The U.S. Department of Energy's Oak Ridge
National Laboratory (ORNL) in Tennessee is set to deploy a massive
upgrade to Jaguar, the nation's fastest supercomputer and
Number 3 in the world. The new system, called Titan, is expected
to work at twice the speed of the machine that is currently the
fastest supercomputer in the world, Japan's K computer. . .
====

And note that that's a computational model specifically designed to provide
information about how to design **actual** engines to be
more fuel-efficient (or whatever) -- nobody expects to be
able to put a Titan into a Subaru and drive it around town.

So "exascale" computing will be able to tackle the problem of
simulating a human brain, you may say (even though nobody quite knows
what a complete simulation would have to entail). Well, hold
onto your hats, global warming fans:

http://www.theregister.co.uk/2012/07/11/doe_fastforward_amd_whamcloud/
---------------
The U[ltra]H[igh]P[erformance]C[omputing]
program was announced in March 2010 with the goal of
creating an HPC system that by 2018 can do 50 gigaflops
per watt (BlueGene/Q, the current top performer and
most efficient super in the world, can do a little more
than 2 gigaflops per watt) and pack 10 petabytes of
storage and do around 3 petaflops of number crunching. . .
within a 57 kilowatt power budget.

Building an exascale system would seem easier, by comparison,
since there is, in theory, no limit on the size of the machine
or its power budget. But in reality, there are big-time power
limits on exascale supers because no one is going to build
a 20 megawatt nuclear or coal power station to keep one fed
and cooled. . .

On a current petaflops-class system today, it costs somewhere
between $5m and $10m to power and cool the machine today, and
extrapolating to an exascale machine using current technology,
even with efficiency improvements, you would be in for
$2.5bn a year just to power an exascale beast and you would
need something on the order of 1,000 megawatts to power it up.
That's 50 nuclear reactors, more or less. The DOE has set a
target of a top juice consumption at 20 megawatts for an
exascale system. . .
====
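
(For what it's worth, the power arithmetic behind those figures is straightforward --
a sketch, with the flops-per-watt numbers taken from the article above and nothing
else assumed:)

---------------
# python: exascale power budget, back of the envelope
EXAFLOP = 1e18                    # operations per second

bluegene_q = 2e9                  # ~2 gigaflops per watt (quoted above)
doe_budget_watts = 20e6           # DOE's 20 megawatt target (quoted above)

naive_watts = EXAFLOP / bluegene_q
print(f"exaflop at ~2 Gflops/W     : {naive_watts / 1e6:.0f} MW")   # ~500 MW

needed = EXAFLOP / doe_budget_watts
print(f"efficiency needed for 20 MW: {needed / 1e9:.0f} Gflops/W")  # ~50 Gflops/W
# i.e. the 50-gigaflops-per-watt UHPC goal is just the 20 MW cap divided out --
# roughly a twenty-five-fold jump over the most efficient machine of the day.
====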

Well, let's all ask Santa Claus for a graphene transistor
this year.

Athena Andreadis said...

Twitter is not the place to have real scientific conversations. However, from these micro-blatherings I conclude that neither Vulnerata nor Haywire is acquainted with real biology. Furthermore, I suspect that both resort to the gambit of "If you don't believe the brain is a computer, u r a dualist!"

Dale Carrico said...

Quite so -- on both counts.

jimf said...

Chemists and physicists who prefer to mess
about with the "real world" disparage what the
simulation folks do as "type and hype", whereas the latter
disparage the former as "shake and bake".

(via
http://amormundi.blogspot.com/2012/11/you-are-not-picture-of-you.html )

(We have all been here before. ;-> )

Chad Lott said...

Pigs like to be wrassled with.

jimf said...

> Chad Lott said...
>
> Pigs like to be wrassled with.

Of course, that's an equal opportunity insult.
;->


http://tech.groups.yahoo.com/group/New_Cryonet/message/2966
----------------
Re: [New_Cryonet] Amor Mundi discussion
Posted By: [Eugen Leitl]
Fri Aug 31, 2012

On Thu, Aug 30, 2012 at 06:20:39PM -0700, Max More wrote:

> Yes, I know, I know. I shouldn't have bothered to comment on Carrico's
> crap. He's consigned me to his imaginary "Robot Cult" (a phrase he repeats
> over and over and over again like... a robot) and won't actually ever
> engage in productive discussion. Anyone who has read even a few pieces by
> him knows what a nasty piece of work he is. As a professional rhetorician,
> he's much more interested in looking clever and putting down his opponents
> than seeking truth.

It's useless to wrestle with a pig. You both get dirty, and the pig enjoys it.
====

(via
http://amormundi.blogspot.com/2012/09/if-youre-robot-cultist-there-is-no-such.html )

jimf said...

> Athena Andreadis said...
>
> Twitter is not the place to have real scientific conversations.
> However, from these micro-blatherings I conclude that neither
> Vulnerata nor Haywire are acquainted with real biology. Furthermore,
> I suspect that both resort to the gambit of "If you don't believe
> the brain is a computer, u r a dualist!"

Who, BTW, is this Andre@ (@puellavulnerata)?

http://charon.persephoneslair.org/~andrea/
------------------
I'm a software developer for the Tor Project.

I've written mpkg, a minimalist package manager for *nix systems.

I have a version of the Cyclades PC-300 T1 card driver patched
to run on sparc64 kernels.

"Gothnix
Nice boot. Wanna fsck?"
====

A coder (surprise, surprise). Well, OK,
that's fine. (So'm I. Not as distinguished,
though. ;-> ).

http://www.antipope.org/charlie/blog-static/2011/09/i-singularity.html
------------------
Andrea Shepard
September 9, 2011
168:

Where does this belief that uploading is about isolating some sort
of abstract, ill-defined 'mind' from the meat arise? It's about
emulating the brain and whichever other parts of the meat prove essential
to its function in software. I don't intend on being disembodied or
unemotional or whatever other reason vs. passion (or is that an
ill-fitting disguise for masculine vs. feminine?) false dichotomy
someone projects onto the idea; I intend to experience an improved,
engineered body, whether as a physical object or an entity in a
virtual world, not subject to decay or disease or any of the flaws
that go along with meat, and I most certainly intend on retaining
all the emotions I have now.

This essay seems to be regarding uploading as a sort of continuation of
the rather hoary trope of the emotionless being (Spock, Data, and so on...). . .
====

Indeed, there are a lot of assumptions lurking behind
various SFnal and transhumanist fantasies about AI that
can usefully be explored. It's also worth noting that
the earliest speculations about AI (and the field of "cognitive
psychology" that flourished in the wake of behaviorism's
demise) hinged very much on hopes of being able to "isolate...
some sort of abstract... 'mind' from the [brain]".

Loc. cit.
------------------
Consider an atom by atom simulation of a whole human body
and brain - either it produces the same sort of behavior as
a physical human, or you're postulating that atoms in a
human body follow different physics than atoms outside it,
which amounts to vitalism or interaction dualism. . .

In the end, if you accept that such an atom by atom simulation would
actually be conscious, then we are no longer arguing about whether
uploading is possible, merely about how difficult it is.
====

That atom-by-atom simulation of a whole human body is presumably
interacting with an atom-by-atom simulation of a whole external
universe (including a few billion other atom-by-atom simulations
of people and other living things). Wow.

jimf said...

> Who, BTW, is this Andre@. . .
> A coder. . . So'm I. Not as distinguished,
> though. ;->

Nor am I a "high-interest target" of the NSA!


http://www.infowars.com/tor-developer-suspects-nsa-interception-of-amazon-purchase/
--------------
Andrea Shepard, a Seattle-based core developer for the Tor Project,
suspects her recently ordered keyboard may have been intercepted
by the NSA. . .

Instead of shipping straight towards Seattle from the Amazon
storage warehouse in Santa Ana, California, Shepard’s package
made its way clear across the country to Dulles, Virginia. Jumping
around an area deep inside what some privacy experts refer to as
America’s “military and intelligence belt,” the package was
finally delivered to its new endpoint in Alexandria. . .

According to recently revealed internal NSA documents,
the agency’s Office of Tailored Access Operations group, or TAO,
is responsible for intercepting shipping deliveries of
high-interest targets. . .

Given the NSA’s deep interest in Tor, a popular online
anonymity tool, some speculate Shepard’s keyboard could
likely have been implanted with a TAO bug known as “SURLYSPAWN,”
a small keylogging chip often implanted in a keyboard’s
cable. According to NSA slides, a bugged keyboard can
be monitored even when a computer is offline. . .
====

Well golly, Batman!

Dale Carrico said...

I still think she's lost her way, captured by deranging computational figurations of ontology and consciousness, but on the NSA business I daresay she's probably on to something. Our "intelligence" services may have changed the name but they never gave up the dream of Total(itarian) Information Awareness.

Esebian said...

So, what are Miss Haywire or Vulnerata's scientific science degrees in science again?

Dale Carrico said...

Indeed.

Unknown said...

All Coked Up at "Dave and Busters" !

https://www.youtube.com/watch?v=L2gQoWXpWRw