Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, July 26, 2009

Futurological Brickbats

To care most about things that are merely not impossible is simply not sensible.

3 comments:

Jason said...

Alright, but what if those very improbable things carry the (remote) possibility of disastrous impacts?

It seems to me that it makes sense to look at this from something of a cost/benefit perspective. Yes, time would be invested in something that would probably not happen anyway, but disaster might also end up averted. Your "it's too unlikely to happen" thing is getting quite old.

I'm not advocating against attempts to halt climate change or other such nonsense. I'm just saying that it seems reasonable to put forth an appropriate amount of resources to prevent our extinction, however unlikely -- and to let the populace determine exactly what "likely" is. But first they need to be aware that there is a small threat of, say, AI showing up this century. I would argue that the public is disproportionately terrified of something extremely unlikely: an asteroid or comet impact. Resources and researchers' time are spent on that issue, yet you seem to completely ignore this one.

Dale Carrico said...

it makes sense to look at this from something of a cost/benefit perspective

Specify the "this" -- and you'll discover soon enough that if the "this" really is a matter usefully susceptible of stakeholder deliberation then the majority of people engaging in that discussion have no need of futurologists hyperbolizing the stakes, and that if the "this" really is a matter of blue-skying then it should be treated as an aesthetic matter not to be mistaken for actual science or actual policy in the first place. In a nutshell, nobody needs to join a Robot Cult to engage in actually sensible scientific research or technodevelopmental policy -- but anything beyond actually sensible scientific research and technodevelopmental policy marks the Robot Cult as a fandom sub(cult)ure, attractive or not according to a person's taste, but something that no more earns pretensions to representing science or policy than any other fundamentalism. When it comes to superlative futurology this either/or amounts to something close to an iron law.

Think about it. The "arrival" of an entitative post-biological superintelligent Robot God of the kind fetishized by superlative futurologists would be preceded by innumerable problems demanding decisions of technique and policy and regulation and education and on and on and on, not one of which is illuminated by looking at it here and now through the lens of would-be prophets claiming to speak for "the future." Rather, it is a process of invention, collaboration, contestation, and deliberation the substance of which constitutes the actually-existing rationality of which that "arrival" would truly consist, should "it" be possible, whatever "it" actually shapes up to be. Those who claim to skip all the steps are always just con-artists trying to sell you something.

In my view the dead enders of the GOFAI program who cling to one another among the Robot Cultists regularly deny or fail to grasp fundamental realities about the social exhibitions and biological incarnations of the "intelligence" about which they speak so glibly, which means you are jumping the gun when you demand we leap forward into calculating the likely arrival of the entity presumably premised on these incomprehensions.

The Robot Cultists aren't actually ready for prime time (which helps account for their enduring marginality from the consensus of scientists in the fields indispensable to their own preferred outcomes).

Certainly one doesn't overcome the basic problem of this initial incoherence by ratcheting up the dire stakes presumably involved in the prediction -- that's exactly like being unable to explain what you actually mean by saying "God exists" but trying to distract our attention from this basic incomprehension by saying when God returns he will thrust all non-believers into eternal hellfire so you better pray to him.

When you say it is "reasonable" to devote public monies to prevent human extinction at the hands of the Robot God (even if, oh so wheedlingly reasonably, you admit the chances may only be negligible that the Robot God apocalypse will come to fruition), you really mean that a handful of pseudo-scientific nutjobs in a Robot Cult who worship at the feet of embarrassing wannabe gurus like Eliezer Yudkowsky and Ray Kurzweil should be given tax money to subsidize their flabbergasting crackpottery. Thanks, but no thanks.

jimf said...

> [I]f the "this" really is a matter of blue-skying then it
> should be treated as an aesthetic matter not to be mistaken
> for actual science or actual policy. . . [A]nything beyond
> actually sensible scientific research and technodevelopmental
> policy marks the Robot Cult as a fandom sub(cult)ure,
> attractive or not according to a person's taste, but something
> that no more earns pretensions to representing science
> or policy than any other fundamentalism. . . Those who claim
> to skip all the steps are always just con-artists trying to
> sell you something. . . The Robot Cultists aren't actually
> ready for prime time (which helps account for their enduring
> marginality from the consensus of scientists in the fields
> indispensable to their own preferred outcomes).

Bibliography without comment:

"Wanted - academic discussions of mind uploading"
http://lists.extropy.org/pipermail/extropy-chat/2009-July/date.html

--------------------------------
I'm playing around with the idea of doing a paper on the hive mind
aspect of an uploaded society. . . for which I need to do a survey
of the previous academic discussions of mind uploading. . .
(Note that I'm specifically looking for hive minds that have developed
from *human uploads*. I'm not looking for AI hive minds, cyborg
hive minds or hive minds in general - I know that scifi has plenty
of *those*.)

Here are the academic uploading articles which I'm already aware of
(which might also be a handy reference for anyone else interested in
the topic).

Non-fiction about uploading in general:

Anders Sandberg & Nick Bostrom (2008): Whole Brain Emulation: A
Roadmap, Technical Report #2008-3. Future of Humanity Institute,
Oxford University. (An analysis of what is yet required for uploads.)

Robin Hanson (1994, 2008): If uploads come first - The crack of a
future dawn; Economics of the Singularity. (Hanson's classic must-read
papers on the economic consequences of uploads.)

Susan Schneider (2008): Future Minds: Transhumanism, Cognitive
Enhancement and the Nature of Persons. Neuroethics Publications.
(Critique of uploading on the grounds that an uploaded copy "would be
just a clone, not you", and seems to assume that this can just be
taken for granted. Groan.)

V. Astakhov (2008): Mind Uploading and Resurrection of Human
Consciousness. Place for Science? NeuroQuantology. (/Seems/ to discuss
some sort of theory for the actual upload process. I think. Not sure
if it's entirely serious, but at least I'm unable to follow it.)

Ray Kurzweil (2005): The Singularity is Near. (Briefly discusses the
possibility of uploading.)

Nick Bostrom (2004): The Future of Human Evolution. In Death and
Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing.
(The only paper I could find that actually discusses a hive-mind-like
scenario.)

Hans Moravec (2000): Robot: Mere Machine to Transcendent Mind.
(Mentions uploading in the form of a "cyberspace" that people will
move into.)

Robert Harle (2002): Cyborgs, Uploading and Immortality - Some Serious
Concerns. Sophia, Volume 41, Number 2. (Mainly attempts to debunk the
whole idea of uploading. Humorous for stating that "the most serious
problem for uploaders" is the fact that a brain cannot function
without a body, completely ignoring the possibility of *gasp* people
also simulating a body. Not very interesting.)

Hans Moravec (1988): Mind Children. (Has a brief description of an
upload process.)