Sunday, March 16, 2014

Why So Many Transhumanists and Digital Utopians Are Incapable of Imagining There's No Heaven

Upgraded and adapted from an exchange in the Moot, in which "JimF" commented:
You know, I had (or attempted to have) [an] argument (or "civilized discussion" ;-> ) with a very smart person (a computer programmer I've known for decades) and it was impossible for me to get the subtle distinction to register. In his view, saying "the brain is not a computer" is tantamount to being a mystic, or a vitalist -- equivalent to claiming that life, and intelligence, must have a supernatural or non-material basis. I could **not** get past that impasse. So it is, I would guess, with many naive Transhumanists, Singularitarians, and AI enthusiasts. And that's true at the highest levels of what passes as the intelligentsia. I suspect you might see the same misunderstandings spun out if you were to witness a conversation between, say, Gerald M. Edelman and, oh, Daniel Dennett.
Mind has been metaphorized as a cloud, as a mirror, as the inscription of a tablet, as a complex of steam-pipes, and on and on and on. Setting that platitude aside, it is also true that many GOFAI ("Good Old-Fashioned Artificial Intelligence") dead-enders and techno-immortalists of the uploading sect (subcultures with more than a little overlap, I might add) seem to reject supernaturalism about mind only to replace it with a kind of techno-transcendental sooper-naturalism of robotic-AI, in which the mind-body dualism of spiritualism is re-erected as an information-meat dualism invested with very specifically false, facile hopes for techno-transcendence.

Of course, any consistent materialism about mind will necessarily treat seriously the materialization of actually-existing minds in biological bodies, and will recall that all information is non-negligibly and non-dispensably instantiated in actually-existing material carriers. The non-negligible, non-dispensable material substantiation of intelligence and mind forcefully argues against the pseudo-science and bad poetry of Robot Cultists who depend instead on inapt metaphors of "translation" and "uploading" and "transfer" to wish away these material realities, the better to indulge in infantile wish-fulfillment fantasies about invulnerability to error, contingency, disease, or mortality by techno-transcending the hated meat body as digital cyberangel avatars in Holodeck Heaven, and then peddling that priestly con-artistry and New Age woo as science.

Anyway, it makes no kind of sense to pretend materialism justifies dualism, but once that irrational leap has been made it becomes perfectly predictable that those for whom the denial of traditionally religious mind-body dualisms does justify robo-cultically religious info-meat dualisms will treat it as an entailment of their sooper-naturalist anti-supernaturalism that those who reject their enabling incoherence must somehow be common-or-garden-variety pro-supernaturalists. The "vitalist" charge is just a variation: for the robo-soopernaturalist, any materialism that is a barrier rather than a prop for their wanted techno-immortalizing informationalism about mind must be some kind of stealthy supernaturalist luddism. The point to grasp is that for the futurological faithful, one is either a believer in spirit stuff that might live on in Heaven or a believer in info-stuff that might be uploaded to Holodeck Heaven: actual, this-worldly, secular-progressive, technoscientifically-literate materialisms don't hold much interest for Robot Cultists, you know.

6 comments:

  1. It doesn't take much to see how divorced from reality the uploadies' beliefs are.

    If you suppose, as a thought experiment, that it's possible to build a super-duper-MRI device, a more-than-a-supercomputer device, and super-duper-algorithms, then once your neural architecture and activity are somehow fully scanned and modeled by the software, would you really say the voice coming out of the boxes is you after you're out of the tube? (And yes, I know that indulging in fantasies about Star Trek gizmos probably falls under "distracting the technoscience debate", but let's see if that wakes up one or another uploadie.)

  2. > [A]ny consistent materialism about mind will necessarily treat
    > seriously the materialization of actually-existing minds in
    > biological bodies, and will recall that all information is
    > non-negligibly and non-dispensably instantiated in actually-existing
    > material carriers. . .

    As I mentioned a year and a half ago, in a comment on

    http://amormundi.blogspot.com/2012/11/you-are-not-picture-of-you.html
    --------------------
    I was reading recently that there's a computational model
    of the internal combustion engine that apparently requires
    (or at least "deserves") the current biggest supercomputer
    in the world to run.

    I don't think putting the supercomputer in a car would
    actually propel it down the road, though. The car, that
    is. (Or the supercomputer, for that matter.)

    http://news.nationalgeographic.com/news/energy/2012/04/120430-titan-supercomputing-for-energy-efficiency/
    ====

    Maybe a computational model of an engine could propel a computational
    model of a car, on a computational model of a highway, in a computational
    model of a city. . . That's a lotta computational modelling! ;->

    One of the Gee Whiz! aspects of real computer science, though (and it
    **is** a cool idea), is the concept of the "virtual machine".

    This was a pretty esoteric notion for decades, though the original idea
    (in the guise of "microprogramming") goes back to the very beginnings
    of the digital computer -- to the 1940s in England:

    http://people.cs.clemson.edu/~mark/uprog.html
    --------------------
    In the late 1940s Maurice Wilkes of Cambridge University started work
    on a stored-program computer called the EDSAC. . .
    Wilkes recognized that the sequencing of control signals within the
    computer was similar to the sequencing actions required in a
    regular program and that he could use a stored program to represent
    the sequences of control signals. . . In 1951, he published the
    first paper on this technique, which he called microprogramming. . .

    In an expanded paper published in 1953, Wilkes and his colleague
    John Stringer further described the technique. . .

    The Cambridge University group, including William Renwick and David Wheeler,
    went on to implement and test the first microprogrammed computer in 1957. . .

    Due to the difficulty of manufacturing fast control stores in the
    1950s, microprogramming did not immediately become a mainstream technology.
    However, several computer projects did pursue Wilkes' ideas. . .

    John Fairclough at IBM's laboratory in Hursley, England. . .
    played a key role in IBM's decision to pursue a full range of compatible
    computers, which was announced in 1964 as the System/360. . .
    All but two of the initial 360 models (the high-end Models 75 and 91)
    were microprogrammed. . .
    ====

    It was a tremendous advantage to have the hardware details of
    a product line of software-compatible machines thus decoupled from
    the operating system(s) and the compiler(s) (with the latter being
    able to run on a wide range of otherwise very different devices).

  3. Also, the technique soon led to the idea of having a single hardware
    device that could instantiate multiple ("virtual") instruction sets
    (of current and "legacy" hardware):

    http://people.cs.clemson.edu/~mark/uprog.html
    --------------------
    IBM was spared mass defection of former customers when engineers
    on the System/360 Model 30 suggested using an extra control store
    that could be selected by a manual switch and would allow the
    Model 30 to execute IBM 1401 instructions. . . Stuart Tucker
    and Larry Moss led the effort to develop a combination of hardware,
    software, and microprograms to execute legacy software for not only
    the IBM 1401 computers but also for the IBM 7000 series. . .
    Moss felt their work went beyond mere imitation and equaled
    or excelled the original in performance;
    thus, he termed their work as **emulation**. . . The emulators they
    designed worked well enough so that many customers never converted
    legacy software and instead ran it for many years on System/360
    hardware using emulation.

    Because of the success of the IBM System/360 product line, by the
    late 1960s microprogramming became the implementation technique
    of choice for most computers except the very fastest and the
    very simplest. This situation lasted for about two decades.
    For example, all models of the IBM System/370 aside from the
    Model 195 and all models of the DEC PDP-11 aside from the
    PDP-11/20 were microprogrammed.

    At perhaps the peak of microprogramming's popularity, the [microprogrammed]
    DEC VAX 11/780 was delivered in 1978. . .
    ====

    In addition to microprogramming, IBM in the late 60s began exploring
    the possibility of computers that could "simulate" **themselves**
    at the virtual instruction-set level (though this required additional
    hardware support in the form of "dynamic address translation"
    [virtual memory]). Thus, we got "hypervisor"
    operating systems such as CP-67 and VM-370 -- very esoteric at the time!
    (though not really fast enough for practical use, and not often
    seen outside of research or university environments).

  4. In a reprise of the early history of "big iron", the path of the
    so-called "microcomputer" has followed much the same trajectory.

    First we got the "microprogrammed" processors:

    http://people.cs.clemson.edu/~mark/uprog.html
    --------------------
    Several early microprocessors were hardwired, but some amount of
    microprogramming soon became a common control unit design feature.
    For example, among the major eight-bit microprocessors produced
    in the 1974 to 1976 time frame, the MC6800 was hardwired while
    the Intel 8080 and Zilog Z80 were microprogrammed. . . An interesting
    comparison between 1978-era 16-bit microprocessors is the
    hardwired Z8000. . . and the microcoded Intel 8086. . .
    In 1978 the microprogramming of the Motorola 68000 was described. . .
    This design contained a sophisticated two-level scheme [a "microprogram"
    level and a "nanoprogram" level]. . .
    ====

    And modern Intel processors are still "microcoded" (though in general,
    that's something the end-user needn't ever be aware of):
    http://en.wikipedia.org/wiki/P6_%28microarchitecture%29
    http://en.wikipedia.org/wiki/NetBurst_%28microarchitecture%29
    http://en.wikipedia.org/wiki/Intel_Core_%28microarchitecture%29


    And in more recent times, we've got a plethora of virtual machines,
    from hobbyist simulators that run classic hardware on PCs (e.g., SimH for
    old DEC machines; Hercules for the IBM 360 and its successors;
    simulators for Macs of various generations on PC, PC on Mac, and
    old '70s- and '80s-era micros or game consoles on modern PCs)
    all the way to commercial VMware (or Microsoft Hyper-V, or whatever)
    hypervisors that transform a single large server in a datacenter
    into a bunch of virtualized PCs for the desktop.
    (And IBM, after years of trying to kill it off, now
    sells VM-370 -- it's now called z/VM -- as the most popular way
    to run multiple instances of Linux on a single IBM mainframe.)

    So yeah, in the computer world, this idea of the "functional independence"
    of a "virtual machine" from its hardware instantiation really works, and
    it's really cool (there's a toy illustration at the end of this comment).
    It's not hard to see the attraction (especially
    if you view the human mind as **already** running on a computer of
    some sort) of the science-fictional notion that a mind (or a world!)
    could be similarly independent of its "substrate". (Greg Egan, among
    many, many others, has explored this idea in "Wang's Carpets" and many
    other stories and novels.)

    In a way, the very coolness of the idea (in the computer-science world,
    **and** in SF) gets in the way of rational contemplation of its
    applicability in the realm of "real world" AI.
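
    Here's the promised sketch of "functional independence", again in C
    and again with everything invented for illustration: a two-instruction
    stack-machine "specification" and the same guest program run through
    two deliberately different host implementations (a switch-based
    interpreter and a function-pointer table). The guest's behavior is
    fixed by the specification, not by the machinery underneath it:

        #include <stdio.h>

        /* An invented two-instruction (plus halt) stack-machine spec. */
        enum { VM_PUSH = 0, VM_ADD = 1, VM_HALT = 2 };

        /* Host implementation #1: a switch-based interpreter. */
        static int run_switch(const int *p)
        {
            int stack[16], sp = 0;
            for (;;) {
                switch (*p++) {
                case VM_PUSH: stack[sp++] = *p++;                break;
                case VM_ADD:  sp--; stack[sp - 1] += stack[sp];  break;
                case VM_HALT: return stack[sp - 1];
                }
            }
        }

        /* Host implementation #2: same spec, different machinery --
           a table of handler functions instead of a switch. */
        typedef struct {
            int stack[16], sp, done, result;
            const int *p;
        } vm_t;

        static void op_push(vm_t *v) { v->stack[v->sp++] = *v->p++; }
        static void op_add (vm_t *v) { v->sp--; v->stack[v->sp - 1] += v->stack[v->sp]; }
        static void op_halt(vm_t *v) { v->done = 1; v->result = v->stack[v->sp - 1]; }

        static int run_table(const int *p)
        {
            static void (*const ops[])(vm_t *) = { op_push, op_add, op_halt };
            vm_t v = { .sp = 0, .done = 0, .p = p };
            while (!v.done)
                ops[*v.p++](&v);
            return v.result;
        }

        int main(void)
        {
            const int program[] = { VM_PUSH, 20, VM_PUSH, 22, VM_ADD, VM_HALT };
            /* Two very different "substrates", one observable behavior. */
            printf("switch host: %d\n", run_switch(program));   /* 42 */
            printf("table  host: %d\n", run_table(program));    /* 42 */
            return 0;
        }

    The independence is real -- but notice that it holds only because both
    hosts were engineered, at some expense, to satisfy the same explicit
    specification. Whether brains and minds come with anything like such a
    specification for free is, of course, exactly what's at issue.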

  5. Send this to Gary Marcus of the NY Times, who "smells dualism" when told that the brain is not a computer (or that the analogy imparts no useful information or explanation about either the brain or computers, and was edgy, if it ever was, back in the nineties). Or, as I said on Twitter, crying "dualism" when told the brain is not a computer is the last refuge of the argument-less.

  6. Hey, Athena, your tweets were the prompt for this post and discussion -- keep up the good work.
